75 FR 60415 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... computer systems and networks. This information collection is required to obtain the necessary data... card reflecting those benefits and privileges, and to maintain a centralized database of the eligible...
47 CFR 76.1700 - Records to be maintained by cable system operators.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false Records to be maintained by cable system operators. 76.1700 Section 76.1700 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... or part of the public inspection file may be maintained in a computer database, as long as a computer...
47 CFR 76.1700 - Records to be maintained by cable system operators.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Records to be maintained by cable system operators. 76.1700 Section 76.1700 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST... or part of the public inspection file may be maintained in a computer database, as long as a computer...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia D.; Harrawood, Brian; Singh, Swatee
2007-08-15
We have previously presented a knowledge-based computer-assisted detection (KB-CADe) system for the detection of mammographic masses. The system is designed to compare a query mammographic region with mammographic templates of known ground truth. The templates are stored in an adaptive knowledge database. Image similarity is assessed with information theoretic measures (e.g., mutual information) derived directly from the image histograms. A previous study suggested that the diagnostic performance of the system steadily improves as the knowledge database is initially enriched with more templates. However, as the database increases in size, an exhaustive comparison of the query case with each stored template becomes computationally burdensome. Furthermore, blind storing of new templates may result in redundancies that do not necessarily improve diagnostic performance. To address these concerns we investigated an entropy-based indexing scheme for improving the speed of analysis and for satisfying database storage restrictions without compromising the overall diagnostic performance of our KB-CADe system. The indexing scheme was evaluated on two different datasets as (i) a search mechanism to sort through the knowledge database, and (ii) a selection mechanism to build a smaller, concise knowledge database that is easier to maintain but still effective. There were two important findings in the study. First, entropy-based indexing is an effective strategy for quickly identifying a subset of templates that are most relevant to a given query. Only this subset could be analyzed in more detail using mutual information for optimized decision making regarding the query. Second, a selective entropy-based deposit strategy may be preferable, where only high entropy cases are maintained in the knowledge database. Overall, the proposed entropy-based indexing scheme was shown to reduce the computational cost of our KB-CADe system by 55% to 80% while maintaining the system's diagnostic performance.
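The entropy index described above lends itself to a compact illustration. The following Python sketch is hypothetical (function names and the subset size k are assumptions, not the authors' implementation): each template's histogram entropy serves as a cheap one-dimensional index, and only the entropy-nearest templates are ranked with the costlier mutual-information measure.

```python
import numpy as np

def histogram_entropy(image, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(img_a, img_b, bins=64):
    """Mutual information estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def query_knowledge_base(query, templates, k=50):
    """Entropy filter first, mutual information second: pre-select the k
    templates whose (ideally precomputed) entropy is closest to the query's,
    then rank only that subset by the expensive similarity measure."""
    q_ent = histogram_entropy(query)
    nearest = sorted(templates, key=lambda t: abs(histogram_entropy(t) - q_ent))[:k]
    return sorted(nearest, key=lambda t: mutual_information(query, t), reverse=True)
```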
SPIRES Tailored to a Special Library: A Mainframe Answer for a Small Online Catalog.
ERIC Educational Resources Information Center
Newton, Mary
1989-01-01
Describes the design and functions of a technical library database maintained on a mainframe computer and supported by the SPIRES database management system. The topics covered include record structures, vocabulary control, input procedures, searching features, time considerations, and cost effectiveness. (three references) (CLB)
Utilizing the Web in the Classroom: Linking Student Scientists with Professional Data.
ERIC Educational Resources Information Center
Seitz, Kristine; Leake, Devin
1999-01-01
Describes how information gathered from a computer database can be used as a springboard to scientific discovery. Specifies directions for studying the homeobox gene PAX-6 using GenBank, a database maintained by the National Center for Biotechnology Information (NCBI). Contains 16 references. (WRM)
Rhinoplasty perioperative database using a personal digital assistant.
Kotler, Howard S
2004-01-01
To construct a reliable, accurate, and easy-to-use handheld computer database that facilitates the point-of-care acquisition of perioperative text and image data specific to rhinoplasty. A user-modified database (Pendragon Forms [v.3.2]; Pendragon Software Corporation, Libertyville, Ill) and graphic image program (Tealpaint [v.4.87]; Tealpaint Software, San Rafael, Calif) were used to capture text and image data, respectively, on a Palm OS (v.4.11) handheld operating with 8 megabytes of memory. The handheld and desktop databases were kept secure using PDASecure (v.2.0) and GoldSecure (v.3.0) (Trust Digital LLC, Fairfax, Va). The handheld data were then uploaded to a desktop database of either FileMaker Pro 5.0 (v.1) (FileMaker Inc, Santa Clara, Calif) or Microsoft Access 2000 (Microsoft Corp, Redmond, Wash). Patient data were collected from 15 patients undergoing rhinoplasty in a private practice outpatient ambulatory setting. Data integrity was assessed after 6 months' disk and hard drive storage. The handheld database was able to facilitate data collection and accurately record, transfer, and reliably maintain perioperative rhinoplasty data. Query capability allowed rapid search using a multitude of keyword search terms specific to the operative maneuvers performed in rhinoplasty. Handheld computer technology provides a method of reliably recording and storing perioperative rhinoplasty information. The handheld computer facilitates the reliable and accurate storage and query of perioperative data, assisting the retrospective review of one's own results and enhancement of surgical skills.
ERIC Educational Resources Information Center
Johnson, Donald M.; Ferguson, James A.; Vokins, Nancy W.; Lester, Melissa L.
2000-01-01
Over 50% of faculty teaching undergraduate agriculture courses (n=58) required use of word processing, Internet, and electronic mail; less than 50% required spreadsheets, databases, graphics, or specialized software. They planned to maintain or increase required computer tasks in their courses. (SK)
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
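The tableau idea can be made concrete with a small, hypothetical Python sketch (names and thresholds are assumptions; Data Auditor additionally generates patterns containing wildcards): tuples are grouped by the values of a few chosen attributes, and each group is reported with its support and the fraction of its tuples satisfying the user-supplied constraint.

```python
from collections import defaultdict

def pattern_tableau(rows, pattern_attrs, predicate, min_support=10, min_confidence=0.9):
    """Summarize which subsets of a monitoring table satisfy a tuple-level
    constraint.

    rows          -- list of dicts (tuples of the table)
    pattern_attrs -- attributes whose value combinations define patterns
    predicate     -- boolean function expected to hold for every tuple
    """
    stats = defaultdict(lambda: [0, 0])   # pattern -> [covered, satisfying]
    for row in rows:
        pattern = tuple(row[a] for a in pattern_attrs)
        stats[pattern][0] += 1
        stats[pattern][1] += bool(predicate(row))
    return [
        {"pattern": dict(zip(pattern_attrs, p)), "support": n, "confidence": ok / n}
        for p, (n, ok) in stats.items()
        if n >= min_support and ok / n >= min_confidence
    ]

# Example: every router poll should report a non-negative uptime.
# tableau = pattern_tableau(polls, ["region", "vendor"], lambda r: r["uptime"] >= 0)
```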
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3 Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data among other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data is transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database is accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
MIPS: analysis and annotation of proteins from whole genomes in 2005
Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.
2006-01-01
The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de). PMID:16381839
A computational platform to maintain and migrate manual functional annotations for BioCyc databases.
Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A
2014-10-12
BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify annotation data imports of user provided data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain specific databases for metabolic engineering.
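The synonym-resolution step can be illustrated with a short, hypothetical Python sketch (the data layout and function names are assumptions, not CycTools internals): every name, synonym and alternate identifier exported from the database is indexed once, and an incoming identifier resolves only if it maps to exactly one internal frame ID.

```python
def build_synonym_index(frames):
    """Index every known name, synonym and alternate ID by internal frame ID.

    frames -- iterable of (frame_id, [names...]) pairs exported from the database
    """
    index = {}
    for frame_id, names in frames:
        for name in names:
            index.setdefault(name.lower(), set()).add(frame_id)
    return index

def resolve(identifier, index):
    """Return the unique internal ID for an external identifier, or None
    when the identifier is unknown or ambiguous (flag for curator review)."""
    matches = index.get(identifier.lower(), set())
    return next(iter(matches)) if len(matches) == 1 else None
```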
2017-01-01
[Report documentation page and table-of-contents residue; recoverable details: ARL-TR-7921, U.S. Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1138. Keywords: server database, structured query language, information objects, instructions, maintenance, cursor-on-target events, unattended ground sensors. Sections include: Computer and Software Development Tools Requirements; Database Maintenance.]
Program for Generating Graphs and Charts
NASA Technical Reports Server (NTRS)
Ackerson, C. T.
1986-01-01
Office Automation Pilot (OAP) Graphics Database system offers IBM personal computer user assistance in producing wide variety of graphs and charts and convenient database system, called chart base, for creating and maintaining data associated with graphs and charts. Thirteen different graphics packages available. Access to graphics capabilities obtained in similar manner. User chooses creation, revision, or chartbase-maintenance options from initial menu; enters or modifies data displayed on graphic chart. OAP graphics database system written in Microsoft PASCAL.
How to maintain blood supply during computer network breakdown: a manual backup system.
Zeiler, T; Slonka, J; Bürgi, H R; Kretschmer, V
2000-12-01
Electronic data management systems using computer network systems and client/server architecture are increasingly used in laboratories and transfusion services. Severe problems arise if there is no network access to the database server and critical functions are not available. We describe a manual backup system (MBS) developed to maintain the delivery of blood products to patients in a hospital transfusion service in case of a computer network breakdown. All data are kept on a central SQL database connected to peripheral workstations in a local area network (LAN). Request entry from wards is performed via machine-readable request forms containing self-adhesive specimen labels with barcodes for test tubes. Data entry occurs on-line by bidirectional automated systems or off-line manually. One of the workstations in the laboratory contains a second SQL database which is frequently and incrementally updated. This workstation is run as a stand-alone, read-only database if the central SQL database is not available. In case of a network breakdown, the time-graded MBS is launched. Patient data, requesting ward and ordered tests/requests are photocopied through a template from the request forms onto special MBS worksheets serving as the laboratory journal for manual processing and result reporting (a copy is left in the laboratory). As soon as the network is running again, the data from the off-line period are entered into the primary SQL server. The MBS was successfully used on several occasions. The documentation of a 90-min breakdown period is presented in detail. Additional work resulted from the copying and the belated manual data entry after restoration of the system. There was no delay in the issue of blood products or result reporting. The backup system described has proven simple, quick and safe for maintaining urgent blood supply and the distribution of laboratory results in case of an unexpected network breakdown.
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Al-Azzawi, Abeer Yassin; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is built on a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
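Among the listed measures, user-name and password authentication depends on never storing plaintext credentials. The Python sketch below uses primitives that postdate the article and is offered only as an assumed modern analogue of that step:

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Salted PBKDF2-SHA256 digest for storing collaborator credentials."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)
```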
ENFIN--A European network for integrative systems biology.
Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan
2009-11-01
Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases, and platforms to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap existing between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains internally close collaboration between experimental and computational research, enabling a permanent cycling of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and a novel platform for protein function analysis FuncNet.
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2014-12-01
Computational thermodynamics (CT) has now become an essential tool of petrologic and geochemical research. CT is the basis for the construction of phase diagrams, the application of geothermometers and geobarometers, the equilibrium speciation of solutions, the construction of pseudosections, calculations of mass transfer between minerals, melts and fluids, and it provides a means of estimating materials properties for the evaluation of constitutive relations in fluid dynamical simulations. The practical application of CT to Earth science problems requires data: data on the thermochemical properties and the equation of state of relevant materials, and data on the relative stability and partitioning of chemical elements between phases as a function of temperature and pressure. These data must be evaluated and synthesized into a self-consistent collection of theoretical models and model parameters that is colloquially known as a thermodynamic database. Quantitative outcomes derived from CT rely on the existence, maintenance and integrity of thermodynamic databases. Unfortunately, the community is reliant on too few such databases, developed by a small number of research groups, and mostly under circumstances where refinement and updates to the database lag behind or are unresponsive to need. Given the increasing level of reliance on CT calculations, what is required is a paradigm shift in the way thermodynamic databases are developed, maintained and disseminated. They must become community resources, with flexible and accessible software interfaces that permit easy modification, while at the same time maintaining theoretical integrity and fidelity to the underlying experimental observations. Advances in computational and data science give us the tools and resources to address this problem, allowing CT results to be obtained at the speed of thought, and permitting geochemical and petrological intuition to play a key role in model development and calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browne, S.V.; Green, S.C.; Moore, K.
1994-04-01
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
Maintaining Privacy in Pervasive Computing - Enabling Acceptance of Sensor-based Services
NASA Astrophysics Data System (ADS)
Soppera, A.; Burbridge, T.
During the 1980s, Mark Weiser [1] predicted a world in which computing was so pervasive that devices embedded in the environment could sense their relationship to us and to each other. These tiny ubiquitous devices would continually feed information from the physical world into the information world. Twenty years ago, this vision was the exclusive territory of academic computer scientists and science fiction writers. Today this subject has become of interest to business, government, and society. Governmental authorities exercise their power through the networked environment. Credit card databases maintain our credit history and decide whether we are allowed to rent a house or obtain a loan. Mobile telephones can locate us in real time so that we do not miss calls. Within another 10 years, all sorts of devices will be connected through the network. Our fridge, our food, together with our health information, may all be networked for the purpose of maintaining diet and well-being. The Internet will move from being an infrastructure to connect computers, to being an infrastructure to connect everything [2, 3].
NASA Technical Reports Server (NTRS)
Stutte, G. W.; Mackowiak, C. L.; Markwell, G. A.; Wheeler, R. M.; Sager, J. C.
1993-01-01
This KSC database is being made available to the scientific research community to facilitate the development of crop development models, to test monitoring and control strategies, and to identify environmental limitations in crop production systems. The KSC validated dataset consists of 17 parameters necessary to maintain bioregenerative life support functions: water purification, CO2 removal, O2 production, and biomass production. The data are available on disk as either a DATABASE SUBSET (one week of 5-minute data) or DATABASE SUMMARY (daily averages of parameters). Online access to the VALIDATED DATABASE will be made available to institutions with specific programmatic requirements. Availability and access to the KSC validated database are subject to approval and limitations implicit in KSC computer security policies.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems are common to both ships and submarines.
Using Spreadsheets to Teach Statistics in Geography.
ERIC Educational Resources Information Center
Lee, M. P.; Soper, J. B.
1987-01-01
Maintains that teaching methods of statistical calculation in geography may be enhanced by using a computer spreadsheet. The spreadsheet format of rows and columns allows the data to be inspected and altered to demonstrate various statistical properties. The inclusion of graphics and database facilities further adds to the value of a spreadsheet.…
A rudimentary database for three-dimensional objects using structural representation
NASA Technical Reports Server (NTRS)
Sowers, James P.
1987-01-01
A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report our own experiences with a database manager. Bibliography database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other features include the ability to import references from different sources, such as MEDLINE. The word-processing components generate reference lists and bibliographies in a variety of styles, including a reference list built directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.
Selection of examples in case-based computer-aided decision systems
Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.
2013-01-01
Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system’s diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system’s performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2–4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system. PMID:18854606
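Random mutation hill climbing, the technique the study found to balance diagnostic performance and computational cost best, is simple to state in code. A minimal, hypothetical Python sketch (the performance function and iteration budget are assumptions): a binary inclusion mask over the candidate cases is mutated one bit at a time, keeping any mutation that does not hurt the figure of merit.

```python
import random

def random_mutation_hill_climbing(cases, performance, n_iter=5000, seed=0):
    """Select a subset of cases for the evidence database.

    performance -- maps an inclusion mask (list of 0/1) to a figure of merit,
                   e.g. the CAD system's ROC area on a tuning set
    """
    rng = random.Random(seed)
    mask = [rng.randint(0, 1) for _ in cases]   # random initial subset
    best = performance(mask)
    for _ in range(n_iter):
        i = rng.randrange(len(mask))
        mask[i] ^= 1                            # flip one randomly chosen bit
        score = performance(mask)
        if score >= best:
            best = score                        # keep non-worsening moves
        else:
            mask[i] ^= 1                        # revert the mutation
    return [c for c, keep in zip(cases, mask) if keep], best
```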
The Role Of Moral Awareness In Computer Security
NASA Astrophysics Data System (ADS)
Stawinski, Arthur
1984-08-01
Maintaining security of databases and other computer systems requires constraining the behavior of those persons who are able to access these systems so that they do not obtain, alter, or abuse the information contained in these systems. Three types of constraints are available: Physical constraints are obstructions designed to prevent (or at least make difficult) access to data by unauthorized persons; external constraints restrict behavior through threat of detection and punishment; internal constraints are self-imposed limitations on behavior which are derived from a person's moral standards. This paper argues that an effective computer security program will require attention to internal constraints as well as physical and external ones. Recent developments in moral philosophy and the psychology of moral development have given us new understanding of how individuals grow in moral awareness and how this growth can be encouraged. These insights are the foundation for some practical proposals for encouraging morally responsible behavior by computer professionals and others with access to confidential data. The aim of this paper is to encourage computer security professionals to discuss, refine and incorporate systems of internal constraints in developing methods of maintaining security.
Security and Privacy Qualities of Medical Devices: An Analysis of FDA Postmarket Surveillance
Kramer, Daniel B.; Baker, Matthew; Ransford, Benjamin; Molina-Markham, Andres; Stewart, Quinn; Fu, Kevin; Reynolds, Matthew R.
2012-01-01
Background Medical devices increasingly depend on computing functions such as wireless communication and Internet connectivity for software-based control of therapies and network-based transmission of patients’ stored medical information. These computing capabilities introduce security and privacy risks, yet little is known about the prevalence of such risks within the clinical setting. Methods We used three comprehensive, publicly available databases maintained by the Food and Drug Administration (FDA) to evaluate recalls and adverse events related to security and privacy risks of medical devices. Results Review of weekly enforcement reports identified 1,845 recalls; 605 (32.8%) of these included computers, 35 (1.9%) stored patient data, and 31 (1.7%) were capable of wireless communication. Searches of databases specific to recalls and adverse events identified only one event with a specific connection to security or privacy. Software-related recalls were relatively common, and most (81.8%) mentioned the possibility of upgrades, though only half of these provided specific instructions for the update mechanism. Conclusions Our review of recalls and adverse events from federal government databases reveals sharp inconsistencies with databases at individual providers with respect to security and privacy risks. Recalls related to software may increase security risks because of unprotected update and correction mechanisms. To detect signals of security and privacy problems that adversely affect public health, federal postmarket surveillance strategies should rethink how to effectively and efficiently collect data on security and privacy problems in devices that increasingly depend on computing systems susceptible to malware. PMID:22829874
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically, three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
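The property DBScrub preserves (the same original value always maps to the same pseudonym, so joins between tables survive scrubbing) can be sketched as follows. This is a hypothetical Python illustration, not DBScrub's Java component structure:

```python
import secrets

class Scrubber:
    """Replace identifying values with pseudonyms, reusing the same pseudonym
    for the same original value so cross-table relationships still hold."""

    def __init__(self):
        self._mapping = {}
        self._used = set()

    def pseudonym(self, value):
        if value not in self._mapping:
            candidate = f"ID{secrets.randbelow(10**9):09d}"
            while candidate in self._used:       # avoid rare collisions
                candidate = f"ID{secrets.randbelow(10**9):09d}"
            self._used.add(candidate)
            self._mapping[value] = candidate
        return self._mapping[value]

    def scrub_table(self, rows, identifying_columns):
        """rows: list of dicts; identifying columns are rewritten in place."""
        for row in rows:
            for col in identifying_columns:
                row[col] = self.pseudonym(row[col])
        return rows

# The same Scrubber instance must scrub every table:
#   s = Scrubber()
#   s.scrub_table(patients, ["patient_id"])
#   s.scrub_table(visits, ["patient_id"])   # identical IDs, identical pseudonyms
```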
National Geochronological Database
Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl
2003-01-01
The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic information system (GIS) applications. The data are provided in .mdb (Microsoft Access), .xls (Microsoft Excel), and .txt (tab-separated value) formats. We also provide a single non-relational file that contains a subset of the data for ease of use.
77 FR 24925 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...
Development of the Orion Crew Module Static Aerodynamic Database. Part 1; Hypersonic
NASA Technical Reports Server (NTRS)
Bibb, Karen L.; Walker, Eric L.; Robinson, Philip E.
2011-01-01
The Orion aerodynamic database provides force and moment coefficients given the velocity, attitude, configuration, etc. of the Crew Exploration Vehicle (CEV). The database is developed and maintained by the NASA CEV Aerosciences Project team from computational and experimental aerodynamic simulations. The database is used primarily by the Guidance, Navigation, and Control (GNC) team to design vehicle trajectories and assess flight performance. The initial hypersonic re-entry portion of the Crew Module (CM) database was developed in 2006. Updates incorporating additional data and improvements to the database formulation and uncertainty methodologies have been made since then. This paper details the process used to develop the CM database, including nominal values and uncertainties, for Mach numbers greater than 8 and angles of attack between 140deg and 180deg. The primary available data are more than 1000 viscous, reacting gas chemistry computational simulations using both the LAURA and DPLR codes, over a range of Mach numbers from 2 to 37 and a range of angles of attack from 147deg to 172deg. Uncertainties were based on grid convergence, laminar-turbulent solution variations, combined altitude and code-to-code variations, and expected heatshield asymmetry. A radial basis function response surface tool, NEAR-RS, was used to fit the coefficient data smoothly in a velocity-angle-of-attack space. The resulting database is presented and includes some data comparisons and a discussion of the predicted variation of trim angle of attack and lift-to-drag ratio. The database provides a variation in trim angle of attack on the order of +/-2deg, and a range in lift-to-drag ratio of +/-0.035 for typical vehicle flight conditions.
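A generic radial basis function response surface of the kind NEAR-RS provides can be sketched with a Gaussian kernel. The Python fragment below is an assumed, simplified stand-in (NEAR-RS itself is not described in this record): it interpolates scattered (Mach, angle-of-attack) samples of a coefficient; in practice the two inputs would first be normalized to comparable scales.

```python
import numpy as np

def rbf_fit(points, values, epsilon=1.0):
    """Fit a Gaussian RBF interpolant through scattered samples.

    points -- (n, 2) array of [Mach, angle-of-attack] locations (pre-normalized)
    values -- (n,) array of the force/moment coefficient at each sample
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.exp(-(epsilon * d) ** 2)        # pairwise kernel matrix
    weights = np.linalg.solve(phi, values)   # exact interpolation conditions
    return points, weights, epsilon

def rbf_eval(model, query):
    """Evaluate the fitted surface at one query point, e.g. Mach 10, 160 deg."""
    centers, weights, epsilon = model
    d = np.linalg.norm(query[None, :] - centers, axis=-1)
    return np.exp(-(epsilon * d) ** 2) @ weights
```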
Process description language: an experiment in robust programming for manufacturing systems
NASA Astrophysics Data System (ADS)
Spooner, Natalie R.; Creak, G. Alan
1998-10-01
Maintaining stable, robust, and consistent software is difficult in the face of the increasing rate of change of customers' preferences, materials, manufacturing techniques, computer equipment, and other characteristic features of manufacturing systems. It is argued that software is commonly difficult to keep up to date because many of the implications of these changing features for software details are obscure. A possible solution is to use a software generation system in which the transformation of system properties into system software is made explicit. The proposed generation system stores the system properties, such as machine properties, product properties, and information on manufacturing techniques, in databases. As a result, this information, on which system control is based, can also be made available to other programs. In particular, artificial intelligence programs such as fault diagnosis programs can benefit from using the same information as the control system, rather than a separate database which must be developed and maintained separately to ensure consistency. Experience in developing a simplified model of such a system is presented.
49 CFR 384.229 - Skills test examiner auditing and monitoring.
Code of Federal Regulations, 2014 CFR
2014-10-01
... overt monitoring must be performed at least once every year; (c) Establish and maintain a database to...; (d) Establish and maintain a database of all third party testers and examiners, which at a minimum... examiner; (e) Establish and maintain a database of all State CDL skills examiners, which at a minimum...
49 CFR 384.229 - Skills test examiner auditing and monitoring.
Code of Federal Regulations, 2013 CFR
2013-10-01
... overt monitoring must be performed at least once every year; (c) Establish and maintain a database to...; (d) Establish and maintain a database of all third party testers and examiners, which at a minimum... examiner; (e) Establish and maintain a database of all State CDL skills examiners, which at a minimum...
Caron, Alexandre; Clement, Guillaume; Heyman, Christophe; Aernout, Eva; Chazard, Emmanuel; Le Tertre, Alain
2015-01-01
Incompleteness of epidemiological databases is a major drawback when it comes to analyzing data. We conceived an epidemiological study to assess the association between newborn thyroid function and exposure to perchlorates found in the tap water of the mother's home. Only 9% of newborns' perchlorate exposures were known. The aim of our study was to design, test and evaluate an original method for imputing the perchlorate exposure of newborns based on their maternity of birth. In a first database, an exhaustive collection of newborns' thyroid function measurements from a systematic neonatal screening was collected. In this database the municipality of residence of the newborn's mother was only available for 2012. Between 2004 and 2011, the closest data available was the municipality of the maternity of birth. Exposure was assessed using a second database which contained the perchlorate levels for each municipality. We computed the catchment area of every maternity ward based on the French nationwide exhaustive database of inpatient stays. Municipality, and consequently perchlorate exposure, was imputed by a weighted draw in the catchment area. Missing values for the remaining covariates were imputed by chained equations. A linear mixture model was computed on each imputed dataset. We compared odds ratios (ORs) and 95% confidence intervals (95% CI) estimated on real versus imputed 2012 data. The same model was then carried out for the whole imputed database. The ORs estimated on 36,695 observations by our multiple imputation method are comparable to those from the real 2012 data. On the 394,979 observations of the whole database, the ORs remain stable but the 95% CIs tighten considerably. The model estimates computed on imputed data are similar to those calculated on real data. The main advantage of multiple imputation is to provide unbiased estimates of the ORs while maintaining their variances. Thus, our method will be used to increase the statistical power of future studies by including all 394,979 newborns.
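The weighted-draw imputation is compact to sketch. The Python fragment below is a hypothetical illustration (field names, the catchment-area structure, and the number of imputations m are assumptions): for each of m imputed datasets, every newborn's municipality is drawn from the birth maternity's catchment area with probabilities proportional to the ward's delivery shares, and the municipality's perchlorate level is attached; models are then fitted on each dataset and the odds ratios pooled.

```python
import random

def impute_municipality(maternity_id, catchment, rng):
    """Draw a municipality of residence from the maternity's catchment area,
    weighted by the share of the ward's deliveries coming from each town."""
    towns, weights = zip(*catchment[maternity_id].items())
    return rng.choices(towns, weights=weights, k=1)[0]

def impute_exposures(newborns, catchment, perchlorate_by_town, m=20, seed=1):
    """Build m imputed datasets for multiple-imputation analysis."""
    datasets = []
    for i in range(m):
        rng = random.Random(seed + i)
        datasets.append([
            {**nb, "perchlorate": perchlorate_by_town[
                impute_municipality(nb["maternity_id"], catchment, rng)]}
            for nb in newborns
        ])
    return datasets
```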
NASA Technical Reports Server (NTRS)
Barth, Tim; Zapata, Edgar; Benjamin, Perakath; Graul, Mike; Jones, Doug
2005-01-01
Portfolio Analysis Tool (PAT) is a Web-based, client/server computer program that helps managers of multiple projects funded by different customers to make decisions regarding investments in those projects. PAT facilitates analysis on a macroscopic level, without distraction by parochial concerns or tactical details of individual projects, so that managers' decisions can reflect the broad strategy of their organization. PAT is accessible via almost any Web-browser software. Experts in specific projects can contribute to a broad database that managers can use in analyzing the costs and benefits of all projects, but do not have access for modifying criteria for analyzing projects: access for modifying criteria is limited to managers according to levels of administrative privilege. PAT affords flexibility for modifying criteria for particular "focus areas" so as to enable standardization of criteria among similar projects, thereby making it possible to improve assessments without need to rewrite computer code or to rehire experts, and thereby further reducing the cost of maintaining and upgrading computer code. Information in the PAT database and results of PAT analyses can be incorporated into a variety of ready-made or customizable tabular or graphical displays.
Computational Thermochemistry of Jet Fuels and Rocket Propellants
NASA Technical Reports Server (NTRS)
Crawford, T. Daniel
2002-01-01
The design of new high-energy density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range of 300-6000 K. The chemical equilibrium with applications (CEA) program, written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed based on experimental results and data published in the scientific literature, such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for expansion of such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory, often referred to as the "gold standard" of quantum chemical methods, to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of electron interactions, known as electron correlation effects, is reduced to only the most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.
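For context on the form such database fits take, the widely used 7-coefficient NASA polynomials express Cp/R, H/RT and S/R as polynomials in temperature; the CEA database itself uses a closely related 9-coefficient variant, so this Python sketch is indicative of the approach rather than a reproduction of the Glenn format:

```python
import numpy as np

R = 8.314462618  # J/(mol K), universal gas constant

def nasa7_cp(T, a):
    """Cp/R from the 7-coefficient NASA polynomial a[0]..a[6]."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

def nasa7_h(T, a):
    """H/(R*T): term-by-term integral of Cp/R plus the enthalpy constant a[5]."""
    return a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4 + a[4]*T**4/5 + a[5]/T

def nasa7_s(T, a):
    """S/R: integral of Cp/(R*T) plus the entropy constant a[6]."""
    return a[0]*np.log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3 + a[4]*T**4/4 + a[6]
```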
49 CFR 384.229 - Skills test examiner auditing and monitoring.
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be performed at least once every year; (c) Establish and maintain a database to track pass/fail... maintain a database of all third party testers and examiners, which at a minimum tracks the dates and... and maintain a database of all State CDL skills examiners, which at a minimum tracks the dates and...
49 CFR 384.229 - Skills test examiner auditing and monitoring.
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be performed at least once every year; (c) Establish and maintain a database to track pass/fail... maintain a database of all third party testers and examiners, which at a minimum tracks the dates and... and maintain a database of all State CDL skills examiners, which at a minimum tracks the dates and...
Database for Safety-Oriented Tracking of Chemicals
NASA Technical Reports Server (NTRS)
Stump, Jacob; Carr, Sandra; Plumlee, Debrah; Slater, Andy; Samson, Thomas M.; Holowaty, Toby L.; Skeete, Darren; Haenz, Mary Alice; Hershman, Scot; Raviprakash, Pushpa
2010-01-01
SafetyChem is a computer program that maintains a relational database for tracking chemicals and associated hazards at Johnson Space Center (JSC) by use of a Web-based graphical user interface. The SafetyChem database is accessible to authorized users via a JSC intranet. All new chemicals pass through a safety office, where information on hazards, required personal protective equipment (PPE), fire-protection warnings, and target organ effects (TOEs) is extracted from material safety data sheets (MSDSs) and recorded in the database. The database facilitates real-time management of inventory with attention to such issues as stability, shelf life, reduction of waste through transfer of unused chemicals to laboratories that need them, quantification of chemical wastes, and identification of chemicals for which disposal is required. Upon searching the database for a chemical, the user receives information on physical properties of the chemical, hazard warnings, required PPE, a link to the MSDS, and references to the applicable International Standards Organization (ISO) 9000 standard work instructions and the applicable job hazard analysis. Also, to reduce the labor hours needed to comply with reporting requirements of the Occupational Safety and Health Administration, the data can be directly exported into the JSC hazardous- materials database.
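A toy version of the relational layout and lookup described can be sketched with Python's built-in sqlite3 module; the table and column names here are hypothetical, not SafetyChem's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chemical (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    msds_url TEXT,              -- link to the material safety data sheet
    shelf_life_months INTEGER
);
CREATE TABLE hazard (
    chemical_id INTEGER REFERENCES chemical(id),
    warning TEXT,               -- e.g. fire-protection warning
    required_ppe TEXT,
    target_organ TEXT
);
""")

def lookup(name):
    """Return a chemical's properties together with all recorded hazards."""
    return conn.execute("""
        SELECT c.name, c.msds_url, h.warning, h.required_ppe, h.target_organ
        FROM chemical c LEFT JOIN hazard h ON h.chemical_id = c.id
        WHERE c.name = ?""", (name,)).fetchall()
```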
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
The Biomolecular Interaction Network Database and related tools 2005 update
Alfarano, C.; Andrade, C. E.; Anthony, K.; Bahroos, N.; Bajec, M.; Bantoft, K.; Betel, D.; Bobechko, B.; Boutilier, K.; Burgess, E.; Buzadzija, K.; Cavero, R.; D'Abreo, C.; Donaldson, I.; Dorairajoo, D.; Dumontier, M. J.; Dumontier, M. R.; Earles, V.; Farrall, R.; Feldman, H.; Garderman, E.; Gong, Y.; Gonzaga, R.; Grytsan, V.; Gryz, E.; Gu, V.; Haldorsen, E.; Halupa, A.; Haw, R.; Hrvojic, A.; Hurrell, L.; Isserlin, R.; Jack, F.; Juma, F.; Khan, A.; Kon, T.; Konopinsky, S.; Le, V.; Lee, E.; Ling, S.; Magidin, M.; Moniakis, J.; Montojo, J.; Moore, S.; Muskat, B.; Ng, I.; Paraiso, J. P.; Parker, B.; Pintilie, G.; Pirone, R.; Salama, J. J.; Sgro, S.; Shan, T.; Shu, Y.; Siew, J.; Skinner, D.; Snyder, K.; Stasiuk, R.; Strumpf, D.; Tuekam, B.; Tao, S.; Wang, Z.; White, M.; Willis, R.; Wolting, C.; Wong, S.; Wrong, A.; Xin, C.; Yao, R.; Yates, B.; Zhang, S.; Zheng, K.; Pawson, T.; Ouellette, B. F. F.; Hogue, C. W. V.
2005-01-01
The Biomolecular Interaction Network Database (BIND) (http://bind.ca) archives biomolecular interaction, reaction, complex and pathway information. Our aim is to curate the details about molecular interactions that arise from published experimental research and to provide this information, as well as tools to enable data analysis, freely to researchers worldwide. BIND data are curated into a comprehensive machine-readable archive of computable information and provides users with methods to discover interactions and molecular mechanisms. BIND has worked to develop new methods for visualization that amplify the underlying annotation of genes and proteins to facilitate the study of molecular interaction networks. BIND has maintained an open database policy since its inception in 1999. Data growth has proceeded at a tremendous rate, approaching over 100 000 records. New services provided include a new BIND Query and Submission interface, a Standard Object Access Protocol service and the Small Molecule Interaction Database (http://smid.blueprint.org) that allows users to determine probable small molecule binding sites of new sequences and examine conserved binding residues. PMID:15608229
A Comprehensive Opacities/Atomic Database for the Analysis of Astrophysical Spectra and Modeling
NASA Technical Reports Server (NTRS)
Pradhan, Anil K. (Principal Investigator)
1997-01-01
The main goals of this ADP award have been accomplished. The electronic database TOPBASE, consisting of the large volume of atomic data from the Opacity Project, has been installed and is operative at a NASA site at the Laboratory for High Energy Astrophysics Science Research Center (HEASRC) at the Goddard Space Flight Center. The database will be continually maintained and updated by the PI and collaborators. TOPBASE is publicly accessible from IP: topbase.gsfc.nasa.gov. During the last six months (since the previous progress report), considerable work has been carried out: (1) new data for the low ionization stages of iron (Fe I - V, beginning with Fe II) were put in; (2) high-energy photoionization cross sections computed by Dr. Hong Lin Zhang (consultant on the Project) were 'merged' with the current Opacity Project data and input into TOPbase; and (3) plans were laid out for a further extension of TOPbase to include TIPbase, the database for collisional data to complement the radiative data in TOPbase.
Site partitioning for distributed redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1992-01-01
Distributed redundant disk arrays can be used in a distributed computing system or database system to provide recovery in the presence of temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-complete and we propose two heuristic algorithms for finding approximate solutions.
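The abstract names two heuristics but does not describe them; the following is a generic greedy sketch of the underlying idea (grouping sites so that intra-array communication cost stays low), not the authors' algorithms.

```python
# Greedy sketch: partition sites into fixed-size redundant arrays while
# keeping pairwise communication costs inside each array low.
# cost[i][j] is a symmetric site-to-site communication cost matrix.

def greedy_partition(cost, array_size):
    n = len(cost)
    unassigned = set(range(n))
    arrays = []
    while unassigned:
        seed = min(unassigned)  # start a new array from any remaining site
        group = [seed]
        unassigned.remove(seed)
        while len(group) < array_size and unassigned:
            # pick the site cheapest to talk to the current group members
            best = min(unassigned, key=lambda s: sum(cost[s][g] for g in group))
            group.append(best)
            unassigned.remove(best)
        arrays.append(group)
    return arrays

cost = [[0, 1, 4, 4], [1, 0, 4, 4], [4, 4, 0, 1], [4, 4, 1, 0]]
print(greedy_partition(cost, 2))  # -> [[0, 1], [2, 3]]
```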
Current experiments in elementary particle physics. Revision
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galic, H.; Armstrong, F.E.; von Przewoski, B.
1994-08-01
This report contains summaries of 568 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1988 are excluded. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, INS (Tokyo), ITEP (Moscow), IUCF (Bloomington), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research presents a protocol to assess the computational complexity of querying relational and non-relational (NoSQL, not only Structured Query Language) standardized electronic health record (EHR) medical information database systems. It uses a set of three doubling-sized databases, i.e., databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL with object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed linear behavior in the NoSQL cases. Within the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate for maintaining standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results for improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of the doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or editing and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
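As a rough illustration of the benchmarking step, here is a sketch of measuring average query response time against a MongoDB collection with pymongo; the database, collection, and query shown are assumptions, not the study's actual test set.

```python
import time
from pymongo import MongoClient  # pip install pymongo

# Hypothetical setup: a MongoDB collection of standardized EHR extracts.
client = MongoClient("localhost", 27017)
ehr = client["ehr_db"]["extracts"]  # database/collection names are assumptions

def mean_response_time(query, repeats=10):
    """Average wall-clock time to run one query, exhausting the cursor."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        list(ehr.find(query))   # force full retrieval, as a benchmark would
        total += time.perf_counter() - start
    return total / repeats

print(mean_response_time({"patient.age": {"$gt": 65}}))
```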
An SQL query generator for CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Chirica, Laurian
1990-01-01
As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical or tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. The queries are entered in a CLIPS-like syntax and are passed to the query generator, which constructs an SQL query and submits it to the ORACLE RDBMS for execution. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
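The paper's actual query syntax is not reproduced in the abstract; the following is an illustrative sketch of the general idea only, turning a small fact-style query specification into a SELECT statement.

```python
# Illustrative sketch: translate a tiny CLIPS-fact-like query spec
# into an SQL SELECT string. The real generator's syntax differs.

def make_select(table, columns, conditions):
    """conditions: list of (column, operator, value) triples."""
    where = " AND ".join(
        f"{col} {op} '{val}'" if isinstance(val, str) else f"{col} {op} {val}"
        for col, op, val in conditions
    )
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    return sql + (f" WHERE {where}" if where else "")

# e.g. a fact like (query (table rooms) (select area) (where floor = 2))
print(make_select("rooms", ["area"], [("floor", "=", 2)]))
# -> SELECT area FROM rooms WHERE floor = 2
```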
Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A.
2013-01-01
Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED are maintained at http://vislab.github.com/MobbedMatlab/ PMID:24124417
Development of a statewide trauma registry using multiple linked sources of data.
Clark, D. E.
1993-01-01
In order to develop a cost-effective method of injury surveillance and trauma system evaluation in a rural state, computer programs were written linking records from two major hospital trauma registries, a statewide trauma tracking study, hospital discharge abstracts, death certificates, and ambulance run reports. A general-purpose database management system, programming language, and operating system were used. Data from 1991 appeared to be successfully linked using only indirect identifying information. Familiarity with local geography and the idiosyncrasies of each data source was helpful in programming for effective matching of records. For each individual case identified in this way, data from all available sources were then merged and imported into a standard database format. This inexpensive, population-based approach, maintaining flexibility for end-users with some database training, may be adaptable to other regions. There is a need for further improvement and simplification of the record-linkage process for this and similar purposes. PMID:8130556
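A minimal sketch of indirect record linkage of the kind described: block candidate pairs on a few indirect identifiers, then apply an extra agreement check. The field names are illustrative, not the study's actual variables.

```python
# Sketch of indirect record linkage: block on indirect identifiers,
# then confirm candidate pairs. Field names are hypothetical.

def link_key(rec):
    # crude blocking key built from indirect identifiers
    return (rec["birth_date"], rec["sex"], rec["injury_date"])

def link(registry_a, registry_b):
    index = {}
    for rec in registry_b:
        index.setdefault(link_key(rec), []).append(rec)
    matches = []
    for rec in registry_a:
        for cand in index.get(link_key(rec), []):
            # an extra agreement check (e.g., town) reduces false links
            if rec.get("town") == cand.get("town"):
                matches.append((rec, cand))
    return matches
```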
Dalpé, Gratien; Joly, Yann
2014-09-01
Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. © 2014 Wiley Periodicals, Inc.
Kashefpur, Masoud; Kafieh, Rahele; Jorjandi, Sahar; Golmohammadi, Hadis; Khodabande, Zahra; Abbasi, Mohammadreza; Teifuri, Nilufar; Fakharzadeh, Ali Akbar; Kashefpoor, Maryam; Rabbani, Hossein
2017-01-01
An online depository was introduced to share clinical ground truth with the public and provide open access for researchers to evaluate their computer-aided algorithms. PHP was used for web programming and MySQL for database managing. The website was entitled "biosigdata.com." It was a fast, secure, and easy-to-use online database for medical signals and images. Freely registered users could download the datasets and could also share their own supplementary materials while maintaining their privacies (citation and fee). Commenting was also available for all datasets, and automatic sitemap and semi-automatic SEO indexing have been set for the site. A comprehensive list of available websites for medical datasets is also presented as a Supplementary (http://journalonweb.com/tempaccess/4800.584.JMSS_55_16I3253.pdf).
PRAIRIEMAP: A GIS database for prairie grassland management in western North America
,
2003-01-01
The USGS Forest and Rangeland Ecosystem Science Center, Snake River Field Station (SRFS) maintains a database of spatial information, called PRAIRIEMAP, which is needed to address the management of prairie grasslands in western North America. We identify and collect spatial data for the region encompassing the historical extent of prairie grasslands (Figure 1). State and federal agencies, the primary entities responsible for management of prairie grasslands, need this information to develop proactive management strategies to prevent prairie-grassland wildlife species from being listed as Endangered Species, or to develop appropriate responses if listing does occur. Spatial data are an important component in documenting current habitat and other environmental conditions, which can be used to identify areas that have undergone significant changes in land cover and to identify underlying causes. Spatial data will also be a critical component guiding the decision processes for restoration of habitat in the Great Plains. As such, the PRAIRIEMAP database will facilitate analyses of large-scale and range-wide factors that may be causing declines in grassland habitat and populations of species that depend on it for their survival. Therefore, development of a reliable spatial database carries multiple benefits for land and wildlife management. The project consists of 3 phases: (1) identify relevant spatial data, (2) assemble, document, and archive spatial data on a computer server, and (3) develop and maintain the web site (http://prairiemap.wr.usgs.gov) for query and transfer of GIS data to managers and researchers.
Lewinski, Allison A; Fisher, Edwin B
2016-06-01
Interventions via the internet provide support to individuals managing chronic illness. The purpose of this integrative review was to determine how the features of a computer-mediated environment influence social interactions among individuals with type 2 diabetes. A combination of MeSH and keyword terms, based on the cognates of three broad groupings: social interaction, computer-mediated environments, and chronic illness, was used to search the PubMed, PsychInfo, Sociology Research Database, and Cumulative Index to Nursing and Allied Health Literature databases. Eleven articles met the inclusion criteria. Computer-mediated environments enhance an individual's ability to interact with peers while increasing the convenience of obtaining personalized support. A matrix, focused on social interaction among peers, identified themes across all articles, and five characteristics emerged: (1) the presence of synchronous and asynchronous communication, (2) the ability to connect with similar peers, (3) the presence or absence of a moderator, (4) personalization of feedback regarding individual progress and self-management, and (5) the ability of individuals to maintain choice during participation. Individuals interact with peers to obtain relevant, situation-specific information and knowledge about managing their own care. Computer-mediated environments facilitate the ability of individuals to exchange this information despite temporal or geographical barriers that may be present, thus improving T2D self-management. © The Author(s) 2015.
SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access.
Amigo, Jorge; Salas, Antonio; Phillips, Christopher; Carracedo, Angel
2008-10-10
In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10^9 genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. In addition, full numerical description of the data is output in statistical results panels that include common population genetics metrics such as heterozygosity, Fst and In.
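The population-genetics indices mentioned have standard textbook definitions; a minimal sketch for a biallelic SNP, using the usual expected heterozygosity and Wright's Fst:

```python
# Expected heterozygosity and Wright's Fst for a biallelic SNP,
# from per-population allele frequencies (standard definitions).

def heterozygosity(p):
    """Expected heterozygosity for allele frequency p: 2p(1-p)."""
    return 2 * p * (1 - p)

def fst(freqs, sizes):
    """Fst = (Ht - Hs) / Ht, populations weighted by sample size."""
    total = sum(sizes)
    p_bar = sum(p * n for p, n in zip(freqs, sizes)) / total
    ht = heterozygosity(p_bar)                       # total heterozygosity
    hs = sum(heterozygosity(p) * n for p, n in zip(freqs, sizes)) / total
    return (ht - hs) / ht if ht > 0 else 0.0

print(fst([0.10, 0.45], [120, 80]))  # -> ~0.16
```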
Conditions Database for the Belle II Experiment
NASA Astrophysics Data System (ADS)
Wood, L.; Elsethagen, T.; Schram, M.; Stephan, E.
2017-10-01
The Belle II experiment at KEK is preparing for first collisions in 2017. Processing the large amounts of data that will be produced will require conditions data to be readily available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II conditions database was designed with a straightforward goal: make it as easily maintainable as possible. To this end, HEP-specific software tools were avoided as much as possible and industry standard tools used instead. HTTP REST services were selected as the application interface, which provide a high-level interface to users through the use of standard libraries such as curl. The application interface itself is written in Java and runs in an embedded Payara-Micro Java EE application server. Scalability at the application interface is provided by use of Hazelcast, an open source In-Memory Data Grid (IMDG) providing distributed in-memory computing and supporting the creation and clustering of new application interface instances as demand increases. The IMDG provides fast and efficient access to conditions data via in-memory caching.
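Since the service is described as plain HTTP REST usable through standard libraries such as curl, a client can be sketched in a few lines; the endpoint layout and payload shape below are hypothetical, not the actual Belle II URL scheme.

```python
import requests  # pip install requests

# Hypothetical endpoint layout -- the real Belle II URL scheme may differ.
BASE = "https://conditions.example.org/rest"

def get_payloads(global_tag, run):
    """Fetch conditions payload metadata for one run under a global tag."""
    url = f"{BASE}/globalTags/{global_tag}/runs/{run}/payloads"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Roughly equivalent to:
# curl https://conditions.example.org/rest/globalTags/main/runs/42/payloads
print(get_payloads("main", 42))
```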
Current Experiments in Particle Physics. 1996 Edition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galic, Hrvoje
2003-06-27
This report contains summaries of current and recent experiments in Particle Physics. Included are experiments at BEPC (Beijing), BNL, CEBAF, CERN, CESR, DESY, FNAL, Frascati, ITEP (Moscow), JINR (Dubna), KEK, LAMPF, Novosibirsk, PNPI (St. Petersburg), PSI, Saclay, Serpukhov, SLAC, and TRIUMF, and also several proton decay and solar neutrino experiments. Excluded are experiments that finished taking data before 1991. Instructions are given for the World Wide Web (WWW) searching of the computer database (maintained under the SLAC-SPIRES system) that contains the summaries.
Current experiments in elementary particle physics. Revised
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galic, H.; Wohl, C.G.; Armstrong, B.
This report contains summaries of 584 current and recent experiments in elementary particle physics. Experiments that finished taking data before 1986 are excluded. Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, KEK, LAMPF, Novosibirsk, Paul Scherrer Institut (PSI), Saclay, Serpukhov, SLAC, SSCL, and TRIUMF, and also several underground and underwater experiments. Instructions are given for remote searching of the computer database (maintained under the SLAC/SPIRES system) that contains the summaries.
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
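As an illustration of launching such a virtual cluster programmatically, here is a hedged sketch using boto3; the AMI ID and instance type are placeholders, not the project's actual preconfigured machine images (those are listed on the project website cited above).

```python
import boto3  # pip install boto3; AWS credentials must be configured

# Launch a small cluster of worker nodes from a preconfigured machine image.
# The AMI ID and instance type below are placeholders.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical proteomics AMI
    InstanceType="c5.xlarge",
    MinCount=4,                        # four search workers
    MaxCount=4,
)
for inst in instances:
    print(inst.id, inst.state["Name"])
```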
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogenous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
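A toy sketch of the core Ultra-Structure idea follows: domain rules live as rows in a table and are interpreted by one small, generic procedure, so users change system behavior by editing data rather than code. The table layout here is invented for illustration, not the paper's schema.

```python
import sqlite3

# Toy illustration: rules stored as rows, interpreted by generic code.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (subject TEXT, relation TEXT, object TEXT)")
db.executemany("INSERT INTO rules VALUES (?,?,?)", [
    ("peptide",  "maps_to", "genomic_locus"),
    ("spectrum", "matches", "peptide"),
])

def derive(subject, relation):
    """Generic interpreter: look the relationship up in data, not code."""
    rows = db.execute(
        "SELECT object FROM rules WHERE subject=? AND relation=?",
        (subject, relation))
    return [r[0] for r in rows]

print(derive("spectrum", "matches"))  # -> ['peptide']
```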
48 CFR 352.227-14 - Rights in Data-Exceptional Circumstances.
Code of Federal Regulations, 2014 CFR
2014-10-01
....] Computer database or database means a collection of recorded information in a form capable of, and for the... databases or computer software documentation. Computer software documentation means owner's manuals, user's... nature (including computer databases and computer software documentation). This term does not include...
Design and implementation of an audit trail in compliance with US regulations.
Jiang, Keyuan; Cao, Xiang
2011-10-01
Audit trails have been used widely to ensure the quality of study data and have been implemented in computerized clinical trials data systems. Increasingly, there is a need to audit access to study participant identifiable information to provide assurance that study participant privacy is protected and confidentiality is maintained. In the United States, several federal regulations specify how the audit trail function should be implemented. We describe the development and implementation of a comprehensive audit trail system that meets the regulatory requirements of assuring data quality and integrity and protecting participant privacy, and that is also easy to implement and maintain. The audit trail system was designed and developed after we examined regulatory requirements, data access methods, prevailing application architecture, and good security practices. Our comprehensive audit trail system was developed and implemented at the database level using a commercially available database management software product. It captures both data access and data changes with the correct user identifier. Documentation of access is initiated automatically in response to either data retrieval or data change at the database level. Currently, our system has been implemented on only one commercial database management system. Although our audit trail algorithm does not allow for logging aggregate operations, aggregation does not reveal sensitive private participant information. Careful consideration must be given to the data items selected for monitoring, because selecting all data items with our system can dramatically increase the requirements for computer disk space. Evaluating the criticality and sensitivity of the individual data items selected can control the storage requirements for clinical trial audit trail records. Our audit trail system is capable of logging data access and data change operations to satisfy regulatory requirements. Our approach is applicable to virtually any data that can be stored in a relational database.
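A minimal sketch of the same database-level pattern with SQLite triggers is shown below. Note that, unlike the commercial system described above, SQLite triggers cannot fire on SELECT, so read-access auditing would need another mechanism; this sketch covers data changes only.

```python
import sqlite3

# Minimal sketch of database-level change auditing with triggers (SQLite).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE subject (id INTEGER PRIMARY KEY, weight_kg REAL, modified_by TEXT);
CREATE TABLE audit_log (
    ts      TEXT DEFAULT CURRENT_TIMESTAMP,
    user_id TEXT, action TEXT, row_id INTEGER, old_val REAL, new_val REAL);
CREATE TRIGGER audit_update AFTER UPDATE ON subject
BEGIN
    INSERT INTO audit_log (user_id, action, row_id, old_val, new_val)
    VALUES (NEW.modified_by, 'UPDATE', OLD.id, OLD.weight_kg, NEW.weight_kg);
END;
""")
db.execute("INSERT INTO subject VALUES (1, 71.2, 'nurse01')")
db.execute("UPDATE subject SET weight_kg = 70.4, modified_by = 'dr02' WHERE id = 1")
print(db.execute("SELECT * FROM audit_log").fetchall())
```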
NASA Astrophysics Data System (ADS)
Sugriwan, I.; Soesanto, O.
2017-05-01
The research focused on the development of a data acquisition system to monitor the content of methane, relative humidity, and temperature on peatlands in South Kalimantan, Indonesia. Methane is one of the greenhouse gases emitted from peatlands, while humidity and temperature are important parameters of the peatland microclimate. These three parameters are monitored digitally, in real time, continuously, and automatically by a data acquisition system interfaced to a personal computer. The hardware of the data acquisition system consists of a power supply unit, a TGS2611 methane gas sensor, an SHT11 humidity and temperature sensor, a voltage follower, an ATMega8535 microcontroller, a 16 × 2 character LCD, and a personal computer. The ATMega8535 module manages all parts of the measuring instrument. The firmware that reads the sensor data, evaluates the characteristic equations, and sends data to the LCD was written in Basic Compiler; the interface between the measuring instrument and the personal computer was implemented in Delphi 7. Acquired data are shown on the LCD and the PC monitor and stored in a database built with XAMPP. Methane, humidity, and temperature released from the peatland are captured in a closed measurement chamber of 60 × 50 × 40 cm. The TGS2611 methane sensor and the SHT11 humidity and temperature sensor were calibrated to determine the transfer functions used for data communication between the sensors and the microcontroller. The values of RS and RL of the TGS2611 sensor, calculated per the data sheet, were 1360 ohm and 905 ohm, respectively. The characteristic equation of the TGS2611 is VRL = 0.561 ln n - 2.2641 volts, where n is the methane concentration and VRL is the voltage across the load resistor. The microcontroller conditioned the voltage signal and sent the results to the LCD and a personal computer (laptop) for display. Acquired data are saved in Excel and database formats.
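Inverting the reported characteristic equation gives the concentration estimate the firmware would compute from a measured voltage; n is in whatever units the calibration used, which the abstract does not state.

```python
import math

# Invert the reported TGS2611 characteristic equation,
# V_RL = 0.561*ln(n) - 2.2641 (volts), to estimate methane
# concentration n from the measured sensor voltage.

def methane_concentration(v_rl):
    return math.exp((v_rl + 2.2641) / 0.561)

for v in (0.5, 1.0, 1.5):
    print(f"V_RL = {v:.2f} V  ->  n = {methane_concentration(v):.0f}")
```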
8 CFR 338.11 - Execution and issuance of certificate of naturalization by clerk of court.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the petitioner. If the court maintains naturalization records on an electronic database then only the... and maintained in the court's electronic database. (b) The certificate shall show under “former..., or if using automation equipment, ensure it is part of the electronic database record. The clerk of...
8 CFR 338.11 - Execution and issuance of certificate of naturalization by clerk of court.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the petitioner. If the court maintains naturalization records on an electronic database then only the... and maintained in the court's electronic database. (b) The certificate shall show under “former..., or if using automation equipment, ensure it is part of the electronic database record. The clerk of...
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masashi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
The Cambridge Structural Database
Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.
2016-01-01
The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719
NASA Technical Reports Server (NTRS)
Johnson, Paul W.
2008-01-01
ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program's/project's risk management processes. This presentation will briefly cover standard risk management procedures, then thoroughly cover NASA's risk management tool called ePORT. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. By providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.
The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis
Rampp, Markus; Soddemann, Thomas; Lederer, Hermann
2006-01-01
We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society available at (click ‘Start Toolkit’). PMID:16844980
2012-01-01
Microorganisms are ubiquitous on earth and have diverse metabolic transformative capabilities important for environmental biodegradation of chemicals that helps maintain ecosystem and human health. Microbial biodegradative metabolism is the main focus of the University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD). UM-BBD data has also been used to develop a computational metabolic pathway prediction system that can be applied to chemicals for which biodegradation data is currently lacking. The UM-Pathway Prediction System (UM-PPS) relies on metabolic rules that are based on organic functional groups and predicts plausible biodegradative metabolism. The predictions are useful to environmental chemists that look for metabolic intermediates, for regulators looking for potential toxic products, for microbiologists seeking to understand microbial biodegradation, and others with a wide-range of interests. PMID:22587916
One for All: Maintaining a Single Schedule Database for Large Development Projects
NASA Technical Reports Server (NTRS)
Hilscher, R.; Howerton, G.
1999-01-01
Efficiently maintaining and controlling a single schedule database in an Integrated Product Team environment is a significant challenge. It is accomplished effectively with the right combination of tools, skills, strategy, creativity, and teamwork. We will share our lessons learned maintaining a 20,000-plus-task network on a 36-month project.
Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon
2012-01-01
Pathway data are important for understanding the relationships between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases (e.g., KEGG, WikiPathways, and BioCyc) are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms (S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus) are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.
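The average-node-degree statistic mentioned above is straightforward to compute from tab-delimited records; a sketch follows, assuming a hypothetical file layout of pathway<TAB>geneA<TAB>geneB per line (the actual IntPath column layout may differ).

```python
from collections import defaultdict

# Compute average node degree per pathway from tab-delimited records of
# the assumed form: pathway <TAB> geneA <TAB> geneB.

def average_degrees(path):
    neighbors = defaultdict(lambda: defaultdict(set))
    with open(path) as fh:
        for line in fh:
            pathway, a, b = line.rstrip("\n").split("\t")
            neighbors[pathway][a].add(b)
            neighbors[pathway][b].add(a)
    return {pw: sum(len(s) for s in genes.values()) / len(genes)
            for pw, genes in neighbors.items()}

print(average_degrees("pathway_gene_pairs.txt"))
```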
A signal strength priority based position estimation for mobile platforms
NASA Astrophysics Data System (ADS)
Kalgikar, Bhargav; Akopian, David; Chen, Philip
2010-01-01
Global Positioning System (GPS) products help to navigate while driving, hiking, boating, and flying. GPS uses a combination of orbiting satellites to determine position coordinates. This works well in most outdoor areas, but the satellite signals are not strong enough to penetrate inside most indoor environments. As a result, a new strain of indoor positioning technologies that make use of 802.11 wireless LANs (WLAN) is beginning to appear on the market. In WLAN positioning, the system either monitors propagation delays between wireless access points and wireless device users to apply trilateration techniques, or it maintains a database of location-specific signal fingerprints, which is used to identify the most likely match of incoming signal data with those previously surveyed and saved in the database. In this paper we investigate the issue of deploying WLAN positioning software on mobile platforms with typically limited computational resources. We suggest a novel received-signal-strength rank-order based location estimation system to reduce computational loads while maintaining robust performance. The proposed system's performance is compared to conventional approaches.
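The abstract does not specify the exact rank-order metric, so the sketch below uses a simple Spearman-footrule distance between access-point rank orders as one plausible instance of the idea; the paper's actual metric may differ.

```python
# Rank-order RSS fingerprint matching (sketch). Each fingerprint maps
# access-point IDs to RSS values; only the *ranking* of APs is compared.

def ranks(rss):
    """AP id -> rank; the strongest signal gets rank 0."""
    ordered = sorted(rss, key=rss.get, reverse=True)
    return {ap: i for i, ap in enumerate(ordered)}

def footrule(rss_a, rss_b):
    ra, rb = ranks(rss_a), ranks(rss_b)
    common = set(ra) & set(rb)
    if not common:
        return float("inf")
    return sum(abs(ra[ap] - rb[ap]) for ap in common) / len(common)

def locate(measurement, survey):
    """survey: {location: fingerprint}; returns best-matching location."""
    return min(survey, key=lambda loc: footrule(measurement, survey[loc]))

survey = {"room1": {"ap1": -40, "ap2": -70}, "room2": {"ap1": -75, "ap2": -45}}
print(locate({"ap1": -42, "ap2": -68}, survey))  # -> room1
```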
A multi-user real time inventorying system for radioactive materials: a networking approach.
Mehta, S; Bandyopadhyay, D; Hoory, S
1998-01-01
A computerized system for radioisotope management and real time inventory coordinated across a large organization is reported. It handles hundreds of individual users and their separate inventory records. Use of highly efficient computer network and database technologies makes it possible to accept, maintain, and furnish all records related to receipt, usage, and disposal of the radioactive materials for the users separately and collectively. The system's central processor is an HP-9000/800 G60 RISC server and users from across the organization use their personal computers to login to this server using the TCP/IP networking protocol, which makes distributed use of the system possible. Radioisotope decay is automatically calculated by the program, so that it can make the up-to-date radioisotope inventory data of an entire institution available immediately. The system is specifically designed to allow use by large numbers of users (about 300) and accommodates high volumes of data input and retrieval without compromising simplicity and accuracy. Overall, it is an example of a true multi-user, on-line, relational database information system that makes the functioning of a radiation safety department efficient.
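The automatic decay calculation mentioned is the standard exponential-decay correction; a minimal sketch:

```python
import math

# Radioactive decay correction as an inventory system would apply it:
# A(t) = A0 * exp(-ln(2) * t / t_half), with times in the same unit.

def current_activity(a0, elapsed_days, half_life_days):
    return a0 * math.exp(-math.log(2) * elapsed_days / half_life_days)

# e.g. 100 uCi of P-32 (half-life ~14.3 days) after 30 days:
print(round(current_activity(100.0, 30.0, 14.3), 1), "uCi")  # -> ~23.4
```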
Unified Planetary Coordinates System: A Searchable Database of Geodetic Information
NASA Technical Reports Server (NTRS)
Becker, K. J.; Gaddis, L. R.; Soderblom, L. A.; Kirk, R. L.; Archinal, B. A.; Johnson, J. R.; Anderson, J. A.; Bowman-Cisneros, E.; LaVoie, S.; McAuley, M.
2005-01-01
Over the past 40 years, an enormous quantity of orbital remote sensing data has been collected for Mars from many missions and instruments. Unfortunately these datasets currently exist in a wide range of disparate coordinate systems, making it extremely difficult for the scientific community to easily correlate, combine, and compare data from different Mars missions and instruments. As part of our work for the PDS Imaging Node and on behalf of the USGS Astrogeology Team, we are working to solve this problem and to provide the NASA scientific research community with easy access to Mars orbital data in a unified, consistent coordinate system along with a wide variety of other key geometric variables. The Unified Planetary Coordinates (UPC) system is comprised of two main elements: (1) a database containing Mars orbital remote sensing data computed using a uniform coordinate system, and (2) a process by which continual maintenance and updates to the contents of the database are performed.
Current experiments in elementary particle physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohl, C.G.; Armstrong, F.E.; Oyanagi, Y.; Dodder, D.C.
1987-03-01
This report contains summaries of 720 recent and current experiments in elementary particle physics (experiments that finished taking data before 1980 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Moscow Institute of Theoretical and Experimental Physics, Tokyo Institute of Nuclear Studies, KEK, LAMPF, Leningrad Nuclear Physics Institute, Saclay, Serpukhov, SIN, SLAC, and TRIUMF, and also experiments on proton decay. Instructions are given for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.
Identifying Crucial Parameter Correlations Maintaining Bursting Activity
Doloc-Mihu, Anca; Calabrese, Ronald L.
2014-01-01
Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (a leak current, Leak; a persistent K+ current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because when varied individually bursting activity was not maintained. PMID:24945358
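PCA on a conductance-parameter matrix reduces to an eigen-decomposition of its covariance; a minimal numpy sketch follows, with random placeholder data standing in for the authors' model database.

```python
import numpy as np

# PCA over an (instances x parameters) matrix of maximal conductances.
# Random data stands in for the HCO model database used in the study.
rng = np.random.default_rng(0)
X = rng.random((500, 3))          # columns: e.g. gbar_Leak, gbar_K2, gbar_P

Xc = X - X.mean(axis=0)           # center each parameter
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print("explained variance ratios:", explained)
print("first principal axis:", eigvecs[:, order[0]])
```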
Guidelines for establishing and maintaining construction quality databases.
DOT National Transportation Integrated Search
2006-11-01
The main objective of this study was to develop and present guidelines for State highway agencies (SHAs) in establishing and maintaining database systems geared towards construction quality issues for asphalt and concrete paving projects. To accompli...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2014 CFR
2014-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database or database means a collection of recorded information in a form capable of, and for the purpose... enable the computer program to be produced, created, or compiled. (2) Does not include computer databases... databases and computer software documentation). This term does not include computer software or financial...
Discrepancy Reporting Management System
NASA Technical Reports Server (NTRS)
Cooper, Tonja M.; Lin, James C.; Chatillon, Mark L.
2004-01-01
Discrepancy Reporting Management System (DRMS) is a computer program designed for use in the stations of NASA's Deep Space Network (DSN) to help establish the operational history of equipment items; acquire data on the quality of service provided to DSN customers; enable measurement of service performance; provide early insight into the need to improve processes, procedures, and interfaces; and enable the tracing of a data outage to a change in software or hardware. DRMS is a Web-based software system designed to include a distributed database and a replication feature to achieve location-specific autonomy while maintaining a consistently high quality of data. DRMS incorporates commercial Web and database software. DRMS collects, processes, replicates, communicates, and manages information on spacecraft data discrepancies, equipment resets, and physical equipment status, and maintains an internal station log. All discrepancy reports (DRs), master discrepancy reports (MDRs), and reset data are replicated to a master server at NASA's Jet Propulsion Laboratory; MDR data are replicated to all the DSN sites; and station logs are internal to each of the DSN sites and are not replicated. Data are validated according to several logical mathematical criteria. Queries can be performed on any combination of data.
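The replication rules stated in the abstract (DRs and reset data roll up to JPL, MDRs fan out to all sites, station logs stay local) can be summarized as a small routing table. The Python sketch below is a hypothetical illustration with invented table and site names, not the actual DRMS schema.

# Hypothetical sketch of the replication rules stated in the abstract;
# table and site names are illustrative, not the real DRMS design.
SITES = ["goldstone", "madrid", "canberra"]
MASTER = "jpl"

REPLICATION_RULES = {
    "discrepancy_report": {"from": SITES, "to": [MASTER]},   # DRs roll up to JPL
    "reset":              {"from": SITES, "to": [MASTER]},   # reset data roll up
    "master_dr":          {"from": [MASTER], "to": SITES},   # MDRs fan out to sites
    "station_log":        {"from": SITES, "to": []},         # logs stay local
}

def replication_targets(table: str, origin: str) -> list[str]:
    """Return where a record written at `origin` should be copied."""
    rule = REPLICATION_RULES[table]
    return rule["to"] if origin in rule["from"] else []

print(replication_targets("discrepancy_report", "goldstone"))  # ['jpl']
print(replication_targets("station_log", "madrid"))            # []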
Supplier's Status for Critical Solid Propellants, Explosive, and Pyrotechnic Ingredients
NASA Technical Reports Server (NTRS)
Sims, B. L.; Painter, C. R.; Nauflett, G. W.; Cramer, R. J.; Mulder, E. J.
2000-01-01
In the early 1970s, a program was initiated at the Naval Surface Warfare Center/Indian Head Division (NSWC/IHDIV) to address the well-known problems associated with availability and suppliers of critical ingredients. These critical ingredients are necessary for preparation of solid propellants and explosives manufactured by the Navy. The objective of the program was to identify primary and secondary (or back-up) vendor information for these critical ingredients, and to develop suitable alternative materials if an ingredient is unavailable. In 1992, NSWC/IHDIV funded the Chemical Propulsion Information Agency (CPIA) under a Technical Area Task (TAT) to expedite the task of creating a database listing critical ingredients used to manufacture Navy propellants and explosives based on known formulation quantities. Under this task, CPIA provided employees who were 100 percent dedicated to the task of obtaining critical ingredient supplier information, selecting the software, and designing the interface between the computer program and the database users. TAT objectives included creating the Explosive Ingredients Source Database (EISD) for Propellant, Explosive and Pyrotechnic (PEP) critical ingredients. The goal was to create a readily accessible database, to provide users a quick-view summary of critical ingredient supplier information, and to create a centralized archive that CPIA would update and distribute. EISD funding ended in 1996. At that time, the database entries included 53 formulations and 108 critical ingredients used to manufacture Navy propellants and explosives. CPIA turned the database tasking back over to NSWC/IHDIV to maintain and distribute at their discretion. Due to significant interest in propellant/explosives critical ingredient suppliers' status, the Propellant Development and Characterization Subcommittee (PDCS) approached the JANNAF Executive Committee (EC) for authorization to continue the critical ingredient database work. In 1999, the JANNAF EC approved the PDCS panel task. This paper is designed to emphasize the necessity of maintaining a JANNAF community-supported database that monitors PEP critical ingredient suppliers' status. The final product of this task is a user-friendly, searchable database that provides a quick-view summary of critical ingredient supplier information. This database must be designed to serve the needs of JANNAF and the propellant and energetic commercial manufacturing community as well. This paper provides a summary of the type of information to archive for each critical ingredient.
Integrating computer programs for engineering analysis and design
NASA Technical Reports Server (NTRS)
Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.
1983-01-01
A third-generation design for integrating computer programs for engineering analysis and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. The system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for this computer-aided engineering system. It serves as a repository for the design data communicated between analysis programs, as a dictionary that describes these design data, as a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely coupled design system. This method emphasizes interactive extension of analysis techniques and manipulation of design data. In addition, integrity mechanisms maintain database correctness for multidisciplinary design tasks performed by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
A novel computer-aided detection system for pulmonary nodule identification in CT images
NASA Astrophysics Data System (ADS)
Han, Hao; Li, Lihong; Wang, Huafeng; Zhang, Hao; Moore, William; Liang, Zhengrong
2014-03-01
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ aims to extract the lung from the chest volume, while the second-stage VQ is designed to extract initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans that have at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs by a factor of more than 18 while maintaining a sensitivity of 93.24%. Because INCs attached to the pleural wall are easily distinguished from those that are not, we investigated the feasibility of training separate SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results indicated that SVM classification over the entire set of INCs was preferable; the optimal operating point of our CADe system achieved a sensitivity of 89.4% at a specificity of 86.8%.
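As a hedged sketch of the final false-positive-reduction stage, the following Python fragment trains an RBF-kernel SVM on synthetic per-candidate feature vectors. The features, data, and hyperparameters are placeholders, not those used in the paper.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assume each initial nodule candidate (INC) has been reduced to a feature
# vector; the 8 features and labels below are synthetic stand-ins.
rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 8))            # e.g., shape/intensity features per INC
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))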
Bigdata Driven Cloud Security: A Survey
NASA Astrophysics Data System (ADS)
Raja, K.; Hanifa, Sabibullah Mohamed
2017-08-01
Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front end, which includes the users' computers and the software required to access the cloud network, and a back end, which consists of the various computers, servers, and database systems that create the cloud. The cloud ecosystem delivers services as SaaS (Software as a Service, in which end users utilize outsourced software), PaaS (Platform as a Service, in which a platform is provided), IaaS (Infrastructure as a Service, in which the physical environment is outsourced), and DaaS (Database as a Service, in which data can be housed within a cloud), and this architecture has become powerful and popular. Many challenges and issues remain in security, the most vital barrier for the cloud computing environment. The main barrier to the adoption of CC in health care is data security: when data are placed on and transmitted over public networks, cyber attacks in any form are to be anticipated. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC. BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies; in this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.
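Since the survey points to MapReduce as the canonical model for big data processing in the cloud, a minimal single-process illustration of its map, shuffle, and reduce phases may help. Real deployments (e.g., Hadoop) distribute these phases across a cluster; this sketch only shows the programming model.

from collections import defaultdict

# Minimal in-process illustration of the MapReduce model: word counting.
def map_phase(doc: str):
    for word in doc.split():
        yield word.lower(), 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["cloud computing security", "big data security in cloud"]
pairs = [kv for d in docs for kv in map_phase(d)]
print(reduce_phase(shuffle(pairs)))
# {'cloud': 2, 'computing': 1, 'security': 2, 'big': 1, 'data': 1, 'in': 1}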
Information management and analysis system for groundwater data in Thailand
NASA Astrophysics Data System (ADS)
Gill, D.; Luckananurung, P.
1992-01-01
The Ground Water Division of the Thai Department of Mineral Resources maintains a large archive of groundwater data with information on some 50,000 water wells. Each well file contains information on well location, well completion, borehole geology, water levels, water quality, and pumping tests. To enable efficient use of this information, a computer-based system for information management and analysis was created. The project was sponsored by the United Nations Development Program and the Thai Department of Mineral Resources. The system was designed to serve users who lack prior training in automated data processing. Access is through a friendly user/system dialogue. Tasks are segmented into a number of logical steps, each of which is managed by a separate screen. Selective retrieval is possible by four different methods of area definition and by compliance with user-specified constraints on any combination of database variables. The main types of outputs are: (1) files of retrieved data, screened according to users' specifications; (2) an assortment of pre-formatted reports; (3) computed geochemical parameters and various diagrams of water chemistry derived therefrom; (4) bivariate scatter diagrams and linear regression analysis; (5) posting of data and computed results on maps; and (6) hydraulic aquifer characteristics as computed from pumping tests. Data are entered directly from formatted screens. Most records can be copied directly from hand-written documents. The database-management program performs data integrity checks in real time, enabling corrections at the time of input. The system software can be grouped into: (1) database administration and maintenance, carried out by the SIR/DBMS software package; (2) a user communication interface for task definition and execution control, written in the operating system command language (VMS/DCL) and in FORTRAN 77; and (3) scientific data-processing programs, written in FORTRAN 77. The system was implemented on a DEC MicroVAX II computer.
Analysis of Salmon and Steelhead Supplementation, 1990 Final Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H.; Coley, Travis C.; Burge, Howard L.
Supplementation, the planting of salmon and steelhead into various locations in the Columbia River drainage, has occurred for over 100 years. All life stages, from eggs to adults, have been used by fishery managers in attempts to establish, rebuild, or maintain anadromous runs. This report summarizes and evaluates results of past and current supplementation of salmon and steelhead, and conclusions and recommendations are made concerning supplementation. Hatchery rearing conditions and stocking methods can affect post-release survival of hatchery fish. Stress was considered by many biologists to be a key factor in survival of stocked anadromous fish. Smolts were the most common life stage released, and size of smolts correlated positively with survival. Success of hatchery stockings of eggs and presmolts was found to be better when they were put into productive, underseeded habitats. Stocking time, method, species stocked, and environmental conditions of the receiving waters, including other fish species present, are factors to consider in supplementation programs. The unpublished supplementation literature was reviewed primarily by the authors of this report. Direct contact was made in person or by telephone, and data were compiled in a computer database. Areas covered included Oregon, Washington, Idaho, Alaska, California, British Columbia, and the New England states working with Atlantic salmon. Over 300 projects were reviewed and entered into a computer database. The database information is contained in Appendix A of this report. 6 refs., 9 figs., 21 tabs.
probeBase—an online resource for rRNA-targeted oligonucleotide probes and primers: new features 2016
Greuter, Daniel; Loy, Alexander; Horn, Matthias; Rattei, Thomas
2016-01-01
probeBase (http://www.probebase.net) is a manually maintained and curated database of rRNA-targeted oligonucleotide probes and primers. Contextual information and multiple options for evaluating in silico hybridization performance against the most recent rRNA sequence databases are provided for each oligonucleotide entry, which makes probeBase an important and frequently used resource for microbiology research and diagnostics. Here we present a major update of probeBase, which was last featured in the NAR Database Issue 2007. This update describes a complete remodeling of the database architecture and environment to accommodate computationally efficient access. Improved search functions, sequence match tools and data output now extend the opportunities for finding suitable hierarchical probe sets that target an organism or taxon at different taxonomic levels. To facilitate the identification of complementary probe sets for organisms represented by short rRNA sequence reads generated by amplicon sequencing or metagenomic analysis with next generation sequencing technologies such as Illumina and IonTorrent, we introduce a novel tool that recovers surrogate near full-length rRNA sequences for short query sequences and finds matching oligonucleotides in probeBase. PMID:26586809
The Development of a Korean Drug Dosing Database
Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon
2011-01-01
Objectives This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly with regard to medication dosing. Methods The advisory committee, comprising doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from BIT Computer Co. Ltd., analyzed approved KFDA drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The whole process was improved by adding needed input fields and eliminating unnecessary existing fields as the dosing information was entered, resulting in an improved field structure. Results Usage and dosing information for a total of 16,994 drugs sold in the Korean market in July 2009, excluding those meeting the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), was made into a database. Conclusions The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management mode. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729
NASA Technical Reports Server (NTRS)
Duncan, K. M.; Harm, D. L.; Crosier, W. G.; Worthington, J. W.
1993-01-01
A unique training device is being developed at the Johnson Space Center Neurosciences Laboratory to help reduce or eliminate Space Motion Sickness (SMS) and spatial orientation disturbances that occur during spaceflight. The Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME PAT) uses virtual reality technology to simulate some of the sensory rearrangements experienced by astronauts in microgravity. By exposing a crew member to this novel environment preflight, it is expected that he/she will become partially adapted and thereby suffer fewer symptoms inflight. The DOME PAT is a 3.7 m spherical dome, within which a computer-generated visual database with a 170 by 100 deg field of view is projected. The visual database currently in use depicts the interior of a Shuttle Spacelab. The trainee uses a six-degree-of-freedom, isometric force hand controller to navigate through the virtual environment. Alternatively, the trainee can be 'moved' about within the virtual environment by the instructor, or can look about within the environment by wearing a restraint that controls scene motion in response to head movements. The computer system comprises four personal computers that provide the real-time control and user interface, and two Silicon Graphics computers that generate the graphical images. The image generator computers use custom algorithms to compensate for spherical image distortion while maintaining a video update rate of 30 Hz. The DOME PAT is the first such system known to employ virtual reality technology to reduce the untoward effects of the sensory rearrangement associated with exposure to microgravity, and it does so in a very cost-effective manner.
Amat-ur-Rasool, Hafsa; Ahmed, Mehboob
2015-01-01
Alzheimer's disease (AD), a major cause of memory loss, is a progressive neurodegenerative disorder. The disease leads to irreversible loss of neurons that results in a reduced level of the acetylcholine neurotransmitter (ACh). The reduction of the ACh level impairs brain functioning. One aspect of AD therapy is to maintain the ACh level up to a safe limit by blocking acetylcholinesterase (AChE), the enzyme naturally responsible for its degradation. This research presents an in-silico screening and design of hAChE inhibitors as potential anti-Alzheimer drugs. Molecular docking results for database-retrieved (synthetic chemicals and dietary phytochemicals) and self-drawn ligands were compared with Food and Drug Administration (FDA)-approved drugs against AD as controls. Furthermore, computational ADME studies were performed on the hits to assess their safety. Human AChE was found to be the most appropriate target site, as compared to the commonly used Torpedo AChE. Among the tested dietary phytochemicals, berberastine, berberine, yohimbine, sanguinarine, elemol, and naringenin are noteworthy as potential anti-Alzheimer drugs. The synthetic leads were mostly dual-binding-site inhibitors with two binding subunits linked by a carbon chain, i.e., second-generation AD drugs. Fifteen new heterodimers were designed that were computationally more efficient inhibitors than previously reported compounds. Using computational methods, compounds present in online chemical databases can be screened to design more efficient and safer drugs against the cognitive symptoms of AD. PMID:26325402
2012-01-01
Background Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases (e.g., KEGG, WikiPathways, and BioCyc) are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomplete coverage across different databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms (S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus) are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation was involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through web services by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057
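As an illustration of how such normalized tab-delimited records lend themselves to automatic computation, the Python sketch below computes the average node degree per pathway from pathway-gene pair records. The three-column layout assumed here (pathway, gene_a, gene_b) is hypothetical, not the actual IntPath export format.

from collections import defaultdict

# Compute average node degree per pathway from tab-delimited gene-pair
# records; the column layout is an assumption for illustration.
def average_node_degree(path: str) -> dict[str, float]:
    degree = defaultdict(lambda: defaultdict(int))  # pathway -> gene -> degree
    with open(path) as fh:
        for line in fh:
            pathway, gene_a, gene_b = line.rstrip("\n").split("\t")
            degree[pathway][gene_a] += 1
            degree[pathway][gene_b] += 1
    return {p: sum(d.values()) / len(d) for p, d in degree.items()}

# e.g. average_node_degree("pathway_gene_pairs.txt")
# -> {"glycolysis": 3.2, "tca_cycle": 2.7, ...}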
Bera, Maitreyee
2017-10-16
The U.S. Geological Survey (USGS), in cooperation with the DuPage County Stormwater Management Department, maintains a database of hourly meteorological and hydrologic data for use in a near real-time streamflow simulation system. This system is used in the management and operation of reservoirs and other flood-control structures in the West Branch DuPage River watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorological data (air temperature, dewpoint temperature, wind speed, and solar radiation) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorological data using the computer program LXPET (Lamoreux Potential Evapotranspiration). The hydrologic data (water-surface elevation [stage] and discharge) are collected at U.S. Geological Survey streamflow-gaging stations in and around DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that is quality-assured and quality-controlled annually to ensure datasets are complete and accurate. This database is named WBDR13.WDM. It contains data from January 1, 2007, through September 30, 2013. Each precipitation dataset may have time periods of inaccurate data. This report describes the methods used to estimate the data for the periods of missing, erroneous, or snowfall-affected data and thereby improve the accuracy of these data. The other meteorological datasets are described in detail in Over and others (2010), and the hydrologic datasets in the database are fully described in the online USGS annual water data reports for Illinois (U.S. Geological Survey, 2016) and, therefore, are described in less detail than the precipitation datasets in this report.
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane. The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
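A minimal Python sketch of the multi-directional constraint idea described above: a single product constraint that solves for whichever variable is still unknown, so the same declarative relationship runs "backwards" as well as "forwards". The variable names are illustrative, not Rubber Airplane's.

# Minimal constraint-propagation sketch: one product constraint
# (area = span * chord, say) enforced in every direction.
class Cell:
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for c in self.constraints:
                c.propagate()

class Product:  # enforces x = y * z
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
        for cell in (x, y, z):
            cell.constraints.append(self)

    def propagate(self):
        x, y, z = self.x, self.y, self.z
        if y.value is not None and z.value is not None:
            x.set(y.value * z.value)
        elif x.value is not None and y.value not in (None, 0):
            z.set(x.value / y.value)
        elif x.value is not None and z.value not in (None, 0):
            y.set(x.value / z.value)

area, span, chord = Cell("area"), Cell("span"), Cell("chord")
Product(area, span, chord)
area.set(12.0); span.set(8.0)
print(chord.value)  # 1.5 -- computed "backwards" from area and span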
Current experiments in elementary particle physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohl, C.G.; Armstrong, F.E.; Trippe, T.G.
1989-09-01
This report contains summaries of 736 current and recent experiments in elementary particle physics (experiments that finished taking data before 1982 are excluded). Included are experiments at Brookhaven, CERN, CESR, DESY, Fermilab, Tokyo Institute of Nuclear Studies, Moscow Institute of Theoretical and Experimental Physics, Joint Institute for Nuclear Research (Dubna), KEK, LAMPF, Novosibirsk, PSI/SIN, Saclay, Serpukhov, SLAC, and TRIUMF, and also several underground experiments. Also given are instructions for searching online the computer database (maintained under the SLAC/SPIRES system) that contains the summaries. Properties of the fixed-target beams at most of the laboratories are summarized.
Providing care for America's Army.
Webb, Joseph G; von Gonten, Ann Sue; Luciano, W John
2003-01-01
The Army Dental Corps' three-part mission is to maintain soldiers fit for combat, promote health, and ensure the Dental Corps' ability to deploy and deliver care in the field. Consistent with this mission, the corps is developing innovative dental delivery systems and promoting tobacco cessation, sealants, mouth guard use, cancer detection, and identification of child, elder, and other abuse. The corps' training programs include options and benefits at the dental student, postdoctoral residency, and specialty levels. Recent technology innovations include lightweight field equipment, an integrated computer database to manage treatment, rapid ordering and delivery of supplies, and distance education.
47 CFR 15.715 - TV bands database administrator.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database that...
75 FR 71083 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-22
... (SMART) database is maintained at the Naval Education and Training Professional Development Technology... 20684-0010. The Data Housing and Reports Tool (DHART) database is maintained for the Commandant of the... student has performed below the minimum requirements, copies of the minutes of the Academic Review Board...
The role of digital cartographic data in the geosciences
Guptill, S.C.
1983-01-01
The increasing demand of the Nation's natural resource developers for the manipulation, analysis, and display of large quantities of earth-science data has necessitated the use of computers and the building of geoscience information systems. These systems require, in digital form, the spatial data on map products. The basic cartographic data shown on quadrangle maps provide a foundation for the addition of geological and geophysical data. If geoscience information systems are to realize their full potential, large amounts of digital cartographic base data must be available. A major goal of the U.S. Geological Survey is to create, maintain, manage, and distribute a national cartographic and geographic digital database. This unified database will contain numerous categories (hydrography, hypsography, land use, etc.) that, through the use of standardized data-element definitions and formats, can be used easily and flexibly to prepare cartographic products and perform geoscience analysis. © 1983.
Design document for the Surface Currents Data Base (SCDB) Management System (SCDBMS), version 1.0
NASA Technical Reports Server (NTRS)
Krisnnamagaru, Ramesh; Cesario, Cheryl; Foster, M. S.; Das, Vishnumohan
1994-01-01
The Surface Currents Database Management System (SCDBMS) provides access to the Surface Currents Data Base (SCDB), which is maintained by the Naval Oceanographic Office (NAVOCEANO). The SCDBMS incorporates database technology to provide seamless access to surface current data. The SCDBMS is an interactive software application with a graphical user interface (GUI) that supports user control of SCDBMS functional capabilities. The purpose of this document is to define and describe the structural framework and logical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as the SCDBMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by NAVOCEANO and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).
NASA Technical Reports Server (NTRS)
Marshall, Jospeh R.; Morris, Allan T.
2007-01-01
Since 2003, AIAA's Computer Systems and Software Systems Technical Committees (TCs) have developed a database that aids technical committee management in mapping technical topics to their members. This Topics/Interest (T/I) database grew out of a collection of charts and spreadsheets maintained by the TCs. Since its inception, the tool has evolved into a multi-dimensional database whose dimensions include the importance, interest and expertise of TC members and whether or not a member and/or a TC is actively involved with the topic. In 2005, the database was expanded to include the TCs in AIAA's Information Systems Group and then expanded further to include all AIAA TCs. It was field tested at an AIAA Technical Activities Committee (TAC) Workshop in early 2006 through live access by over 80 users. Through the use of the topics database, TC and program committee (PC) members can accomplish relevant tasks such as identifying topic experts (for Aerospace America articles or external contacts), determining the interests of members, identifying overlapping topics between diverse TCs and PCs, guiding new member drives, and revealing emerging topics. This paper describes the origins, inception, initial development, field test and current version of the tool, and elucidates the benefits and insights gained by using the database to aid the management of various TC functions. Suggestions are provided to guide future development of the database for the purpose of providing dynamic and system-level benefits to AIAA that currently do not exist in any technical organization.
WebCIS: large scale deployment of a Web-based clinical information system.
Hripcsak, G; Cimino, J J; Sengupta, S
1999-01-01
WebCIS is a Web-based clinical information system. It sits atop the existing Columbia University clinical information system architecture, which includes a clinical repository, the Medical Entities Dictionary, an HL7 interface engine, and an Arden Syntax based clinical event monitor. WebCIS security features include authentication with secure tokens, authorization maintained in an LDAP server, SSL encryption, permanent audit logs, and application timeouts. WebCIS is currently used by 810 physicians at the Columbia-Presbyterian center of New York Presbyterian Healthcare to review and enter data into the electronic medical record. Current deployment challenges include maintaining adequate database performance despite complex queries, replacing large numbers of computers that cannot run modern Web browsers, and training users who have never logged onto the Web. Although the raised expectations and higher goals have increased deployment costs, the end result is a far more functional, far more available system.
Development of a forestry government agency enterprise GIS system: a disconnected editing approach
NASA Astrophysics Data System (ADS)
Zhu, Jin; Barber, Brad L.
2008-10-01
The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This enterprise GIS was designed to support multiple concurrent users accessing shared information resources. A disconnected editing approach was adopted to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters, since most field offices operate with commercially provided Internet service. The system maintains a personal geodatabase on each local field office computer. Spatial data from the field are periodically uploaded into a central master geodatabase stored in a Microsoft SQL Server at TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work offline when editing data and requires connecting to the central geodatabase only when needed.
A browsing tool for the Internet Logical Library of the HPCC Software Exchange
NASA Technical Reports Server (NTRS)
Biro, Ross
1993-01-01
As the quantity of information available on the Internet grows, locating a particular piece of information becomes more difficult. One possible solution is for a database of pointers to all available information to be maintained at a central site. Subject classifications for all the information could also be maintained in order to make searching possible. This paper describes one possible method of searching such an index. In particular a prototype browsing tool has been created using TCL/TK to demonstrate several possible features: rapidly scanning at any rank of the index, narrowing the index to any scope, regular-expression searching, and creation of a list of pointers answering to any set of index terms. The prototype browser is an easy-to-use independent X application designed for use in the Catalog of Repositories of the HPCC (High Performance Computing and Communications) Software Exchange.
Sources of Cryogenic Data and Information
NASA Astrophysics Data System (ADS)
Mohling, R. A.; Hufferd, W. L.; Marquardt, E. D.
It is commonly known that cryogenic data, technology, and information are applied across many military, National Aeronautics and Space Administration (NASA), and civilian product lines. Before 1950, however, there was no centralized US source of cryogenic technology data. The Cryogenic Data Center of the National Bureau of Standards (NBS) maintained a database of cryogenic technical documents that served the national need well from the mid 1950s to the early 1980s. The database, maintained on a mainframe computer, was a highly specific bibliography of cryogenic literature and thermophysical properties that covered over 100 years of data. In 1983, however, the Cryogenic Data Center was discontinued when NBS's mission and scope were redefined. In 1998, NASA contracted with the Chemical Propulsion Information Agency (CPIA) and Technology Applications, Inc. (TAI) to reconstitute and update Cryogenic Data Center information and establish a self-sufficient entity to provide technical services for the cryogenic community. The Cryogenic Information Center (CIC) provided this service until 2004, when it was discontinued due to a lack of market interest. The CIC technical assets were distributed to NASA Marshall Space Flight Center and the National Institute of Standards and Technology. Plans are under way in 2006 for CPIA to launch an e-commerce cryogenic website to offer bibliography data with capability to download cryogenic documents.
Update on terrestrial ecological classification in the highlands of West Virginia
James P. Vanderhorst
2010-01-01
The West Virginia Natural Heritage Program (WVNHP) maintains databases on the biological diversity of the state, including species and natural communities, to help focus conservation efforts by agencies and organizations. Information on terrestrial communities (also called vegetation, or habitat, depending on user or audience focus) is maintained in two databases. The...
Staradmin -- Starlink User Database Maintainer
NASA Astrophysics Data System (ADS)
Fish, Adrian
The subject of this SSN is a utility called STARADMIN. This utility allows the system administrator to build and maintain a Starlink User Database (UDB). The principal source of information for each user is a text file, named after their username. The content of each file is a list consisting of one keyword followed by the relevant user data per line. These user database files reside in a single directory. The STARADMIN program is used to manipulate these user data files and automatically generate user summary lists.
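A minimal Python sketch of reading a user database laid out as described above: one text file per username, each line holding a keyword followed by that user's data. The keyword shown in the comment is hypothetical; the SSN defines the actual set.

from pathlib import Path

# Sketch of loading per-user keyword files from a UDB directory; assumes the
# directory contains only user files, one per username.
def load_udb(udb_dir: str) -> dict[str, dict[str, str]]:
    users = {}
    for user_file in Path(udb_dir).iterdir():
        record = {}
        for line in user_file.read_text().splitlines():
            if not line.strip():
                continue
            keyword, _, data = line.partition(" ")  # keyword, then user data
            record[keyword] = data.strip()
        users[user_file.name] = record
    return users

# e.g. a file "ajf" containing the line "NAME Adrian Fish" (hypothetical
# keyword) would yield {"ajf": {"NAME": "Adrian Fish"}}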
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates creation, maintenance and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, permitting real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e., petrologic or geochemical intuition). Once formulated, ThermoFit facilitates deployment of data/model collections by automated creation of web services. Users consume these services via web, Excel, or desktop clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype system has been coded for Macintosh computers and utilized to construct thermochemical models for H2O-CO2 mixed-fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.
47 CFR 68.610 - Database of terminal equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...
47 CFR 68.610 - Database of terminal equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...
47 CFR 68.610 - Database of terminal equipment.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...
47 CFR 68.610 - Database of terminal equipment.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...
47 CFR 68.610 - Database of terminal equipment.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...
Computational Chemistry Comparison and Benchmark Database
National Institute of Standards and Technology Data Gateway
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
O'Neill, M A; Hilgetag, C C
2001-08-29
Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
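As a hedged illustration of the stochastic-optimization idea behind CANTOR, the Python sketch below rearranges objects by simulated annealing to minimize a user-supplied cost function. The cost function and parameters are placeholders, not CANTOR's.

import math
import random

# Rearrange objects to minimize a user-defined cost via simulated annealing
# over permutations; purely illustrative of the optimization idea.
def anneal(objects, cost, steps=10_000, t0=1.0, cooling=0.999):
    arrangement = list(objects)
    best = arrangement[:]
    temperature = t0
    for _ in range(steps):
        i, j = random.sample(range(len(arrangement)), 2)
        candidate = arrangement[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(arrangement)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            arrangement = candidate
            if cost(arrangement) < cost(best):
                best = arrangement[:]
        temperature *= cooling
    return best

# Example: a placeholder "disorder" cost; annealing drives it toward sorted order.
cost = lambda arr: sum(abs(v - i) for i, v in enumerate(arr))
print(anneal([4, 2, 0, 3, 1], cost))  # likely [0, 1, 2, 3, 4]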
Imaged Document Optical Correlation and Conversion System (IDOCCS)
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-03-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.
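A digital analogue of the optical correlation underlying IDOCCS keyword spotting can be sketched in a few lines of Python: slide a word-image template over a page image and flag strong correlation peaks. The real system performs the correlation optically; this numerical version is illustrative only.

import numpy as np
from scipy.signal import fftconvolve

# Flag locations where a word-image template correlates strongly with a page.
def correlation_peaks(page: np.ndarray, template: np.ndarray, threshold: float):
    page = page - page.mean()
    template = template - template.mean()
    # Cross-correlation = convolution with the template flipped in both axes.
    score = fftconvolve(page, template[::-1, ::-1], mode="same")
    peak = np.abs(score).max()
    if peak > 0:
        score = score / peak  # normalize so threshold is in [0, 1]
    return np.argwhere(score > threshold)  # (row, col) of candidate hits

# e.g. hits = correlation_peaks(page_img, word_img, threshold=0.8)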
Optimal Embedding for Shape Indexing in Medical Image Databases
Qian, Xiaoning; Tagare, Hemant D.; Fulbright, Robert K.; Long, Rodney; Antani, Sameer
2010-01-01
This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine x-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape-similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding to the performance of indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding. PMID:20163981
Optimal embedding for shape indexing in medical image databases.
Qian, Xiaoning; Tagare, Hemant D; Fulbright, Robert K; Long, Rodney; Antani, Sameer
2010-06-01
This paper addresses the problem of indexing shapes in medical image databases. Shapes of organs are often indicative of disease, making shape similarity queries important in medical image databases. Mathematically, shapes with landmarks belong to shape spaces which are curved manifolds with a well defined metric. The challenge in shape indexing is to index data in such curved spaces. One natural indexing scheme is to use metric trees, but metric trees are prone to inefficiency. This paper proposes a more efficient alternative. We show that it is possible to optimally embed finite sets of shapes in shape space into a Euclidean space. After embedding, classical coordinate-based trees can be used for efficient shape retrieval. The embedding proposed in the paper is optimal in the sense that it least distorts the partial Procrustes shape distance. The proposed indexing technique is used to retrieve images by vertebral shape from the NHANES II database of cervical and lumbar spine X-ray images maintained at the National Library of Medicine. Vertebral shape strongly correlates with the presence of osteophytes, and shape similarity retrieval is proposed as a tool for retrieval by osteophyte presence and severity. Experimental results included in the paper evaluate (1) the usefulness of shape similarity as a proxy for osteophytes, (2) the computational and disk access efficiency of the new indexing scheme, (3) the relative performance of indexing with embedding to the performance of indexing without embedding, and (4) the computational cost of indexing using the proposed embedding versus the cost of an alternate embedding. The experimental results clearly show the relevance of shape indexing and the advantage of using the proposed embedding. Copyright (c) 2010 Elsevier B.V. All rights reserved.
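The core idea above — embed shapes so that ordinary coordinate trees apply — can be illustrated with classical multidimensional scaling over a pairwise distance matrix followed by a KD-tree; the sketch below is a generic stand-in under that assumption, not the paper's optimal embedding of the partial Procrustes distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def classical_mds(d, k):
    """Embed n points into R^k from an n x n matrix of pairwise distances d,
    least-distorting the squared distances (classical MDS)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]           # keep the top-k eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# dists: symmetric matrix of pairwise shape distances (toy data here).
dists = np.random.rand(50, 50)
dists = (dists + dists.T) / 2
np.fill_diagonal(dists, 0)

coords = classical_mds(dists, k=5)
tree = cKDTree(coords)                           # coordinate-based index
distances, indexes = tree.query(coords[0], k=3)  # 3 nearest shapes to shape 0
```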
You, Leiming; Wu, Jiexin; Feng, Yuchao; Fu, Yonggui; Guo, Yanan; Long, Liyuan; Zhang, Hui; Luan, Yijie; Tian, Peng; Chen, Liangfu; Huang, Guangrui; Huang, Shengfeng; Li, Yuxin; Li, Jie; Chen, Chengyong; Zhang, Yaqing; Chen, Shangwu; Xu, Anlong
2015-01-01
Increasing numbers of genes have been shown to utilize alternative polyadenylation (APA) 3′-processing sites depending on the cell and tissue type and/or physiological and pathological conditions at the time of processing, and the construction of a genome-wide APA database is urgently needed for a better understanding of poly(A) site selection and APA-directed gene expression regulation in a given biological context. Here we present a web-accessible database, named APASdb (http://mosas.sysu.edu.cn/utr), which can visualize the precise map and usage quantification of different APA isoforms for all genes. The datasets are deeply profiled by the sequencing of alternative polyadenylation sites (SAPAS) method, which is capable of high-throughput sequencing of the 3′-ends of polyadenylated transcripts. Thus, APASdb details all the heterogeneous cleavage sites downstream of poly(A) signals, and maintains near-complete coverage of APA sites, much better than the previous databases built with conventional methods. Furthermore, APASdb provides the quantification of a given APA variant among transcripts with different APA sites by computing their corresponding normalized reads, making the database more useful. In addition, APASdb supports URL-based retrieval, browsing and display of exon-intron structure, poly(A) signals, poly(A) site locations and usage reads, and 3′-untranslated regions (3′-UTRs). Currently, APASdb covers APA in various biological processes and diseases in human, mouse and zebrafish. PMID:25378337
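Usage quantification of an APA isoform reduces, in essence, to normalizing each poly(A) site's 3′-end read count by the gene's total; a tiny illustration of that bookkeeping (the numbers are invented, and APASdb's actual normalization may differ):

```python
def apa_usage(site_reads):
    """site_reads: dict mapping poly(A) site -> raw 3'-end read count
    for one gene; returns each site's usage fraction."""
    total = sum(site_reads.values())
    return {site: reads / total for site, reads in site_reads.items()}

print(apa_usage({"proximal": 120, "distal": 280}))
# -> {'proximal': 0.3, 'distal': 0.7}
```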
A Systems Model for Power Technology Assessment
NASA Technical Reports Server (NTRS)
Hoffman, David J.
2002-01-01
A computer model is under continuing development at NASA Glenn Research Center that enables first-order assessments of space power technology. The model, an evolution of NASA Glenn's Array Design Assessment Model (ADAM), is an Excel workbook that consists of numerous spreadsheets containing power technology performance data and sizing algorithms. Underlying the model are a number of databases that contain default values for various power generation, energy storage, and power management and distribution component parameters. These databases are actively maintained by a team of systems analysts so that they contain state-of-the-art data as well as the most recent technology performance projections. Sizing of the power subsystems can be accomplished either by using an assumed specific power (W/kg) or specific energy (Wh/kg), or by a bottom-up calculation that accounts for individual component performance and masses. The power generation, energy storage, and power management and distribution subsystems are sized for given mission requirements for a baseline case and up to three alternatives. This allows four different power systems to be sized and compared using consistent assumptions and sizing algorithms. The component sizing models contained in the workbook are modular so that they can be easily maintained and updated. All significant input values have default values loaded from the databases that can be overwritten by the user. The default data and sizing algorithms for each of the power subsystems are described in some detail. The user interface and workbook navigational features are also discussed. Finally, an example study case that illustrates the model's capability is presented.
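The specific-performance sizing described above is, at first order, simple division of required power or energy by a technology figure of merit; a minimal sketch of that arithmetic (all default values are hypothetical, not ADAM's database entries):

```python
def size_power_system(load_w, eclipse_h, specific_power_w_per_kg=80.0,
                      specific_energy_wh_per_kg=100.0, depth_of_discharge=0.6):
    """First-order mass estimate for a solar array plus battery subsystem."""
    array_mass = load_w / specific_power_w_per_kg             # kg, generation
    battery_energy = load_w * eclipse_h / depth_of_discharge  # Wh, storage need
    battery_mass = battery_energy / specific_energy_wh_per_kg
    return array_mass, battery_mass

array_kg, battery_kg = size_power_system(load_w=2000, eclipse_h=0.6)
print(f"array ~{array_kg:.1f} kg, battery ~{battery_kg:.1f} kg")
```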
Aerodynamic Performance of an Active Flow Control Configuration Using Unstructured-Grid RANS
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Viken, Sally A.
2001-01-01
This research is focused on assessing the value of the Reynolds-Averaged Navier-Stokes (RANS) methodology for active flow control applications. An experimental flow control database exists for a TAU0015 airfoil, which is a modification of a NACA0015 airfoil. The airfoil has discontinuities at the leading edge, due to the implementation of a fluidic actuator, and aft of mid chord on the upper surface. This paper documents two- and three-dimensional computational results for the baseline wing configuration (no control) and compares them with the experimental results. The two-dimensional results suggest that the mid-chord discontinuity does not affect the aerodynamics of the wing and can be ignored for more efficient computations. The leading-edge discontinuity significantly affects the lift and drag; hence, the integrity of the leading-edge notch discontinuity must be maintained in the computations to achieve a good match with the experimental data. The three-dimensional integrated performance results are in good agreement with the experiments in spite of some convergence and grid-resolution issues.
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist for LAN-based applications. A wireless distributed computing environment (KeyWare™) based on intelligent agents within a multiple-client, multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline nodes facilitates seamless extensions of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to the supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
An Algorithm for Building an Electronic Database.
Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P
2016-01-01
We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to be completed after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.
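The drop-down-menu fields described above map naturally onto constrained columns in any relational store; below is a minimal stand-in using Python's built-in sqlite3 (the authors used Microsoft Access, and all field names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect("outcomes.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS cases (
    case_id        INTEGER PRIMARY KEY,
    surgeon        TEXT NOT NULL,
    procedure_type TEXT NOT NULL CHECK (procedure_type IN ('A', 'B', 'C')),
    asa_class      INTEGER CHECK (asa_class BETWEEN 1 AND 5),
    complication   TEXT DEFAULT 'none'
);
""")
# Prospective entry after each case (values illustrative):
conn.execute(
    "INSERT INTO cases (surgeon, procedure_type, asa_class) VALUES (?, ?, ?)",
    ("Smith", "A", 2),
)
conn.commit()
# Retrospective analysis of the prospectively collected data:
for row in conn.execute(
    "SELECT procedure_type, COUNT(*) FROM cases GROUP BY procedure_type"
):
    print(row)
```

The CHECK constraints play the role of the drop-down menus: only whitelisted values can enter a field, which keeps downstream analysis clean.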
78 FR 60861 - Native American Tribal Insignia Database
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... Database ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and Trademark... the report was that the USPTO create and maintain an accurate and comprehensive database containing... this recommendation, the Senate Committee on Appropriations directed the USPTO to create this database...
Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A
2012-04-01
A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid: a set of parsers written for different levels of data types and data sets, created by the parser loader after loading the parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
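The parsing pyramid amounts to dispatching raw output files to format-specific parser engines that all emit uniform metadata records; a hedged sketch of that dispatch pattern follows (the two parsers, their regular expressions, and the detection rule are invented for illustration, not CCDBT code):

```python
import re

def parse_gaussian(text):
    # Hypothetical parser for one package's output format.
    m = re.search(r"SCF Done:\s+E\(\S+\)\s+=\s+(-?\d+\.\d+)", text)
    return {"package": "gaussian", "energy": float(m.group(1)) if m else None}

def parse_nwchem(text):
    m = re.search(r"Total \S+ energy\s*=\s*(-?\d+\.\d+)", text)
    return {"package": "nwchem", "energy": float(m.group(1)) if m else None}

# The "parser loader": a registry mapping detected formats to parser engines.
PARSERS = {"gaussian": parse_gaussian, "nwchem": parse_nwchem}

def extract_metadata(path):
    """Detect the producing package and emit one uniform metadata record."""
    with open(path, errors="replace") as handle:
        text = handle.read()
    for name, parser in PARSERS.items():
        if name in text.lower():
            return parser(text)
    return {"package": "unknown", "energy": None}
```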
Implementation of a computer database testing and analysis program.
Rouse, Deborah P
2007-01-01
The author is the coordinator of a computer database testing and analysis program implemented in an associate degree nursing program. Computer database programs help support the test development and analysis process. Critical thinking is measurable and promoted with their use. The reader of this article will learn what is involved in procuring and implementing a computer database testing and analysis program in an academic nursing program. The use of the computerized database for testing and analysis will be approached as a method to promote and evaluate the nursing student's critical thinking skills and to prepare the nursing student for the National Council Licensure Examination.
US Gateway to SIMBAD Astronomical Database
NASA Technical Reports Server (NTRS)
Eichhorn, G.; Oliversen, R. (Technical Monitor)
1999-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. Currently there are over 3400 US users registered. We also provide user support by answering questions from users and handling requests for lost passwords when still necessary. We have implemented in cooperation with the CDS SIMBAD project access to the SIMBAD database for US users on an Internet address basis. This allows most US users to access SIMBAD without having to enter passwords. We have maintained the mirror copy of the SIMBAD database on a server at SAO. This has allowed much faster access for the US users. We also supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We shipped computer equipment to the meeting and provided support for the demonstration activities at the SIMBAD booth. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative called Urania. It has proven to be a model in how different data centers can collaborate and enhance the value of their products by linking with other data centers.
An image database management system for conducting CAD research
NASA Astrophysics Data System (ADS)
Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.
2007-03-01
The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
Requirements, Verification, and Compliance (RVC) Database Tool
NASA Technical Reports Server (NTRS)
Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale
2001-01-01
This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".
Site Partitioning for Redundant Arrays of Distributed Disks
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1996-01-01
Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
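Because the partitioning problem is NP-hard, practical solutions are heuristic; the sketch below shows one plausible greedy heuristic — not necessarily one of the paper's algorithms — that grows each array from a seed site by repeatedly adding the site cheapest to reach from the current members.

```python
def greedy_partition(cost, array_size):
    """Partition site indices into arrays of a fixed size, greedily keeping
    the pairwise communication cost within each array low.

    cost: symmetric matrix, cost[i][j] = communication cost between sites i, j
    """
    n = len(cost)
    unassigned = set(range(n))
    arrays = []
    while unassigned:
        # Seed with the site most central (cheapest on average) among the rest.
        seed = min(unassigned, key=lambda i: sum(cost[i][j] for j in unassigned))
        group = [seed]
        unassigned.remove(seed)
        while len(group) < array_size and unassigned:
            # Add the site cheapest to reach from the current group members.
            nxt = min(unassigned, key=lambda i: sum(cost[i][g] for g in group))
            group.append(nxt)
            unassigned.remove(nxt)
        arrays.append(group)
    return arrays

cost = [[0, 1, 9, 9], [1, 0, 9, 9], [9, 9, 0, 2], [9, 9, 2, 0]]
print(greedy_partition(cost, array_size=2))  # -> [[0, 1], [2, 3]]
```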
Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree
NASA Astrophysics Data System (ADS)
Kim, Jong Kyu; Kim, Nam Soo
In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve much better mode-selection accuracy than the open-loop mode selection module in AMR-WB+.
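The scheme trains a pruned decision tree to imitate closed-loop decisions, then replaces the expensive closed-loop search with a single tree traversal per frame; a minimal scikit-learn sketch under that reading (features and labels are synthetic stand-ins for the coder's internal signals):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 8))    # stand-in per-frame features
labels = rng.integers(0, 2, size=5000)   # closed-loop mode decisions as labels

# Cost-complexity pruning (ccp_alpha) bounds the tree size, and therefore
# the memory footprint, at a small cost in training-set accuracy.
tree = DecisionTreeClassifier(ccp_alpha=1e-3).fit(features, labels)
print("leaves:", tree.get_n_leaves())

def select_mode(frame_features):
    """Open-loop-style mode selection: one tree traversal per frame."""
    return int(tree.predict(frame_features.reshape(1, -1))[0])
```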
A Dynamic Approach to Make CDS/ISIS Databases Interoperable over the Internet Using the OAI Protocol
ERIC Educational Resources Information Center
Jayakanth, F.; Maly, K.; Zubair, M.; Aswath, L.
2006-01-01
Purpose: A dynamic approach to making legacy databases, like CDS/ISIS, interoperable with OAI-compliant digital libraries (DLs). Design/methodology/approach: There are many bibliographic databases that are being maintained using legacy database systems. CDS/ISIS is one such legacy database system. It was designed and developed specifically for…
NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project
About the LCI Database Project: The U.S. Life Cycle Inventory (LCI) Database is a publicly available database with consistent and transparent data collection and analysis methods, intended to help practitioners find consistent and transparent LCI data for life cycle assessments; NREL and its partners develop and maintain the database. A U.S. Life Cycle Inventory (LCI) Data Stakeholder meeting was held in 2009.
Guidelines for establishing and maintaining construction quality databases : tech brief.
DOT National Transportation Integrated Search
2006-12-01
Construction quality databases contain a variety of construction-related data that characterize the quality of materials and workmanship. The primary purpose of construction quality databases is to help State highway agencies (SHAs) assess the qualit...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 6 Domestic Security 1 2012-01-01 2012-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2012 CFR
2012-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 6 Domestic Security 1 2010-01-01 2010-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2010 CFR
2010-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 6 Domestic Security 1 2014-01-01 2014-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2013 CFR
2013-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 6 Domestic Security 1 2013-01-01 2013-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 6 Domestic Security 1 2011-01-01 2011-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
Distributed data mining on grids: services, tools, and applications.
Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo
2004-12-01
Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.
Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim
2009-01-01
This paper presents current progress in the development of a semantic data integration environment that is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of its experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.
NASA Astrophysics Data System (ADS)
Luminari, Nicola; Airiau, Christophe; Bottaro, Alessandro
2017-11-01
In the description of the homogenized flow through a porous medium saturated by a fluid, the apparent permeability tensor is one of the most important parameters to evaluate. In this work we compute numerically the apparent permeability tensor for a 3D porous medium composed of rigid cylinders, using the VANS (Volume-Averaged Navier-Stokes) theory. Such a tensor varies with the Reynolds number, the mean pressure gradient orientation and the porosity. A database is created by exploring the space of the above parameters. Including the two Euler angles that define the mean pressure gradient is extremely important to properly capture possible 3D effects. Based on the database, a kriging interpolation metamodel is used to obtain an estimate of all the tensor components for any input parameters. Preliminary results for the flow in a porous channel, based on the metamodel and the VANS closure, are shown; the use of such a reduced-order model together with a numerical code based on the equations at the macroscopic scale keeps the computational times within reasonable levels. The authors acknowledge the IDEX Foundation of the University of Toulouse for the financial support granted to the last author under the project Attractivity Chairs.
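Kriging is Gaussian-process interpolation over the sampled database; a hedged sketch with scikit-learn's GaussianProcessRegressor standing in for the metamodel (the input dimensions, kernel, and synthetic data are placeholders, not the authors' setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
# Toy database: rows = (Reynolds number, porosity, Euler angle 1, Euler angle 2),
# normalized to [0, 1]; the response stands in for one permeability component.
X = rng.uniform(0, 1, size=(200, 4))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.1 * X[:, 2]

model = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
k_hat, k_std = model.predict(rng.uniform(0, 1, size=(1, 4)), return_std=True)
# The macroscopic VANS solver can now query k_hat (with uncertainty k_std)
# instead of re-solving the microscale flow at every parameter point.
```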
Drowning in Data: Sorting through CD ROM and Computer Databases.
ERIC Educational Resources Information Center
Cates, Carl M.; Kaye, Barbara K.
This paper identifies the bibliographic and numeric databases on CD-ROM and computer diskette that should be most useful for investigators in communication, marketing, and communication education. Bibliographic databases are usually found in three formats: citations only, citations and abstracts, and full-text articles. Numeric databases are…
Code of Federal Regulations, 2013 CFR
2013-10-01
... providers as necessary to maintain the viability of the PAS system. 5. Maintain a database for PAS related... NSEP PAS database only to those having a need-to-know or who will not use the information for economic... selected for this priority should be responsible for ensuring the viability or reconstruction of the basic...
Code of Federal Regulations, 2011 CFR
2011-10-01
... providers as necessary to maintain the viability of the PAS system. 5. Maintain a database for PAS related... NSEP PAS database only to those having a need-to-know or who will not use the information for economic... selected for this priority should be responsible for ensuring the viability or reconstruction of the basic...
Code of Federal Regulations, 2014 CFR
2014-10-01
... providers as necessary to maintain the viability of the PAS system. 5. Maintain a database for PAS related... NSEP PAS database only to those having a need-to-know or who will not use the information for economic... selected for this priority should be responsible for ensuring the viability or reconstruction of the basic...
Code of Federal Regulations, 2012 CFR
2012-10-01
... providers as necessary to maintain the viability of the PAS system. 5. Maintain a database for PAS related... NSEP PAS database only to those having a need-to-know or who will not use the information for economic... selected for this priority should be responsible for ensuring the viability or reconstruction of the basic...
Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A
2018-01-01
Abstract SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the PathoLogic component of the Pathway Tools software using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has been recently extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N. tabacum has been carried out, which resulted in the elimination of 156 pathways from the 569 pathways predicted by Pathway Tools. Together, in-depth curation of the predicted pathway network and supplementation with curated data from taxon-specific databases have substantially improved the curation status of the species-specific N. tabacum PGDB. The implementation of this strategy will significantly advance the curation status of all organism-specific databases in SolCyc, resulting in improved database accuracy, data analysis and visualization of biochemical networks in those species. Database URL https://solgenomics.net/tools/solcyc/ PMID:29762652
NASA Astrophysics Data System (ADS)
Wang, Lusheng; Yang, Yong; Lin, Guohui
Finding the closest object for a query in a database is a classical problem in computer science. For some modern biological applications, computing the similarity between two objects can be very time-consuming. For example, it takes a long time to compute the edit distance between two whole chromosomes or the alignment cost of two 3D protein structures. In this paper, we study the nearest neighbor search problem in metric space, where the pair-wise distance between two objects in the database is known and we want to minimize the number of distances computed on-line between the query and objects in the database in order to find the closest object. We have designed two randomized approaches for indexing metric-space databases, where objects are purely described by their distances to each other. Analysis and experiments show that our approaches need to compute distances to only O(log n) objects in order to find the closest object, where n is the total number of objects in the database.
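A standard way to realize "few on-line distances" is pivot-based filtering with the triangle inequality: precompute object-to-pivot distances offline, then lower-bound each query-object distance and skip objects that cannot win. The sketch below illustrates that generic idea, not the paper's two randomized indexes.

```python
import random

def build_pivot_index(n, dist, num_pivots=8, seed=0):
    """Offline: pick random pivot indices and precompute every object's
    distance to each pivot (the only O(n) distance work, done once)."""
    rng = random.Random(seed)
    pivots = rng.sample(range(n), num_pivots)
    table = [[dist(i, p) for p in pivots] for i in range(n)]
    return pivots, table

def nearest(query_dist, n, pivots, table):
    """Online: query_dist(i) computes the expensive distance from the query
    to object i. Pivot distances plus the triangle inequality prune most calls."""
    q_to_p = [query_dist(p) for p in pivots]   # the only mandatory distances
    best, best_d = None, float("inf")
    for i in range(n):
        # Triangle inequality: d(q, i) >= |d(q, p) - d(i, p)| for every pivot p.
        lower = max(abs(qp - ip) for qp, ip in zip(q_to_p, table[i]))
        if lower >= best_d:
            continue                           # pruned: cannot beat current best
        d = query_dist(i)                      # expensive distance, computed rarely
        if d < best_d:
            best, best_d = i, d
    return best, best_d
```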
Data entry module and manuals for the Land Treatment Digital Library
Welty, Justin L.; Pilliod, David S.
2013-01-01
Across the country, public land managers make decisions each year that influence landscapes and ecosystems within their jurisdictions. Many of these decisions involve vegetation manipulations, which often are referred to as land treatments. These treatments include removal or alteration of plant biomass, seeding of burned areas, application of herbicides, and other activities. Data documenting these land treatments usually are stored at local management offices in various formats. Therefore, anyone interested in the types and effects of land treatments across multiple jurisdictions must first assemble the information, which can be difficult if data discovery and organization involve multiple local offices. A centralized system for storing and accessing the data helps inform land managers when making policy and management considerations and assists scientists in developing sampling designs and studies. The Land Treatment Digital Library (LTDL) was created by the U.S. Geological Survey (USGS) as a comprehensive database incorporating tabular data, documentation, photographs, and spatial data about land treatments in a single system. It was developed over a period of several years and refined based on feedback from partner agencies and stakeholders. Currently, Bureau of Land Management (BLM) land treatment data are being entered by USGS personnel as part of a memorandum of understanding between the USGS and BLM. The LTDL has a website maintained by the USGS Forest and Rangeland Ecosystem Science Center where LTDL data can be viewed http://ltdl.wr.usgs.gov/. The resources and information provided in this data series allow other agencies, organizations, and individuals to download an empty, stand-alone LTDL database to individual or networked computers. Data entered in these databases may be submitted to the USGS for possible inclusion in the online LTDL. Multiple computer programs are used to accomplish the objective of the LTDL. The support of an information-technology specialist or professionals familiar with Microsoft Access™, ESRI’s ArcGIS™, Python, Adobe Acrobat Professional™, and computer settings is essential when installing and operating the LTDL. After the program is operational, a critical element for successful data entry is an understanding of the difference between database tables and forms, and how to edit data in both formats. Complete instructions accompany the program, and they should be followed carefully to ensure the setup and operation of the database goes smoothly.
Maintaining Multimedia Data in a Geospatial Database
2012-09-01
A comparison of PostgreSQL and MySQL as spatial databases was offered. Each database produced result sets ranging from zero to 100,000 records, and benchmarking data retrieved from tables were used to assess which database excelled under multiple conditions.
The Mouse Genome Database (MGD): facilitating mouse as a model for human biology and disease.
Eppig, Janan T; Blake, Judith A; Bult, Carol J; Kadin, James A; Richardson, Joel E
2015-01-01
The Mouse Genome Database (MGD, http://www.informatics.jax.org) serves the international biomedical research community as the central resource for integrated genomic, genetic and biological data on the laboratory mouse. To facilitate use of mouse as a model in translational studies, MGD maintains a core of high-quality curated data and integrates experimentally and computationally generated data sets. MGD maintains a unified catalog of genes and genome features, including functional RNAs, QTL and phenotypic loci. MGD curates and provides functional and phenotype annotations for mouse genes using the Gene Ontology and Mammalian Phenotype Ontology. MGD integrates phenotype data and associates mouse genotypes to human diseases, providing critical mouse-human relationships and access to repositories holding mouse models. MGD is the authoritative source of nomenclature for genes, genome features, alleles and strains following guidelines of the International Committee on Standardized Genetic Nomenclature for Mice. A new addition to MGD, the Human-Mouse: Disease Connection, allows users to explore gene-phenotype-disease relationships between human and mouse. MGD has also updated search paradigms for phenotypic allele attributes, incorporated incidental mutation data, added a module for display and exploration of genes and microRNA interactions and adopted the JBrowse genome browser. MGD resources are freely available to the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.
2016-12-01
There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increase. Risk models from insurance companies and flood models from hydrologists must be calibrated based on flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, to improve these models, both traditional methods of flood prediction from physically based models and data-driven techniques, such as machine learning, require spatial flood observations to validate model outputs and quantify uncertainty. A key dataset that is missing for flood model validation is a global historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO), which has catalogued 4320 floods (1985-2015) but has mapped only 5% of them. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of-its-kind, comprehensive, global, historical geospatial database of flood events. To do so, we combine water detection algorithms on MODIS and Landsat 5, 7, and 8 imagery in Google Earth Engine to map discrete flood events. The resulting database will be available in the Earth Engine Catalogue for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
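Water detection in Earth Engine is commonly a normalized-difference water index plus a threshold; the sketch below shows that pattern in the Earth Engine Python API (the MODIS asset, band pair, dates, region, and zero threshold are assumptions for illustration, not the authors' algorithm):

```python
import ee
ee.Initialize()

# Surface reflectance around a reported flood event (dates illustrative).
collection = (ee.ImageCollection('MODIS/006/MOD09GA')
              .filterDate('2015-08-01', '2015-08-16'))
composite = collection.median()

# NDWI-style index: (green - NIR) / (green + NIR); positive values suggest water.
ndwi = composite.normalizedDifference(['sur_refl_b04', 'sur_refl_b02'])
water = ndwi.gt(0).selfMask()

# Export the flood-extent mask toward a historical geodatabase layer.
task = ee.batch.Export.image.toDrive(
    image=water, description='flood_extent', scale=500,
    region=ee.Geometry.Rectangle([-17, 13, -13, 17]))
task.start()
```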
75 FR 41180 - Notice of Order: Revisions to Enterprise Public Use Database
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-15
... Database AGENCY: Federal Housing Finance Agency. ACTION: Notice of order. SUMMARY: Section 1323(a)(1) of.... This responsibility to maintain a public use database (PUDB) for such mortgage data was transferred to... purpose of loan data field in these two databases. 4. Single-family Data Field 27 and Multifamily Data...
A Firefly Algorithm-based Approach for Pseudo-Relevance Feedback: Application to Medical Database.
Khennak, Ilyes; Drias, Habiba
2016-11-01
The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries has caused search systems to fail to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is query expansion, whereby the user's original query is augmented with new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded query candidates. Moreover, this new approach allows the length of the expanded query to be determined empirically. Experimental results on MEDLINE, the online medical information database, show that our proposed approach is more effective and efficient compared to the state-of-the-art.
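In the firefly algorithm, dimmer solutions move toward brighter (fitter) ones with a small random perturbation, the attraction decaying with distance; a minimal continuous version follows (the mapping from a position vector to a candidate expanded query, and the retrieval fitness, are left abstract):

```python
import numpy as np

def firefly(fitness, dim, n=20, beta0=1.0, gamma=1.0, alpha=0.1, iters=100, seed=0):
    """Maximize `fitness` over R^dim with the firefly algorithm.

    Each firefly encodes one candidate solution (here: a weighting of
    candidate expansion terms); brightness = fitness value.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n, dim))
    light = np.array([fitness(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] > light[i]:            # move i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attraction decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                    light[i] = fitness(x[i])
    best = int(np.argmax(light))
    return x[best], light[best]

# Toy fitness: prefer vectors near an "ideal" term weighting.
ideal = np.array([0.9, 0.1, 0.5])
solution, score = firefly(lambda v: -np.sum((v - ideal) ** 2), dim=3)
```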
CDD/SPARCLE: functional classification of proteins via subfamily domain architectures.
Marchler-Bauer, Aron; Bo, Yu; Han, Lianyi; He, Jane; Lanczycki, Christopher J; Lu, Shennan; Chitsaz, Farideh; Derbyshire, Myra K; Geer, Renata C; Gonzales, Noreen R; Gwadz, Marc; Hurwitz, David I; Lu, Fu; Marchler, Gabriele H; Song, James S; Thanki, Narmada; Wang, Zhouxi; Yamashita, Roxanne A; Zhang, Dachuan; Zheng, Chanjuan; Geer, Lewis Y; Bryant, Stephen H
2017-01-04
NCBI's Conserved Domain Database (CDD) aims at annotating biomolecular sequences with the location of evolutionarily conserved protein domain footprints, and functional sites inferred from such footprints. An archive of pre-computed domain annotation is maintained for proteins tracked by NCBI's Entrez database, and live search services are offered as well. CDD curation staff supplements a comprehensive collection of protein domain and protein family models, which have been imported from external providers, with representations of selected domain families that are curated in-house and organized into hierarchical classifications of functionally distinct families and sub-families. CDD also supports comparative analyses of protein families via conserved domain architectures, and a recent curation effort focuses on providing functional characterizations of distinct subfamily architectures using SPARCLE: Subfamily Protein Architecture Labeling Engine. CDD can be accessed at https://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
Everett, Kay D.; Conway, Claire; Desany, Gerard J.; Baker, Brian L.; Choi, Gilwoo; Taylor, Charles A.; Edelman, Elazer R.
2016-01-01
Endovascular stents are the mainstay of interventional cardiovascular medicine. Technological advances have reduced biological and clinical complications but not mechanical failure. Stent strut fracture is increasingly recognized as of paramount clinical importance. Though consensus reigns that fractures can result from material fatigue, how fracture is induced and the mechanisms underlying its clinical sequelae remain ill-defined. In this study, strut fractures were identified in the prospectively maintained Food and Drug Administration's (FDA) Manufacturer and User Facility Device Experience Database (MAUDE), covering years 2006–2011, and differentiated based on specific coronary artery implantation site and device configuration. These data, and knowledge of the extent of dynamic arterial deformations obtained from patient CT images and published data, were used to define boundary conditions for 3D finite element models incorporating multimodal, multi-cycle deformation. The structural response for a range of stent designs and configurations was predicted by computational models and included estimation of maximum principal, minimum principal and equivalent plastic strains. Fatigue assessment was performed with Goodman diagrams and safe/unsafe regions defined for different stent designs. Von Mises stress and maximum principal strain increased with multimodal, fully reversed deformation. Spatial maps of unsafe locations corresponded to the identified locations of fracture in different coronary arteries in the clinical database. These findings, for the first time, provide insight into a potential link between patient adverse events and computational modeling of stent deformation. Understanding of the mechanical forces imposed under different implantation conditions may assist in rational design and optimal placement of these devices. PMID:26467552
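A Goodman diagram marks a loading point safe when the alternating stress scaled by the endurance limit plus the mean stress scaled by the ultimate strength stays below one; a minimal check of that criterion (the material constants are placeholders, not properties of any stent alloy):

```python
def goodman_safe(stress_max, stress_min, endurance_limit, ultimate_strength):
    """Return (safety_index, is_safe) under the modified Goodman criterion.

    safety_index = sigma_a / S_e + sigma_m / S_u; values below 1 fall in the
    'safe' region of the Goodman diagram, values above 1 in the 'unsafe' region.
    """
    sigma_a = (stress_max - stress_min) / 2.0   # alternating stress component
    sigma_m = (stress_max + stress_min) / 2.0   # mean stress component
    index = sigma_a / endurance_limit + sigma_m / ultimate_strength
    return index, index < 1.0

# Fully reversed loading maximizes sigma_a, pushing a strut toward 'unsafe':
print(goodman_safe(stress_max=300e6, stress_min=-300e6,
                   endurance_limit=250e6, ultimate_strength=900e6))
# -> (1.2, False)
```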
Imaged document information location and extraction using an optical correlator
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-12-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize these data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
Everett, Kay D; Conway, Claire; Desany, Gerard J; Baker, Brian L; Choi, Gilwoo; Taylor, Charles A; Edelman, Elazer R
2016-02-01
Endovascular stents are the mainstay of interventional cardiovascular medicine. Technological advances have reduced biological and clinical complications but not mechanical failure. Stent strut fracture is increasingly recognized as of paramount clinical importance. Though consensus reigns that fractures can result from material fatigue, how fracture is induced and the mechanisms underlying its clinical sequelae remain ill-defined. In this study, strut fractures were identified in the prospectively maintained Food and Drug Administration's (FDA) Manufacturer and User Facility Device Experience Database (MAUDE), covering years 2006-2011, and differentiated based on specific coronary artery implantation site and device configuration. These data, and knowledge of the extent of dynamic arterial deformations obtained from patient CT images and published data, were used to define boundary conditions for 3D finite element models incorporating multimodal, multi-cycle deformation. The structural response for a range of stent designs and configurations was predicted by computational models and included estimation of maximum principal, minimum principal and equivalent plastic strains. Fatigue assessment was performed with Goodman diagrams and safe/unsafe regions defined for different stent designs. Von Mises stress and maximum principal strain increased with multimodal, fully reversed deformation. Spatial maps of unsafe locations corresponded to the identified locations of fracture in different coronary arteries in the clinical database. These findings, for the first time, provide insight into a potential link between patient adverse events and computational modeling of stent deformation. Understanding of the mechanical forces imposed under different implantation conditions may assist in rational design and optimal placement of these devices.
Bovine Genome Database: supporting community annotation and analysis of the Bos taurus genome
2010-01-01
Background A goal of the Bovine Genome Database (BGD; http://BovineGenome.org) has been to support the Bovine Genome Sequencing and Analysis Consortium (BGSAC) in the annotation and analysis of the bovine genome. We were faced with several challenges, including the need to maintain consistent quality despite diversity in annotation expertise in the research community, the need to maintain consistent data formats, and the need to minimize the potential duplication of annotation effort. With new sequencing technologies allowing many more eukaryotic genomes to be sequenced, the demand for collaborative annotation is likely to increase. Here we present our approach, challenges and solutions facilitating a large distributed annotation project. Results and Discussion BGD has provided annotation tools that supported 147 members of the BGSAC in contributing 3,871 gene models over a fifteen-week period, and these annotations have been integrated into the bovine Official Gene Set. Our approach has been to provide an annotation system, which includes a BLAST site, multiple genome browsers, an annotation portal, and the Apollo Annotation Editor configured to connect directly to our Chado database. In addition to implementing and integrating components of the annotation system, we have performed computational analyses to create gene evidence tracks and a consensus gene set, which can be viewed on individual gene pages at BGD. Conclusions We have provided annotation tools that alleviate challenges associated with distributed annotation. Our system provides a consistent set of data to all annotators and eliminates the need for annotators to format data. Involving the bovine research community in genome annotation has allowed us to leverage expertise in various areas of bovine biology to provide biological insight into the genome sequence. PMID:21092105
Palm-Vein Classification Based on Principal Orientation Features
Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao
2014-01-01
Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live body identification, flexibility, and difficulty to cheat. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining better recognition accuracy, two neighborhood bins of the corresponding bin are continuously searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA and our database by the proposed method for palm-vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
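The speedup comes from hashing each image to one of six principal-direction bins and matching only within the query's bin and its two neighbors; the sketch below mimics that scheme (the Gaussian-Radon step is replaced by a placeholder, and `match`/`threshold` are hypothetical):

```python
import numpy as np

NUM_BINS = 6

def principal_direction(orientation_map):
    """Placeholder for the Gaussian-Radon step: reduce a per-pixel orientation
    map (radians in [0, pi)) to one dominant direction."""
    return float(np.median(orientation_map))

def direction_bin(theta):
    return int(theta / (np.pi / NUM_BINS)) % NUM_BINS

def identify(query, database, match, threshold):
    """database: dict mapping bin index -> list of (label, template).

    Search the query's own bin first, then fall back to its two neighboring
    bins, so only a fraction of the templates is ever matched one-by-one.
    """
    b = direction_bin(principal_direction(query))
    for nb in (b, (b - 1) % NUM_BINS, (b + 1) % NUM_BINS):
        for label, template in database.get(nb, []):
            if match(query, template) >= threshold:
                return label
    return None
```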
The National Nonindigenous Aquatic Species Database
Neilson, Matthew E.; Fuller, Pamela L.
2012-01-01
The U.S. Geological Survey (USGS) Nonindigenous Aquatic Species (NAS) Program maintains a database that monitors, records, and analyzes sightings of nonindigenous aquatic plant and animal species throughout the United States. The program is based at the USGS Wetland and Aquatic Research Center in Gainesville, Florida. The initiative to maintain scientific information on nationwide occurrences of nonindigenous aquatic species began with the Aquatic Nuisance Species Task Force, created by Congress in 1990 to provide timely information to natural resource managers. Since then, the NAS database has been a clearinghouse of information for confirmed sightings of nonindigenous, also known as nonnative, aquatic species throughout the Nation. The database is used to produce email alerts, maps, summary graphs, publications, and other information products to support natural resource managers.
Privacy-preserving search for chemical compound databases.
Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi
2015-01-01
Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
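The protocol's only cryptographic building block is an additive-homomorphic cryptosystem. The toy Paillier implementation below (a textbook construction with insecure demo-sized keys, not the authors' code) shows the one property the protocol depends on: multiplying ciphertexts yields an encryption of the sum of the plaintexts, so a server can aggregate encrypted per-bit fingerprint matches without decrypting anything.

```python
# Toy Paillier cryptosystem (textbook construction; demo-sized primes, NOT
# secure). It illustrates the additive homomorphism the protocol relies on:
# E(a) * E(b) mod n^2 decrypts to a + b.
import math, random

def keygen(p=1_000_003, q=1_000_033):        # small demo primes only
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                     # valid because we pick g = n + 1
    return (n, n + 1), (lam, mu, n)          # public (n, g), private (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(2, n)               # randomizer; fresh per ciphertext
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n        # the Paillier "L" function
    return (L * mu) % n

pub, priv = keygen()
product = (encrypt(pub, 17) * encrypt(pub, 25)) % (pub[0] ** 2)
assert decrypt(priv, product) == 42          # homomorphic addition: 17 + 25
```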
Privacy-preserving search for chemical compound databases
2015-01-01
Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for the new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition performed on encrypted values but is computationally efficient compared with versatile techniques such as general purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times as efficient in communication size compared with general purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, easily scaling for large-scale databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650
NASA Technical Reports Server (NTRS)
Bohnhoff-Hlavacek, Gail
1992-01-01
One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the Filemaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This paper summarizes the contents, capabilities, and use of the LDEF databases and describes how to get copies of the databases for your own research.
Code of Federal Regulations, 2012 CFR
2012-04-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... charged with establishing and maintaining a licensing and registry database for loan originators. (b...
Computer Databases as an Educational Tool in the Basic Sciences.
ERIC Educational Resources Information Center
Friedman, Charles P.; And Others
1990-01-01
The University of North Carolina School of Medicine developed a computer database, INQUIRER, containing scientific information in bacteriology, and then integrated the database into routine educational activities for first-year medical students in their microbiology course. (Author/MLW)
Michel-Sendis, F.; Gauld, I.; Martinez, J. S.; ...
2017-08-02
SFCOMPO-2.0 is the new release of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) database of experimental assay measurements. These measurements are isotopic concentrations from destructive radiochemical analyses of spent nuclear fuel (SNF) samples. We supplement the measurements with design information for the fuel assembly and fuel rod from which each sample was taken, as well as with relevant information on operating conditions and characteristics of the host reactors. These data are necessary for modeling and simulation of the isotopic evolution of the fuel during irradiation. SFCOMPO-2.0 has been developed and is maintained by the OECD NEA under the guidance of the Expert Group on Assay Data of Spent Nuclear Fuel (EGADSNF), which is part of the NEA Working Party on Nuclear Criticality Safety (WPNCS). Significant efforts aimed at establishing a thorough, reliable, publicly available resource for code validation and safety applications have led to the capture and standardization of experimental data from 750 SNF samples from more than 40 reactors. These efforts have resulted in the creation of the SFCOMPO-2.0 database, which is publicly available from the NEA Data Bank. Our paper describes the new database, and applications of SFCOMPO-2.0 for computer code validation, integral nuclear data benchmarking, and uncertainty analysis in nuclear waste package analysis are briefly illustrated.
Sehgal, Manika; Singh, Tiratha Raj
2014-04-01
We present DR-GAS, a unique, consolidated and comprehensive DNA repair genetic association studies database of the human DNA repair system. It presents information on repair genes, assorted mechanisms of DNA repair, linkage disequilibrium, haplotype blocks, nsSNPs, phosphorylation sites, associated diseases, and pathways involved in repair systems. DNA repair is an intricate process which plays an essential role in maintaining the integrity of the genome by eradicating the damaging effects of internal and external changes in the genome. Hence, it is crucial to understand in depth the whole process of DNA repair, the genes involved, the non-synonymous SNPs that may affect function, the phosphorylated residues, and other related genetic parameters. All the corresponding entries for DNA repair genes, such as proteins, OMIM IDs, literature references and pathways, are cross-referenced to their respective primary databases. DNA repair genes and their associated parameters are represented either in tabular or in graphical form through images elucidated by computational and statistical analyses. We believe the database will assist molecular biologists, biotechnologists, therapeutic developers and the wider scientific community in finding biologically meaningful information and in relating genetic-level information to serious diseases of the human DNA repair system. DR-GAS is freely available for academic and research purposes at: http://www.bioinfoindia.org/drgas. Copyright © 2014 Elsevier B.V. All rights reserved.
The Amordad database engine for metagenomics.
Behnam, Ehsan; Smith, Andrew D
2014-10-15
Several technical challenges in metagenomic data analysis, including assembling metagenomic sequence data or identifying operational taxonomic units, are significant and well known. These forms of analysis are increasingly cited as conceptually flawed, given the extreme variation within traditionally defined species and rampant horizontal gene transfer. Furthermore, computational requirements of such analysis have hindered content-based organization of metagenomic data at large scale. In this article, we introduce the Amordad database engine for alignment-free, content-based indexing of metagenomic datasets. Amordad places the metagenome comparison problem in a geometric context, and uses an indexing strategy that combines random hashing with a regular nearest neighbor graph. This framework allows refinement of the database over time by continual application of random hash functions, with the effect of each hash function encoded in the nearest neighbor graph. This eliminates the need to explicitly maintain the hash functions in order for query efficiency to benefit from the accumulated randomness. Results on real and simulated data show that Amordad can support logarithmic query time for identifying similar metagenomes even as the database size reaches into the millions. Source code, licensed under the GNU General Public License (version 3), is freely available for download from http://smithlabresearch.org/amordad. Contact: andrewds@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
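A minimal sketch of the indexing idea follows (assumed details; the actual Amordad engine additionally maintains a nearest neighbor graph that is refined by successive hash functions, which is omitted here): random hyperplane hashing groups similar metagenome feature vectors into buckets, so a query computes exact distances only against a small candidate set rather than the whole database.

```python
# Sketch of random-hyperplane hashing for content-based lookup (illustrative;
# Amordad additionally refines candidates through a nearest neighbor graph).
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, n_planes=12):
    """Similar vectors fall on the same side of most planes, sharing a key."""
    planes = rng.standard_normal((n_planes, dim))
    def key(v):
        bits = (planes @ v) > 0
        return int(sum(int(b) << i for i, b in enumerate(bits)))
    return key

class Index:
    def __init__(self, vectors):          # vectors: {id: feature vector}
        self.vectors = vectors
        self.hash = make_hasher(len(next(iter(vectors.values()))))
        self.buckets = {}
        for vid, v in vectors.items():
            self.buckets.setdefault(self.hash(v), []).append(vid)

    def query(self, q, k=5):
        """Exact distances only on the query's bucket, not the whole database."""
        cands = self.buckets.get(self.hash(q), [])
        return sorted(cands, key=lambda i: np.linalg.norm(self.vectors[i] - q))[:k]
```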
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel-Sendis, F.; Gauld, I.; Martinez, J. S.
SFCOMPO-2.0 is the new release of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) database of experimental assay measurements. These measurements are isotopic concentrations from destructive radiochemical analyses of spent nuclear fuel (SNF) samples. We supplement the measurements with design information for the fuel assembly and fuel rod from which each sample was taken, as well as with relevant information on operating conditions and characteristics of the host reactors. These data are necessary for modeling and simulation of the isotopic evolution of the fuel during irradiation. SFCOMPO-2.0 has been developed and is maintained by the OECD NEA under the guidance of the Expert Group on Assay Data of Spent Nuclear Fuel (EGADSNF), which is part of the NEA Working Party on Nuclear Criticality Safety (WPNCS). Significant efforts aimed at establishing a thorough, reliable, publicly available resource for code validation and safety applications have led to the capture and standardization of experimental data from 750 SNF samples from more than 40 reactors. These efforts have resulted in the creation of the SFCOMPO-2.0 database, which is publicly available from the NEA Data Bank. Our paper describes the new database, and applications of SFCOMPO-2.0 for computer code validation, integral nuclear data benchmarking, and uncertainty analysis in nuclear waste package analysis are briefly illustrated.
Cordova: Web-based management of genetic variation data
Ephraim, Sean S.; Anand, Nikhil; DeLuca, Adam P.; Taylor, Kyle R.; Kolbe, Diana L.; Simpson, Allen C.; Azaiez, Hela; Sloan, Christina M.; Shearer, A. Eliot; Hallier, Andrea R.; Casavant, Thomas L.; Scheetz, Todd E.; Smith, Richard J. H.; Braun, Terry A.
2014-01-01
Summary: Cordova is an out-of-the-box solution for building and maintaining an online database of genetic variations integrated with pathogenicity prediction results from popular algorithms. Our primary motivation for developing this system is to aid researchers and clinician–scientists in determining the clinical significance of genetic variations. To achieve this goal, Cordova provides an interface to review and manually or computationally curate genetic variation data as well as share it for clinical diagnostics and the advancement of research. Availability and implementation: Cordova is open source under the MIT license and is freely available for download at https://github.com/clcg/cordova. Contact: sean.ephraim@gmail.com or terry-braun@uiowa.edu PMID:25123904
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FISH AND... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
ERIC Educational Resources Information Center
Barker, Philip
1986-01-01
Discussion of developments in information storage technology likely to have significant impact upon library utilization focuses on hardware (videodisc technology) and software developments (knowledge databases; computer networks; database management systems; interactive video, computer, and multimedia user interfaces). Three generic computer-based…
DockScreen: A database of in silico biomolecular interactions to support computational toxicology
We have developed DockScreen, a database of in silico biomolecular interactions designed to enable rational molecular toxicological insight within a computational toxicology framework. This database is composed of chemical/target (receptor and enzyme) binding scores calculated by...
Designing integrated computational biology pipelines visually.
Jamil, Hasan M
2013-01-01
The long-term cost of developing and maintaining a computational pipeline that depends upon data integration and sophisticated workflow logic is too high to even contemplate "what if" or ad hoc type queries. In this paper, we introduce a novel application building interface for computational biology research, called VizBuilder, by leveraging a recent query language called BioFlow for life sciences databases. Using VizBuilder, it is now possible to develop complex ad hoc computational biology applications at throwaway cost. The underlying query language supports data integration and workflow construction almost transparently and fully automatically, using a best-effort approach. Users express their application by drawing it with VizBuilder icons and connecting them in a meaningful way. Completed applications are compiled and translated into BioFlow queries for execution by the data management system LifeDB, for which VizBuilder serves as a front end. We discuss VizBuilder features and functionalities in the context of a real life application after we briefly introduce BioFlow. The architecture and design principles of VizBuilder are also discussed. Finally, we outline future extensions of VizBuilder. To our knowledge, VizBuilder is a unique system that allows visually designing computational biology pipelines involving distributed and heterogeneous resources in an ad hoc manner.
DESPIC: Detecting Early Signatures of Persuasion in Information Cascades
2015-08-27
... over NoSQL Databases, Proceedings of the 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2014), Chicago, IL, USA, 26-MAY-14 ... Using distributed NoSQL databases including HBase and Riak, we finalized the requirements of the optimal computational architecture to support our framework
High-Order Methods for Computational Physics
1999-03-01
... computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85] ... Fig. 4.19: Connectivity and communications are established by building a voxel database (VDB) of positions. A VDB maps each position to a ... studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear ...
Simple re-instantiation of small databases using cloud computing.
Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M
2013-01-01
Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.
Simple re-instantiation of small databases using cloud computing
2013-01-01
Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV)
Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-01
Abstract The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. PMID:29040670
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; Magnin, I E
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
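The trade-off being modeled can be made concrete with a small sketch (illustrative constants and a deliberately crude cost model, not the paper's calibrated one): each extra grid node shrinks the per-node share of images but adds scheduling overhead, so an intermediate number of subsets minimizes the estimated total time.

```python
# Crude makespan model for partitioning an image database across grid nodes
# (illustrative constants; the paper calibrates its model to the application).
import math

def makespan(n_images, k, t_image=2.0, t_submit=5.0):
    """k jobs submitted sequentially, then each processes n/k images in parallel."""
    return k * t_submit + math.ceil(n_images / k) * t_image

def best_partition(n_images, max_nodes=128):
    return min(range(1, max_nodes + 1), key=lambda k: makespan(n_images, k))

# Under this model the optimum is near sqrt(n * t_image / t_submit) subsets.
k = best_partition(10_000)
print(k, makespan(10_000, k))   # ~63 subsets for 10,000 images
```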
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often can not even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relation database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
32 CFR 1900.21 - Processing of requests for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Information Act Amendments of 1996. (b) Database of “officially released information.” As an alternative to extensive tasking and as an accommodation to many requesters, the Agency maintains a database of “officially released information” which contains copies of documents released by this Agency. Searches of this database...
3 CFR - Enhancing Payment Accuracy Through a “Do Not Pay List”
Code of Federal Regulations, 2011 CFR
2011-01-01
... are not made. Agencies maintain many databases containing information on a recipient's eligibility to... databases before making payments or awards, agencies can identify ineligible recipients and prevent certain... pre-payment and pre-award procedures and ensure that a thorough review of available databases with...
Glycemic Index Diet: What's Behind the Claims
... choices for people with diabetes. An international GI database is maintained by Sydney University Glycemic Index Research Services in Sydney, Australia. The database contains the results of studies conducted there and ...
Component, Context and Manufacturing Model Library (C2M2L)
2013-03-01
... Penn State team were stored in a relational database for easy access, storage and maintainability. The relational database consisted of a PostGres ... file into a format that can be imported into the PostGres database. This same custom application was used to generate Microsoft Excel templates ... Press Break Forming Equipment. 4.14 Manufacturing Model Library Database Structure: The data storage mechanism for the ARL PSU MML was a PostGres database
Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.
2017-07-01
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.
Murphy, Elizabeth A.; Ishii, Audrey L.
2006-01-01
The U.S. Geological Survey (USGS), in cooperation with DuPage County Department of Engineering, Stormwater Management Division, maintains a database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Illinois. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that was quality-assured and quality-controlled annually to ensure the datasets were complete and accurate. This version of the WDM database contains data from January 1, 1997, through September 30, 2004, and is named SEP04.WDM. This report provides a record of time periods of poor data for each precipitation dataset and describes methods used to estimate the data for the periods when data were missing, flawed, or snowfall-affected. The precipitation dataset data-filling process was changed in 2001, and both processes are described. The other meteorologic and hydrologic datasets in the database are fully described in the annual U.S. Geological Survey Water Data Report for Illinois and, therefore, are described in less detail than the precipitation datasets in this report.
Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft
NASA Technical Reports Server (NTRS)
Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.
2010-01-01
Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
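As a rough illustration of the data-based idea (a minimal single-input, single-output sketch; the controller described above uses multi-step predictions, disturbance rejection, and online adaptation, none of which appear here): fit a one-step ARX predictor to input-output data by least squares, then choose each input so the predicted output hits the target.

```python
# Minimal data-based predictive control sketch (SISO, one-step horizon; the
# NASA controller is multi-step and adaptive). Predictor and control action
# both come from input-output data alone -- no explicit plant model.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] ~ a . y[k-1..k-na] + b . u[k-1..k-nb]."""
    rows, rhs = [], []
    for k in range(max(na, nb), len(y)):
        rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])
        rhs.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return theta[:na], theta[na:]          # AR and input coefficients

def next_input(a, b, y_recent, u_recent, target):
    """Pick u[k] so the one-step prediction equals the target (needs b[0] != 0)."""
    free = a @ y_recent[::-1] + b[1:] @ u_recent[::-1][:len(b) - 1]
    return (target - free) / b[0]
```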
Scofield, Patricia A; Smith, Linda L; Johnson, David N
2017-07-01
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.
The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists on those hard tasks, it is important to integrate the computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified in nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now holds 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is hosted in a MongoDB instance on a cloud infrastructure.
Zhang, Yaoyang; Xu, Tao; Shan, Bing; Hart, Jonathan; Aslanian, Aaron; Han, Xuemei; Zong, Nobel; Li, Haomin; Choi, Howard; Wang, Dong; Acharya, Lipi; Du, Lisa; Vogt, Peter K; Ping, Peipei; Yates, John R
2015-11-03
Shotgun proteomics generates valuable information from large-scale and target protein characterizations, including protein expression, protein quantification, protein post-translational modifications (PTMs), protein localization, and protein-protein interactions. Typically, peptides derived from proteolytic digestion, rather than intact proteins, are analyzed by mass spectrometers because peptides are more readily separated, ionized and fragmented. The amino acid sequences of peptides can be interpreted by matching the observed tandem mass spectra to theoretical spectra derived from a protein sequence database. Identified peptides serve as surrogates for their proteins and are often used to establish what proteins were present in the original mixture and to quantify protein abundance. Two major issues exist for assigning peptides to their originating protein. The first issue is maintaining a desired false discovery rate (FDR) when comparing or combining multiple large datasets generated by shotgun analysis and the second issue is properly assigning peptides to proteins when homologous proteins are present in the database. Herein we demonstrate a new computational tool, ProteinInferencer, which can be used for protein inference with both small- or large-scale data sets to produce a well-controlled protein FDR. In addition, ProteinInferencer introduces confidence scoring for individual proteins, which makes protein identifications evaluable. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015. Published by Elsevier B.V.
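For the FDR-control aspect, a standard target-decoy sketch conveys the idea (ProteinInferencer's actual confidence-scoring model is more elaborate than this): sort proteins by score and accept down to the lowest threshold at which the decoy-to-target ratio stays within the desired FDR.

```python
# Target-decoy protein-level FDR sketch (standard technique; simplified
# relative to ProteinInferencer's confidence scoring).
def fdr_threshold(proteins, desired_fdr=0.01):
    """proteins: iterable of (score, is_decoy). Returns the lowest score
    threshold keeping #decoys / #targets among accepted proteins <= FDR."""
    best, decoys, targets = None, 0, 0
    for score, is_decoy in sorted(proteins, key=lambda p: -p[0]):
        decoys += is_decoy
        targets += not is_decoy
        if targets and decoys / targets <= desired_fdr:
            best = score           # everything scoring >= best is accepted
    return best
```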
The Computerized Laboratory Notebook concept for genetic toxicology experimentation and testing.
Strauss, G H; Stanford, W L; Berkowitz, S J
1989-03-01
We describe a microcomputer system utilizing the Computerized Laboratory Notebook (CLN) concept developed in our laboratory for the purpose of automating the Battery of Leukocyte Tests (BLT). The BLT was designed to evaluate blood specimens for toxic, immunotoxic, and genotoxic effects after in vivo exposure to putative mutagens. A system was developed with the advantages of low cost, limited spatial requirements, ease of use for personnel inexperienced with computers, and applicability to specific testing yet flexibility for experimentation. This system eliminates cumbersome record keeping and repetitive analysis inherent in genetic toxicology bioassays. Statistical analysis of the vast quantity of data produced by the BLT would not be feasible without a central database. Our central database is maintained by an integrated package which we have adapted to develop the CLN. The clonal assay of lymphocyte mutagenesis (CALM) section of the CLN is demonstrated. PC-Slaves expand the microcomputer to multiple workstations so that our computerized notebook can be used next to a hood while other work is done in an office and instrument room simultaneously. Communication with peripheral instruments is an indispensable part of many laboratory operations, and we present a representative program, written to acquire and analyze CALM data, for communicating with both a liquid scintillation counter and an ELISA plate reader. In conclusion we discuss how our computer system could easily be adapted to the needs of other laboratories.
Bera, Maitreyee
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with DuPage County Stormwater Management Division, maintains a USGS database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. Most of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. An earlier report describes in detail the development of the WDM database, including the processing of data from January 1, 1997, through September 30, 2004, in the SEP04.WDM database. SEP04.WDM was updated with appended data from October 1, 2004, through September 30, 2011 (water years 2005–11), and renamed SEP11.WDM. This report details the processing of meteorologic and hydrologic data in SEP11.WDM. It provides a record of snow-affected periods and the data used to fill missing-record periods for each precipitation site during water years 2005–11. The meteorologic data-filling methods are described in detail in Over and others (2010), and an update is provided in this report.
Toward unification of taxonomy databases in a distributed computer environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi
1994-12-31
All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and investigating future research directions from existent research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases on a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
A design for the geoinformatics system
NASA Astrophysics Data System (ADS)
Allison, M. L.
2002-12-01
Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.
48 CFR 32.1110 - Solicitation provision and contract clauses.
Code of Federal Regulations, 2013 CFR
2013-10-01
... System for Award Management (SAM) database and maintain registration until final payment, unless— (i..., or a similar agency clause that requires the contractor to be registered in the SAM database. (ii)(A...
48 CFR 32.1110 - Solicitation provision and contract clauses.
Code of Federal Regulations, 2014 CFR
2014-10-01
... System for Award Management (SAM) database and maintain registration until final payment, unless— (i..., or a similar agency clause that requires the contractor to be registered in the SAM database. (ii)(A...
Virginia Bridge Information Systems Laboratory.
DOT National Transportation Integrated Search
2014-06-01
This report presents the results of applied data mining of legacy bridge databases, focusing on the Pontis and National Bridge Inventory databases maintained by the Virginia Department of Transportation (VDOT). Data analysis was performed using a...
75 FR 28024 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-19
... the data-capturing process. SAMHSA will place Web site registration information into a Knowledge Management database and will place email subscription information into a database maintained by a third-party...
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... Terrorist Screening Database System of Records AGENCY: Privacy Office, DHS. ACTION: Notice of proposed... Use of the Terrorist Screening Database System of Records'' and this proposed rulemaking. In this... Use of the Terrorist Screening Database (TSDB) System of Records.'' DHS is maintaining a mirror copy...
Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access
ERIC Educational Resources Information Center
Dadashzadeh, Mohammad
2007-01-01
Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…
Atomic Spectroscopic Databases at NIST
NASA Technical Reports Server (NTRS)
Reader, J.; Kramida, A. E.; Ralchenko, Yu.
2006-01-01
We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.
MIPS plant genome information resources.
Spannagl, Manuel; Haberer, Georg; Ernst, Rebecca; Schoof, Heiko; Mayer, Klaus F X
2007-01-01
The Munich Institute for Protein Sequences (MIPS) has been involved in maintaining plant genome databases since the Arabidopsis thaliana genome project. Genome databases and analysis resources have focused on individual genomes and aim to provide flexible and maintainable data sets for model plant genomes as a backbone against which experimental data, for example from high-throughput functional genomics, can be organized and evaluated. In addition, model genomes also form a scaffold for comparative genomics, and much can be learned from genome-wide evolutionary studies.
Research on computer virus database management system
NASA Astrophysics Data System (ADS)
Qi, Guoquan
2011-12-01
The growing proliferation of computer viruses has become a serious threat to network information security and a corresponding focus of security research. New viruses keep emerging, the total number of viruses keeps growing, and virus classification grows ever more complex. Because different agencies capture samples at different times, virus naming cannot be unified. Although each agency maintains its own virus database, communication among agencies is lacking, virus information is incomplete, or only a small number of samples is described. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and complete the description of virus characteristics, and then presents a computer virus database design scheme addressing information integrity, storage security, and manageability.
Method for acquiring, storing and analyzing crystal images
NASA Technical Reports Server (NTRS)
Gester, Thomas E. (Inventor); Rosenblum, William M. (Inventor); Christopher, Gayle K. (Inventor); Hamrick, David T. (Inventor); Delucas, Lawrence J. (Inventor); Tillotson, Brian (Inventor)
2003-01-01
A system utilizing a digital computer for acquiring, storing and evaluating crystal images. The system includes a video camera (12) which produces a digital output signal representative of a crystal specimen positioned within its focal window (16). The digitized output from the camera (12) is then stored on data storage media (32) together with other parameters inputted by a technician and relevant to the crystal specimen. Preferably, the digitized images are stored on removable media (32) while the parameters for different crystal specimens are maintained in a database (40) with indices to the digitized optical images on the other data storage media (32). Computer software is then utilized to identify not only the presence and number of crystals and the edges of the crystal specimens from the optical image, but to also rate the crystal specimens by various parameters, such as edge straightness, polygon formation, aspect ratio, surface clarity, crystal cracks and other defects or lack thereof, and other parameters relevant to the quality of the crystals.
Cybersim: geographic, temporal, and organizational dynamics of malware propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhi, Nandakishore; Yan, Guanhua; Eidenbenz, Stephan
2010-01-01
Cyber-infractions into a nation's strategic security envelope pose a constant and daunting challenge. We present the modular CyberSim tool, which has been developed in response to the need to realistically simulate, at a national level, software vulnerabilities and the resulting malware propagation in online social networks. The CyberSim suite (a) can generate realistic scale-free networks from a database of geocoordinated computers to closely model social networks arising from personal and business email contacts and online communities; (b) maintains for each host a list of installed software, along with the latest published vulnerabilities; (c) allows designated initial nodes where malware gets introduced; (d) simulates, using distributed discrete-event-driven technology, the spread of malware exploiting a specific vulnerability, with packet delay and user online-behavior models; (e) provides the analyst with a graphical visualization of the spread of infection, its severity, the businesses affected, and so on. We present sample simulations on a national-level network with millions of computers.
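CyberSim itself is a distributed discrete-event simulator running over millions of hosts; purely as a toy illustration of ingredients (a)-(d), the sketch below grows a scale-free network with networkx's Barabasi-Albert generator and spreads malware only through hosts assumed to run a vulnerable package (all parameters invented):

    import random
    import networkx as nx

    random.seed(7)
    # Scale-free contact graph standing in for email/social links (toy size).
    G = nx.barabasi_albert_graph(n=10_000, m=3)

    # Hypothetical software inventory: each host is vulnerable to the
    # exploited package with probability 0.3 (an invented parameter).
    vulnerable = {host: random.random() < 0.3 for host in G}

    infected = {0}     # designated initial node where malware is introduced
    frontier = {0}
    while frontier:
        nxt = set()
        for host in frontier:
            for peer in G.neighbors(host):
                if vulnerable[peer] and peer not in infected:
                    infected.add(peer)   # malware jumps only to vulnerable peers
                    nxt.add(peer)
        frontier = nxt

    print(f"{len(infected)} of {G.number_of_nodes()} hosts infected")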
NASA Astrophysics Data System (ADS)
Aharonov, Dorit
In the last few years, theoretical study of quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review sets out to tell the story of theoretical quantum computation. I leave out the developing topic of experimental realizations of the model, and neglect other closely related topics, namely quantum information and quantum communication. As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained introduction to the exciting field of quantum computation. The review begins with background on theoretical computer science, Turing machines, and Boolean circuits. In light of these models, I define quantum computers and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers maintain their complexity power even in the presence of noise, inaccuracies, and finite precision. This question cannot be separated from that of quantum complexity, because any realistic model will inevitably be subject to such inaccuracies. I have tried to put all results in their context, asking what the implications for other issues in computer science and physics are. At the end of this review, I make these connections explicit by discussing the possible implications of quantum computation for fundamental physical questions, such as the transition from quantum to classical physics.
77 FR 72335 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
... computer networks, systems, or databases. The records contain the individual's name; social security number... control and track access to DLA-controlled networks, computer systems, and databases. The records may also...
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to the computer system that hosts the telemetry dictionary database. This is followed by a remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
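The workbook-to-CSV conversion step at the core of the loading process might look like the following sketch (using openpyxl; the file names and single-sheet layout are assumptions, not details of the patented system):

    import csv
    from openpyxl import load_workbook

    def workbook_to_csv(xlsx_path: str, csv_path: str) -> None:
        """Flatten the first worksheet of a telemetry workbook into a CSV file."""
        ws = load_workbook(xlsx_path, read_only=True).active
        with open(csv_path, "w", newline="") as out:
            writer = csv.writer(out)
            for row in ws.iter_rows(values_only=True):
                writer.writerow(row)

    workbook_to_csv("telemetry_dictionary.xlsx", "telemetry_dictionary.csv")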
Cordova: web-based management of genetic variation data.
Ephraim, Sean S; Anand, Nikhil; DeLuca, Adam P; Taylor, Kyle R; Kolbe, Diana L; Simpson, Allen C; Azaiez, Hela; Sloan, Christina M; Shearer, A Eliot; Hallier, Andrea R; Casavant, Thomas L; Scheetz, Todd E; Smith, Richard J H; Braun, Terry A
2014-12-01
Cordova is an out-of-the-box solution for building and maintaining an online database of genetic variations integrated with pathogenicity prediction results from popular algorithms. Our primary motivation for developing this system is to aid researchers and clinician-scientists in determining the clinical significance of genetic variations. To achieve this goal, Cordova provides an interface to review and manually or computationally curate genetic variation data as well as share it for clinical diagnostics and the advancement of research. Cordova is open source under the MIT license and is freely available for download at https://github.com/clcg/cordova. Published by Oxford University Press. This work is written by US Government employees and is in the public domain in the US.
SARS: Safeguards Accounting and Reporting Software
NASA Astrophysics Data System (ADS)
Mohammedi, B.; Saadi, S.; Ait-Mohamed, S.
In order to satisfy the recording and reporting requirements of the SSAC (State System for Accounting and Control of nuclear materials), this computer program bridges the gap between nuclear facility operators and the national inspectorate verifying records and delivering reports. SARS maintains and generates at-facility safeguards accounting records and generates International Atomic Energy Agency (IAEA) safeguards reports based on accounting data input by the user at any nuclear facility. A database structure is built using the BORLAND DELPHI programming language. The software is designed to be user-friendly, with extensive and flexible management of menus and graphs. SARS functions include basic physical inventory tracking, transaction histories, and reporting. Access control is enforced through different passwords.
Building a medical image processing algorithm verification database
NASA Astrophysics Data System (ADS)
Brown, C. Wayne
2000-06-01
The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans due to equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is the proof of viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.
The SQL Server Database for Non Computer Professional Teaching Reform
ERIC Educational Resources Information Center
Liu, Xiangwei
2012-01-01
This paper summarizes the teaching methods of the SQL Server database course for non-computer majors and analyzes the current state of the course. Based on the characteristics of the curriculum for non-computer majors, it puts forward several teaching reform methods and puts them into practice, improving the students' analytical ability, practical ability and…
NASA Technical Reports Server (NTRS)
Finley, Gail T.
1988-01-01
This report covers the study of the relational database implementation in the NASCAD computer program system. The existing system is used primarily for computer aided design. Attention is also directed to a hidden-surface algorithm for final drawing output.
Charlebois, Kathleen; Palmour, Nicole; Knoppers, Bartha Maria
2016-01-01
This study aims to understand the influence of the ethical and legal issues on cloud computing adoption in the field of genomics research. To do so, we adapted Diffusion of Innovation (DoI) theory to enable understanding of how key stakeholders manage the various ethical and legal issues they encounter when adopting cloud computing. Twenty semi-structured interviews were conducted with genomics researchers, patient advocates and cloud service providers. Thematic analysis generated five major themes: 1) Getting comfortable with cloud computing; 2) Weighing the advantages and the risks of cloud computing; 3) Reconciling cloud computing with data privacy; 4) Maintaining trust and 5) Anticipating the cloud by creating the conditions for cloud adoption. Our analysis highlights the tendency among genomics researchers to gradually adopt cloud technology. Efforts made by cloud service providers to promote cloud computing adoption are confronted by researchers’ perpetual cost and security concerns, along with a lack of familiarity with the technology. Further underlying those fears are researchers’ legal responsibility with respect to the data that is stored on the cloud. Alternative consent mechanisms aimed at increasing patients’ control over the use of their data also provide a means to circumvent various institutional and jurisdictional hurdles that restrict access by creating siloed databases. However, the risk of creating new, cloud-based silos may run counter to the goal in genomics research to increase data sharing on a global scale. PMID:27755563
O'Leary, Nuala A; Wright, Mathew W; Brister, J Rodney; Ciufo, Stacy; Haddad, Diana; McVeigh, Rich; Rajput, Bhanu; Robbertse, Barbara; Smith-White, Brian; Ako-Adjei, Danso; Astashyn, Alexander; Badretdin, Azat; Bao, Yiming; Blinkova, Olga; Brover, Vyacheslav; Chetvernin, Vyacheslav; Choi, Jinna; Cox, Eric; Ermolaeva, Olga; Farrell, Catherine M; Goldfarb, Tamara; Gupta, Tripti; Haft, Daniel; Hatcher, Eneida; Hlavina, Wratko; Joardar, Vinita S; Kodali, Vamsi K; Li, Wenjun; Maglott, Donna; Masterson, Patrick; McGarvey, Kelly M; Murphy, Michael R; O'Neill, Kathleen; Pujar, Shashikant; Rangwala, Sanjida H; Rausch, Daniel; Riddick, Lillian D; Schoch, Conrad; Shkeda, Andrei; Storz, Susan S; Sun, Hanzhen; Thibaud-Nissen, Francoise; Tolstoy, Igor; Tully, Raymond E; Vatsan, Anjana R; Wallin, Craig; Webb, David; Wu, Wendy; Landrum, Melissa J; Kimchi, Avi; Tatusova, Tatiana; DiCuccio, Michael; Kitts, Paul; Murphy, Terence D; Pruitt, Kim D
2016-01-04
The RefSeq project at the National Center for Biotechnology Information (NCBI) maintains and curates a publicly available database of annotated genomic, transcript, and protein sequence records (http://www.ncbi.nlm.nih.gov/refseq/). The RefSeq project leverages the data submitted to the International Nucleotide Sequence Database Collaboration (INSDC) against a combination of computation, manual curation, and collaboration to produce a standard set of stable, non-redundant reference sequences. The RefSeq project augments these reference sequences with current knowledge including publications, functional features and informative nomenclature. The database currently represents sequences from more than 55,000 organisms (>4800 viruses, >40,000 prokaryotes and >10,000 eukaryotes; RefSeq release 71), ranging from a single record to complete genomes. This paper summarizes the current status of the viral, prokaryotic, and eukaryotic branches of the RefSeq project, reports on improvements to data access and details efforts to further expand the taxonomic representation of the collection. We also highlight diverse functional curation initiatives that support multiple uses of RefSeq data including taxonomic validation, genome annotation, comparative genomics, and clinical testing. We summarize our approach to utilizing available RNA-Seq and other data types in our manual curation process for vertebrate, plant, and other species, and describe a new direction for prokaryotic genomes and protein name management. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV).
Lefkowitz, Elliot J; Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-04
The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
A Web Server and Mobile App for Computing Hemolytic Potency of Peptides.
Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C; Raghava, Gajendra P S
2016-03-08
Numerous therapeutic peptides do not enter clinical trials simply because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides with hemolytic potency. First, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., "FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL") are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides having high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app and JAVA-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
The Use of a Relational Database in Qualitative Research on Educational Computing.
ERIC Educational Resources Information Center
Winer, Laura R.; Carriere, Mario
1990-01-01
Discusses the use of a relational database as a data management and analysis tool for nonexperimental qualitative research, and describes the use of the Reflex Plus database in the Vitrine 2001 project in Quebec to study computer-based learning environments. Information systems are also discussed, and the use of a conceptual model is explained.…
ERIC Educational Resources Information Center
Li, Yiu-On; Leung, Shirley W.
2001-01-01
Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…
ECOTOX database; new additions and future direction
The ECOTOXicology database (ECOTOX) is a comprehensive, publicly available knowledgebase developed and maintained by ORD/NHEERL. It is used for environmental toxicity data on aquatic life, terrestrial plants and wildlife. Publications are identified for potential applicability af...
Development of the Connecticut product evaluation database application : Phase 1B.
DOT National Transportation Integrated Search
2010-12-01
The Federal Highway Administration (FHWA), the American Association of State Highway : Transportation Officials (AASHTO) and the Transportation Research Board (TRB), a : division of the National Research Council (NRC), maintain databases to store nat...
Data mining and visualization of the Alabama accident database
DOT National Transportation Integrated Search
2000-08-01
The Alabama Department of Public Safety has developed and maintains a centralized database that contains traffic accident data collected from crash reports completed by local police officers and state troopers. The Critical Analysis Reporting Environme...
Exploring Discretization Error in Simulation-Based Aerodynamic Databases
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2010-01-01
This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and adaptive mesh refinement is used to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.
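For reference, the adjoint-weighted residual estimate mentioned above is conventionally written as follows (generic notation, not necessarily the paper's):

    \[
    J(Q) \;\approx\; J(Q_H) - \psi_H^{\top} R_H(Q_H),
    \]

where $R_H(Q_H)$ is the residual of the discrete flow solution on mesh $H$ and $\psi_H$ is the discrete adjoint solution for the output $J$; cells contributing most to $\lvert \psi_H^{\top} R_H \rvert$ are the ones flagged for refinement.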
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
16 CFR 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.
Code of Federal Regulations, 2014 CFR
2014-01-01
... INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the identified..., provided such report meets the minimum requirements for publication in the Database, to the manufacturer or... harm, or otherwise, then it will not post the report of harm on the Database but will maintain the...
16 CFR 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.
Code of Federal Regulations, 2012 CFR
2012-01-01
... INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the identified..., provided such report meets the minimum requirements for publication in the Database, to the manufacturer or... harm, or otherwise, then it will not post the report of harm on the Database but will maintain the...
76 FR 42677 - Notice of Intent To Seek Approval To Collect Information
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... and maintains an on-line recipe database, the Recipe Finder, as a popular feature to the SNAP-Ed Connection Web site. The purpose of the Recipe Finder database is to provide SNAP-Ed providers with low-cost... inclusion in the database. SNAP-Ed staff and providers benefit from collecting and posting feedback on...
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, the Simon algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. The quantum database search of Grover achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan, and its optimization by Korepin called the GRK algorithm, are also discussed.
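For reference, the quadratic speedup quoted above comes from repeated application of the Grover operator; in standard notation (not the review's own):

    \[
    G = \bigl(2\,\lvert\psi\rangle\langle\psi\rvert - I\bigr)\,O,
    \qquad
    k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{\frac{N}{M}},
    \]

where $O$ flips the sign of the $M$ marked items among $N$, $\lvert\psi\rangle$ is the uniform superposition over all items, and measuring after about $k_{\mathrm{opt}}$ applications of $G$ yields a marked item with high probability, versus the classical $O(N/M)$ expected scans.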
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
32 CFR 34.42 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-07-01
... procedures shall maintain the integrity, reliability, and security of the original computer data. Recipients... (such as documents related to computer usage chargeback rates), along with their supporting records... this section is maintained on a computer, recipients shall retain the computer data on a reliable...
An evaluation of FIA's stand age variable
John D. Shaw
2015-01-01
The Forest Inventory and Analysis Database (FIADB) includes a large number of measured and computed variables. The definitions of measured variables are usually well-documented in FIA field and database manuals. Some computed variables, such as live basal area of the condition, are equally straightforward. Other computed variables, such as individual tree volume,...
Singh, Anushikha; Dutta, Malay Kishore
2017-12-01
The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent error in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security issue of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter, to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. In the proposed method, insertion of the watermark does not affect automatic image-processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images and the results are convincing. Results indicate that the proposed watermarking method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable for authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
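The adaptive quantization rule is not specified in the abstract; the sketch below shows the non-adaptive core of singular-value-quantization watermarking with numpy, with a fixed step size standing in for the paper's adaptive parameter:

    import numpy as np

    DELTA = 8.0  # quantization step; the paper adapts this per image, we fix it

    def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
        """Hide one watermark bit in the largest singular value of a block."""
        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        q = int(s[0] // DELTA)
        if q % 2 != bit:          # snap sigma_1 into a cell of matching parity
            q += 1
        s[0] = q * DELTA + DELTA / 2
        return U @ np.diag(s) @ Vt

    def extract_bit(block: np.ndarray) -> int:
        s = np.linalg.svd(block, compute_uv=False)
        return int(s[0] // DELTA) % 2

    block = np.random.default_rng(0).uniform(0, 255, (8, 8))
    assert extract_bit(embed_bit(block, 1)) == 1

Because the embedded bit lives in a singular value rather than in pixel values, the mark survives small pixel-level perturbations, which is why SVD-domain schemes are popular for medical images.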
Pinthong, Watthanai; Muangruen, Panya
2016-01-01
Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, the HPC is expensive and difficult to access. Other means were developed to allow researchers to acquire the power of HPC without needing to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software. PMID:27547555
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
32 CFR 32.53 - Retention and access requirements for records.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., reliability, and security of the original computer data. Recipients shall also maintain an audit trail... group of costs is chargeable (such as computer usage chargeback rates or composite fringe benefit rates... maintained on a computer, recipients shall retain the computer data on a reliable medium for the time periods...
Description of the process used to create the 1992 Hanford Mortality Study database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, E.S.; Buchanan, J.A.; Holter, N.A.
1992-12-01
An updated and expanded database for the Hanford Mortality Study has been developed by PNL's Epidemiology and Biometry Department. The purpose of this report is to document this process. The primary sources of data were the Occupational Health History (OHH) files maintained by the Hanford Environmental Health Foundation (HEHF) and including demographic data and job histories; the Hanford Mortality (HMO) files also maintained by HEHF and including information on deaths of Hanford workers; the Occupational Radiation Exposure (ORE) files maintained by PNL's Health Physics Department and containing data on external dosimetry; and a file of workers with confirmed internal depositions of radionuclides also maintained by PNL's Health Physics Department. This report describes each of these files in detail, and also describes the many edits that were performed to address the consistency and accuracy of data within and between these files.
Code of Federal Regulations, 2014 CFR
2014-04-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... charged with establishing and maintaining a licensing and registry database for loan originators. (b.... Subpart D provides minimum requirements for the administration of the Nationwide Mortgage Licensing System...
Code of Federal Regulations, 2013 CFR
2013-04-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... charged with establishing and maintaining a licensing and registry database for loan originators. (b.... Subpart D provides minimum requirements for the administration of the Nationwide Mortgage Licensing System...
THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauschlicher, C. W.; Ricca, A.; Boersma, C.
The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.
BioModels.net Web Services, a free and integrated toolkit for computational modelling software.
Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille
2010-05-01
Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology), which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding a biological system in its entirety, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.
ERIC Educational Resources Information Center
Li, Rui; Liu, Min
2007-01-01
The purpose of this study is to examine the potential of using computer databases as cognitive tools to share learners' cognitive load and facilitate learning in a multimedia problem-based learning (PBL) environment designed for sixth graders. Two research questions were: (a) can the computer database tool share sixth-graders' cognitive load? and…
Software for Sharing and Management of Information
NASA Technical Reports Server (NTRS)
Chen, James R.; Wolfe, Shawn R.; Wragg, Stephen D.
2003-01-01
DIAMS is a set of computer programs that implements a system of collaborative agents that serve multiple, geographically distributed users communicating via the Internet. DIAMS provides a user interface as a Java applet that runs on each user's computer and that works within the context of the user's Internet-browser software. DIAMS helps all its users to manage, gain access to, share, and exchange information in databases that they maintain on their computers. One of the DIAMS agents is a personal agent that helps its owner find information most relevant to current needs. It provides software tools and utilities for users to manage their information repositories with dynamic organization and virtual views. Capabilities for generating flexible hierarchical displays are integrated with capabilities for indexed-query searching to support effective access to information. Automatic indexing methods are employed to support users' queries and communication between agents. The catalog of a repository is kept in object-oriented storage to facilitate sharing of information. Collaboration between users is aided by matchmaker agents and by automated exchange of information. The matchmaker agents are designed to establish connections between users who have similar interests and expertise.
Digital data storage systems, computers, and data verification methods
Groeneveld, Bennett J.; Austad, Wayne E.; Walsh, Stuart C.; Herring, Catherine A.
2005-12-27
Digital data storage systems, computers, and data verification methods are provided. According to a first aspect of the invention, a computer includes an interface adapted to couple with a dynamic database; and processing circuitry configured to provide a first hash from digital data stored within a portion of the dynamic database at an initial moment in time, to provide a second hash from digital data stored within the portion of the dynamic database at a subsequent moment in time, and to compare the first hash and the second hash.
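In essence, the verification method is the following comparison; Python's hashlib stands in for the patent's unspecified hash function, and the byte strings are placeholders for the serialized portion of the dynamic database:

    import hashlib

    def snapshot(portion: bytes) -> str:
        """Hash the monitored portion of the dynamic database."""
        return hashlib.sha256(portion).hexdigest()

    first = snapshot(b"row1|row2|row3")    # initial moment in time
    # ... database operates for a while ...
    second = snapshot(b"row1|row2|row3")   # subsequent moment in time
    print("verified" if first == second else "data changed")

Comparing fixed-size digests rather than the data itself lets the check run cheaply no matter how large the monitored portion grows.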
Geodata Modeling and Query in Geographic Information Systems
NASA Technical Reports Server (NTRS)
Adam, Nabil
1996-01-01
Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, and road networks; examples of non-spatial data are census counts, land elevations, and soil characteristics. GIS shares common areas with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high-quality plots, perform complex operations such as network analysis, enable spatial searching and overlay operations, support spatial analysis and modeling, and provide data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access by multiple users. It is to the data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data and yet providing integration that minimizes redundancy and maintains data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access by multiple users; and processing user transactions efficiently. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness. Significant progress has been made in all of these areas over the last two decades, to the point that many generalized database platforms are now available for developing data-intensive applications that run in real time. While continuous improvement is still being made at a fast-paced and competitive rate, new application areas such as computer-aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to currently available database technology. At the core of these challenges is the nature of the data being manipulated. In traditional database applications, the database objects do not have any spatial dimension and can therefore be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value corresponding to every attribute such as employee id, employee name, employee address, and so on. Thus, every EMPLOYEE instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional. Thus, users may retrieve all entities satisfying one or more constraints. Examples of such constraints include employees with addresses in a certain area code, or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.
DASTCOM5: A Portable and Current Database of Asteroid and Comet Orbit Solutions
NASA Astrophysics Data System (ADS)
Giorgini, Jon D.; Chamberlin, Alan B.
2014-11-01
A portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, with the software to access it, is available for download (ftp://ssd.jpl.nasa.gov/pub/xfr/dastcom5.zip; unzip -ao dastcom5.zip). DASTCOM5 contains the latest heliocentric IAU76/J2000 ecliptic osculating orbital elements for all known asteroids and comets as determined by a least-squares best-fit to ground-based optical, spacecraft, and radar astrometric measurements. Other physical, dynamical, and covariance parameters are included when known. A total of 142 parameters per object are supported within DASTCOM5. This information is suitable for initializing high-precision numerical integrations, assessing orbit geometry, computing trajectory uncertainties and visual magnitudes, and summarizing physical characteristics of the body. The DASTCOM5 distribution is updated as often as hourly to include newly discovered objects or orbit solution updates. It includes an ASCII index of objects that supports look-ups based on name, current or past designation, SPK ID, MPC packed designations, or record number. DASTCOM5 is the database used by the NASA/JPL Horizons ephemeris system. It is a subset exported from a larger MySQL-based relational Small-Body Database ("SBDB") maintained at JPL. The DASTCOM5 distribution is intended for programmers comfortable with UNIX/LINUX/MacOSX command-line usage who need to develop stand-alone applications. The goal of the implementation is to provide small, fast, portable, and flexibly programmatic access to JPL comet and asteroid orbit solutions. The supplied software library, examples, and application programs have been verified under gfortran, Lahey, Intel, and Sun 32/64-bit Linux/UNIX FORTRAN compilers. A command-line tool ("dxlook") is provided to enable database access from shell or script environments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SAFETY INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the... of harm, provided such report meets the minimum requirements for publication in the Database, to the... report of harm, or otherwise, then it will not post the report of harm on the Database but will maintain...
A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)
EPA established and maintains a SF-DODS reference area database of previously-collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.
49 CFR 229.311 - Review of SAs.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad shall maintain a database of all safety-relevant hazards encountered with the product. The database shall include all hazards identified in the SA and those that had not been previously identified...
Code of Federal Regulations, 2013 CFR
2013-01-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... requirements, the Bureau is charged with establishing and maintaining a licensing and registry database for... administration of the Nationwide Mortgage Licensing System and Registry. (5) Subpart E clarifies the Bureau's...
Code of Federal Regulations, 2014 CFR
2014-01-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... requirements, the Bureau is charged with establishing and maintaining a licensing and registry database for... administration of the Nationwide Mortgage Licensing System and Registry. (5) Subpart E clarifies the Bureau's...
49 CFR 229.311 - Review of SAs.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad shall maintain a database of all safety-relevant hazards encountered with the product. The database shall include all hazards identified in the SA and those that had not been previously identified...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS FOREST SERVICE... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases...
Code of Federal Regulations, 2012 CFR
2012-01-01
... participate in a nationwide mortgage licensing system and registry database of residential mortgage loan... requirements, the Bureau is charged with establishing and maintaining a licensing and registry database for... administration of the Nationwide Mortgage Licensing System and Registry. (5) Subpart E clarifies the Bureau's...
49 CFR 229.311 - Review of SAs.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad shall maintain a database of all safety-relevant hazards encountered with the product. The database shall include all hazards identified in the SA and those that had not been previously identified...
Domain fusion analysis by applying relational algebra to protein sequence and domain databases
Truong, Kevin; Ikura, Mitsuhiko
2003-01-01
Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
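Since the method is plain relational algebra, the core fusion query runs on any SQL engine; here is a toy version with sqlite3 (schema and data invented, loosely mirroring a Pfam-style domain-assignment table):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE has_domain (protein TEXT, organism TEXT, domain TEXT);
    INSERT INTO has_domain VALUES
      ('fusAB', 'ecoli', 'PF_A'), ('fusAB', 'ecoli', 'PF_B'),  -- fusion protein
      ('yA',    'yeast', 'PF_A'), ('yB',    'yeast', 'PF_B');  -- separate pair
    """)
    # Two proteins of one organism are predicted to be functionally linked
    # when their domains co-occur on a single protein of another organism.
    print(conn.execute("""
        SELECT DISTINCT p.protein, q.protein
        FROM has_domain p
        JOIN has_domain q  ON q.organism = p.organism AND p.protein < q.protein
        JOIN has_domain f1 ON f1.domain = p.domain
        JOIN has_domain f2 ON f2.protein = f1.protein
                           AND f2.domain = q.domain
                           AND f2.organism <> p.organism
    """).fetchall())   # -> [('yA', 'yB')]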
10 CFR 600.342 - Retention and access requirements for records.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...
10 CFR 600.342 - Retention and access requirements for records.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., reliability, and security of the original computer data. Recipients must also maintain an audit trail... related to computer usage chargeback rates), along with their supporting records, must be retained for a 3... maintained on a computer, recipients must retain the computer data on a reliable medium for the time periods...
An algorithm of discovering signatures from DNA databases on a computer cluster.
Lee, Hsiao Ping; Sheu, Tzu-Fang
2014-10-05
Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis to identify different species. Although several signature discovery algorithms have been proposed in the past, these algorithms require the entirety of the database to be loaded in memory, restricting the amount of data they can process and leaving them unable to handle large databases. They also use sequential models, so discovery is slow and there is room to improve efficiency. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm that runs on a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the limitation of existing algorithms on large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, that the existing algorithms could not. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large-database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
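A single-machine toy rendition of the divide-and-conquer idea follows; the published algorithm distributes the partitions across cluster nodes, and the signature criterion is simplified here to a k-mer occurring exactly once in the whole database (k chosen arbitrarily):

    from collections import Counter

    K = 8  # signature length, an arbitrary choice for this sketch

    def kmers(seq):
        return (seq[i:i + K] for i in range(len(seq) - K + 1))

    def signatures(partitions):
        """Divide: count k-mers one partition at a time, so only a slice of
        the database is scanned per pass. Conquer: merge the partial counts
        and keep the k-mers that occur exactly once overall."""
        total = Counter()
        for part in partitions:
            local = Counter()
            for seq in part:
                local.update(kmers(seq))
            total.update(local)
        return {k for k, n in total.items() if n == 1}

    db = [["ACGTACGTAA", "TTGCACGTAC"], ["GGGCCCAAAT"]]  # two toy partitions
    print(sorted(signatures(db)))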
Lecumberri, Ramón; Marqués, Margarita; Díaz-Navarlaz, María Teresa; Panizo, Elena; Toledo, Jon; García-Mouriz, Alberto; Páramo, José A
2008-10-01
Despite current guidelines, venous thromboembolism (VTE) prophylaxis is underused. Computerized programs to encourage physicians to apply thromboprophylaxis have been shown to be effective in selected populations. Our aim was to analyze the impact of the implementation of a computer-alert system for VTE risk in all hospitalized patients of a teaching hospital. A computer program linked to the clinical record database was developed to assess all hospitalized patients' VTE risk daily. The physician responsible for patients at high risk was alerted, but remained free to order or withhold prophylaxis. Over 19,000 hospitalized adult medical and surgical patients were included, spanning January to June 2005 (pre-intervention phase) and January to June of 2006 and 2007 (post-intervention phase). During the first semesters of 2006 and 2007, an electronic alert was generated for 32.8% and 32.2% of all hospitalized patients, respectively. Appropriate prophylaxis among alerted patients was ordered in 89.7% (2006) and 88.5% (2007) of surgical patients, and in 49.2% (2006) and 64.4% (2007) of medical patients. A sustained reduction of VTE during hospitalization was achieved, with odds ratios (OR) of 0.53, 95% confidence interval (CI) (0.25-1.10), and 0.51, 95% CI (0.24-1.05), during the first semesters of 2006 and 2007 respectively; the impact was significant (p < 0.05) among medical patients in 2007, OR: 0.36, 95% CI (0.12-0.98). The implementation of a computer-alert program helps physicians assess each patient's thrombotic risk, leading to better use of thromboprophylaxis and a reduction in the incidence of VTE among hospitalized patients. For the first time, an intervention aimed at improving VTE prophylaxis shows maintained effectiveness over time.
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
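A minimal sketch of the append-only storage idea, using Python's built-in sqlite3 as a stand-in for the MySQL backend so it runs anywhere (the table schema and column names are assumptions, not those of the SLAC script):

```python
import sqlite3, time

# Stand-in for the MySQL backend described in the paper.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def store_sample(host, metric, value):
    """Insert one monitoring sample; unlike a round-robin database,
    rows are never overwritten or downsampled, so historical data
    keeps its integrity."""
    db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
               (host, metric, value, int(time.time())))
    db.commit()

store_sample("node01.slac", "load_one", 0.42)
store_sample("node01.slac", "load_one", 0.57)
print(db.execute("SELECT * FROM metrics").fetchall())
```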
Privacy-Aware Location Database Service for Granular Queries
NASA Astrophysics Data System (ADS)
Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide
Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
Zhulin, Igor B.
2015-05-26
Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.
Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory
Green, J. Wayne
1991-01-01
This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data-base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.
Technology and the Modern Library.
ERIC Educational Resources Information Center
Boss, Richard W.
1984-01-01
Overview of the impact of information technology on libraries highlights turnkey vendors, bibliographic utilities, commercial suppliers of records, state and regional networks, computer-to-computer linkages, remote database searching, terminals and microcomputers, building local databases, delivery of information, digital telefacsimile,…
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
23 CFR 973.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL LANDS HIGHWAYS MANAGEMENT... system; (2) A process to operate and maintain the management systems and their associated databases; (3... systems shall use databases with a common or coordinated reference system that can be used to geolocate...
NASA Astrophysics Data System (ADS)
Gruyters, Willem; Verboven, Pieter; Rogge, Seppe; Vanmaercke, Simon; Ramon, Herman; Nicolai, Bart
2017-10-01
Freshly harvested horticultural produce requires proper temperature management to maintain its high economic value. To this end, low temperature storage is of crucial importance to maintain high product quality. Optimizing both the package design of packed produce and the different steps in the postharvest cold chain can be achieved by numerical modelling of the relevant transport phenomena. This work presents a novel methodology to accurately model both the random filling of produce in a package and the subsequent cooling process. First, a cultivar-specific database of more than 100 realistic CAD models of apple and pear fruit is built with a validated geometrical 3D shape model generator. To represent a realistic picking season, the model generator also takes into account the biological variability of the produce shape. Next, a discrete element model (DEM) randomly chooses surface-meshed bodies from the database to simulate the gravitational filling process of produce in a box or bin, using actual mechanical properties of the fruit. A computational fluid dynamics (CFD) model is then developed with the final stacking arrangement of the produce to study the cooling efficiency of packages under several conditions and configurations. Here, a typical precooling operation is simulated to demonstrate the large differences between using actual 3D shapes of the fruit and an equivalent-spheres approach that simplifies the problem drastically. From this study, it is concluded that using a simplified representation of the actual fruit shape may lead to a severe overestimation of the cooling behaviour.
GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.
Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun
2013-01-01
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computational efficiency by exploiting the power of massively parallel hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs, which avoids expending computational resources on unnecessary comparisons rather than merely accelerating them. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and the human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection, without time constraints.
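The filtration idea can be sketched as follows (a frequency-distance filter in the same spirit as CUDA-SWf; the paper's exact criterion, cutoff, and GPU kernels are not reproduced): sequences whose residue-frequency vectors are far from the query's cannot align well, so they are discarded before the expensive Smith-Waterman stage.

```python
from collections import Counter

def freq_vector(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Count of each amino acid residue in the sequence."""
    c = Counter(seq)
    return [c[a] for a in alphabet]

def freq_distance(q, s):
    """Cheap dissimilarity bound: sequences whose residue-frequency
    vectors differ widely cannot produce a high-scoring alignment."""
    return sum(abs(x - y) for x, y in zip(freq_vector(q), freq_vector(s)))

def filtered_candidates(query, database, cutoff=10):
    """Keep only sequences worth passing to the Smith-Waterman stage."""
    return [s for s in database if freq_distance(query, s) <= cutoff]

db = ["MKTAYIAKQR", "GGGGGGGGGG", "MKTAYIAKQL"]
print(filtered_candidates("MKTAYIAKQR", db))  # the all-G sequence is skipped
```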
Regional early flood warning system: design and implementation
NASA Astrophysics Data System (ADS)
Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.
2017-12-01
This study proposes a prototype of a regional early flood inundation warning system for Tainan City, Taiwan. AI technology is used to forecast multi-step-ahead regional flood inundation maps during storm events. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting. A database is built to organize data and information for building the real-time forecasting models, maintaining the relations of forecasted points, and displaying forecasted results, while real-time data acquisition is another key task, since the model requires immediate access to rain gauge information to provide forecast services. All database-related programs are constructed in Microsoft SQL Server using Visual C# to extract real-time hydrological data, manage data, store the forecasted data, and provide the information to the visual map-based display. The regional early flood inundation warning system uses up-to-date Web technologies, driven by the database and real-time data acquisition, to display the on-line forecasted flood inundation depths in the study area. The friendly interface sequentially shows the inundation area on Google Maps, along with the maximum inundation depth and its location, and provides a KMZ file download of the results, which can be viewed in Google Earth. The developed system can provide all the relevant information and on-line forecast results, helping city authorities to make decisions during typhoon events and take actions to mitigate losses.
Expert system shell to reason on large amounts of data
NASA Technical Reports Server (NTRS)
Giuffrida, Gionanni
1994-01-01
Current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some of the newer DBMSs come with some sort of rule mechanism; these are active and deductive database systems. However, neither is featured enough to support a full rule-based implementation. On the other hand, current expert system shells do not provide any link to external databases: all data are kept in the system working memory, which is maintained in main memory. For some applications, the limited size of the available working memory can constrain development; typically, these are applications that require reasoning over huge amounts of data, which do not fit into the computer's main memory. Moreover, in some cases these data are already available in database systems and are continuously updated while the expert system is running. This paper proposes an architecture that employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture, a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.
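A minimal sketch of the proposed coupling, using sqlite3 as the standard DBMS (the table, rule, and threshold are illustrative): the interface reduces bulk relations to a few induced summary facts, and only those facts enter the rule engine's working memory.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
db.executemany("INSERT INTO readings VALUES (?, ?)",
               [("t1", 98.0), ("t1", 99.5), ("t2", 45.0)])

def induce_facts():
    """Interface layer: reduce bulk relations to a handful of summary
    facts so only the induced knowledge enters working memory."""
    return {s: {"avg": avg, "n": n}
            for s, avg, n in db.execute(
                "SELECT sensor, AVG(value), COUNT(*) FROM readings GROUP BY sensor")}

working_memory = induce_facts()  # stays small regardless of table size

# A rule fires on the induced facts, never on the raw rows.
for sensor, fact in working_memory.items():
    if fact["avg"] > 90:
        print(f"rule fired: {sensor} runs hot (avg {fact['avg']:.1f})")
```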
Asbestos Exposure Assessment Database
NASA Technical Reports Server (NTRS)
Arcot, Divya K.
2010-01-01
Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.
Scientific Data Services -- A High-Performance I/O System with Array Semantics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Byna, Surendra; Rotem, Doron
2011-09-21
As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
Tourassi, Georgia D; Harrawood, Brian; Singh, Swatee; Lo, Joseph Y; Floyd, Carey E
2007-01-01
The purpose of this study was to evaluate image similarity measures employed in an information-theoretic computer-assisted detection (IT-CAD) scheme. The scheme was developed for content-based retrieval and detection of masses in screening mammograms. The study is aimed toward an interactive clinical paradigm where physicians query the proposed IT-CAD scheme on mammographic locations that are either visually suspicious or indicated as suspicious by other cuing CAD systems. The IT-CAD scheme provides an evidence-based, second opinion for query mammographic locations using a knowledge database of mass and normal cases. In this study, eight entropy-based similarity measures were compared with respect to retrieval precision and detection accuracy using a database of 1820 mammographic regions of interest. The IT-CAD scheme was then validated on a separate database for false positive reduction of progressively more challenging visual cues generated by an existing, in-house mass detection system. The study showed that the image similarity measures fall into one of two categories; one category is better suited to the retrieval of semantically similar cases while the second is more effective with knowledge-based decisions regarding the presence of a true mass in the query location. In addition, the IT-CAD scheme yielded a substantial reduction in false-positive detections while maintaining high detection rate for malignant masses.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining the high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and the Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
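The auto-population step can be sketched as follows (the field names and report template are hypothetical, and sqlite3 stands in for MySQL; the original used PHP against a RIS): structured report templates make field extraction a simple, reliable parse.

```python
import re, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE procedure_log (site TEXT, operator TEXT, complication TEXT)")

# Hypothetical structured-report template; actual RIS fields differ.
REPORT = """PROCEDURE SITE: liver
OPERATOR: Smith
COMPLICATION: none"""

def autopopulate(report_text):
    """Pull labeled fields out of a structured report and insert a log
    row; templated reports make this extraction reliable."""
    fields = dict(re.findall(r"^(\w[\w ]*): *(.+)$", report_text, re.M))
    db.execute("INSERT INTO procedure_log VALUES (?, ?, ?)",
               (fields.get("PROCEDURE SITE"), fields.get("OPERATOR"),
                fields.get("COMPLICATION")))
    db.commit()

autopopulate(REPORT)
print(db.execute("SELECT * FROM procedure_log").fetchall())
```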
Structure elucidation of organic compounds aided by the computer program system SCANNET
NASA Astrophysics Data System (ADS)
Guzowska-Swider, B.; Hippe, Z. S.
1992-12-01
Recognition of chemical structure is a very important problem, currently solved by molecular spectroscopy, particularly IR, UV, NMR and Raman spectroscopy, and mass spectrometry. Nowadays, solution of the problem is frequently aided by the computer. SCANNET is a computer program system for structure elucidation of organic compounds, developed by our group. The structure recognition of an unknown substance is made by comparing its spectrum with successive reference spectra of standard compounds, i.e. chemical compounds of known chemical structure, stored in a spectral database. The SCANNET system consists of six different spectral databases for the following analytical methods: IR, UV, 13C-NMR, 1H-NMR and Raman spectroscopy, and mass spectrometry. To elucidate a structure, a chemist can use one of these spectral methods or a combination of them and search the appropriate databases. As the result of searching each spectral database, the user obtains a list of chemical substances whose spectra are identical and/or similar to the spectrum input into the computer. The final information obtained from searching the spectral databases is a list of chemical substances having all the examined spectra, for each type of spectroscopy, identical or similar to those of the unknown compound.
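Spectral library search of this kind can be sketched in a few lines (a generic normalized-correlation match score on a shared wavenumber grid; SCANNET's actual similarity criteria are not reproduced):

```python
def similarity(query, reference):
    """Normalized dot product between two absorbance vectors sampled on
    the same wavenumber grid -- a common spectral match score."""
    dot = sum(q * r for q, r in zip(query, reference))
    nq = sum(q * q for q in query) ** 0.5
    nr = sum(r * r for r in reference) ** 0.5
    return dot / (nq * nr)

def search(query, library, top=3):
    """Rank reference spectra of known compounds by similarity to the query."""
    return sorted(library.items(), key=lambda kv: -similarity(query, kv[1]))[:top]

library = {"ethanol": [0.1, 0.9, 0.3], "acetone": [0.8, 0.2, 0.1]}
print(search([0.15, 0.85, 0.25], library, top=1))  # closest reference first
```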
Ray Modeling Methods for Range Dependent Ocean Environments
1983-12-01
the eikonal equation, gives rise to equations for ray paths which are perpendicular to the wave fronts. Equation II.4, the transport equation, leads... databases for use by MEDUSA. The author has assisted in the installation of MEDUSA at computer facilities which possess databases containing archives of...sound velocity profiles, bathymetry, and bottom loss data. At each computer site, programs convert the archival data retrieved by the database system
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; and the Multi-lingual, Multi-Model, Multi-Backend Database.
Dodd, Lori E; Wagner, Robert F; Armato, Samuel G; McNitt-Gray, Michael F; Beiden, Sergey; Chan, Heang-Ping; Gur, David; McLennan, Geoffrey; Metz, Charles E; Petrick, Nicholas; Sahiner, Berkman; Sayre, Jim
2004-04-01
Cancer of the lung and bronchus is the leading fatal malignancy in the United States. Five-year survival is low, but treatment of early stage disease considerably improves chances of survival. Advances in multidetector-row computed tomography technology provide detection of smaller lung nodules and offer a potentially effective screening tool. The large number of images per exam, however, requires considerable radiologist time for interpretation and is an impediment to clinical throughput. Thus, computer-aided diagnosis (CAD) methods are needed to assist radiologists with their decision making. To promote the development of CAD methods, the National Cancer Institute formed the Lung Image Database Consortium (LIDC). The LIDC is charged with developing the consensus and standards necessary to create an image database of multidetector-row computed tomography lung images as a resource for CAD researchers. To develop such a prospective database, its potential uses must be anticipated. The ultimate applications will influence the information that must be included along with the images, the relevant measures of algorithm performance, and the number of required images. In this article we outline assessment methodologies and statistical issues as they relate to several potential uses of the LIDC database. We review methods for performance assessment and discuss issues of defining "truth" as well as the complications that arise when truth information is not available. We also discuss issues about sizing and populating a database.
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2013 CFR
2013-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2014 CFR
2014-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
32 CFR 240.5 - Responsibilities.
Code of Federal Regulations, 2012 CFR
2012-07-01
... IASP and provide academic scholarships and grants in accordance with 10 U.S.C. 2200 and 7045. (3... graduation from their academic program. (C) Ensure that all students' academic eligibility is maintained... Steering Committee. (3) Maintain databases to support the analysis of performance results. (c) The...
75 FR 27051 - Privacy Act of 1974: System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-13
... address and appears below: DOT/FMCSA 004 SYSTEM NAME: National Consumer Complaint Database (NCCDB.... A system, database, and procedures for filing and logging consumer complaints relating to household... are stored in an automated system operated and maintained at the Volpe National Transportation Systems...
Maintaining Research Documents with Database Management Software.
ERIC Educational Resources Information Center
Harrington, Stuart A.
1999-01-01
Discusses taking notes for research projects and organizing them into card files; reviews the literature on personal filing systems; introduces the basic process of database management; and offers a plan for managing research notes. Describes field groups and field definitions, data entry, and creating reports. (LRW)
75 FR 78995 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
... fellowship applicants and alumni in one integrated database. FMS provides an efficient and effective way for processing application data, selecting qualified candidates, maintaining a current alumni database...; submission of academic transcripts and letters of recommendation; a review by selected programmatic staff and...
MaizeGDB: New tools and resource
USDA-ARS?s Scientific Manuscript database
MaizeGDB, the USDA-ARS genetics and genomics database, is a highly curated, community-oriented informatics service to researchers focused on the crop plant and model organism Zea mays. MaizeGDB facilitates maize research by curating, integrating, and maintaining a database that serves as the central...
Yanagita, Satoshi; Imahana, Masato; Suwa, Kazuaki; Sugimura, Hitomi; Nishiki, Masayuki
2016-01-01
The Japanese Society of Radiological Technology (JSRT) standard digital image database contains many useful cases of chest X-ray images and has been used in many state-of-the-art studies. However, the pixel values of all the images were simply digitized as relative density values with a film digitizer. As a result, the pixel values are completely different from the standardized display system input value defined in Digital Imaging and Communications in Medicine (DICOM), called the presentation value (P-value), which maintains visual consistency when images are observed on displays of different luminance. Therefore, we converted all the images in the JSRT standard digital image database to DICOM format, followed by conversion of the pixel values to P-values using a program we developed. Consequently, the JSRT standard digital image database has been modified so that the visual consistency of the images is maintained across displays of different luminance.
Safeguarding Databases Basic Concepts Revisited.
ERIC Educational Resources Information Center
Cardinali, Richard
1995-01-01
Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)
AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide
2015-11-19
Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single base-pair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. This database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on the integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.
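The position weight matrix summaries mentioned above can be sketched as follows (the matrix values are toy numbers, not database entries): a PWM assigns each base at each position a log-odds score, and sliding it along a sequence ranks candidate binding sites.

```python
# Toy position weight matrix (log-odds) for a 3-bp site; real PWMs in the
# database summarize measured binding affinities.
PWM = [{"A": 1.2, "C": -0.5, "G": -0.8, "T": -0.5},
       {"A": -0.7, "C": 1.1, "G": -0.6, "T": -0.4},
       {"A": -0.3, "C": -0.9, "G": 1.3, "T": -0.6}]

def pwm_score(site):
    """Sum per-position log-odds; higher means closer to the consensus."""
    return sum(col[base] for col, base in zip(PWM, site))

def best_site(sequence, width=len(PWM)):
    """Slide the PWM along a sequence and return the top-scoring window."""
    windows = [sequence[i:i + width] for i in range(len(sequence) - width + 1)]
    return max(windows, key=pwm_score)

print(best_site("TTACGACGT"))  # -> 'ACG', the consensus of this toy matrix
```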
Impact of database quality in knowledge-based treatment planning for prostate cancer.
Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D
2018-03-13
This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ at risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P < .001), whereas CPD predictions for rectum D10 (P = .005) and D30 (P < .001) were significantly less than MCOD predictions. KBP predictions were statistically achievable in the replans for all predicted dose-volumes, excluding D10 of the bladder (P = .03) and rectum (P = .04). Compared with clinical plans, replans showed significant average reductions in Dmean for the bladder (7.8 Gy; P < .001) and rectum (9.4 Gy; P < .001), while maintaining statistically similar planning target volume, femoral head, and penile bulb dose. KBP dose-volume predictions derived from Pareto plans were more optimal overall than those resulting from manually optimized clinical plans, which significantly improved KBP-assisted plan quality. This work investigates how the plan quality of knowledge databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability.
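The overlap volume histogram (OVH) anatomy descriptor used for patient matching can be sketched as follows (a basic OVH on toy masks; the paper's variant, which restricts the organ-at-risk volume to the treatment fields, is omitted here): for each radius r, it records the fraction of OAR voxels within distance r of the target.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(target_mask, oar_mask, spacing=(1.0, 1.0, 1.0)):
    """Fraction of OAR voxels lying within distance r of the target, over
    a range of r -- the descriptor used to match new patients to similar
    database patients."""
    # Distance to the nearest target voxel; zero inside the target.
    dist_to_target = distance_transform_edt(~target_mask, sampling=spacing)
    oar_dists = dist_to_target[oar_mask]
    radii = np.linspace(0, oar_dists.max(), 32)
    return radii, np.array([(oar_dists <= r).mean() for r in radii])

# Toy 3D masks: a cubic target and a nearby slab-shaped OAR.
target = np.zeros((20, 20, 20), bool); target[8:12, 8:12, 8:12] = True
oar = np.zeros_like(target); oar[8:12, 13:17, 8:12] = True
radii, ovh = overlap_volume_histogram(target, oar)
print(float(ovh[-1]))  # -> 1.0: all OAR voxels fall within the largest radius
```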
Domain fusion analysis by applying relational algebra to protein sequence and domain databases.
Truong, Kevin; Ikura, Mitsuhiko
2003-05-06
Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
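Because the method runs on standard SQL, the core query can be sketched directly (a toy schema and toy domain assignments, executed here with Python's sqlite3; the paper's actual schema is not reproduced): two domains fused on one protein in organism A, but carried by two distinct proteins in organism B, flag those two proteins as putatively functionally linked.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE domains (protein TEXT, organism TEXT, domain TEXT)")
db.executemany("INSERT INTO domains VALUES (?, ?, ?)", [
    ("P1", "H.sapiens", "kinase"), ("P1", "H.sapiens", "SH2"),        # fused in human
    ("Q1", "S.cerevisiae", "kinase"), ("Q2", "S.cerevisiae", "SH2"),  # separate in yeast
])

# Find domain pairs fused on one protein in organism A, then report the
# distinct proteins carrying those domains in another organism B.
LINKED = """
SELECT DISTINCT b1.protein, b2.protein
FROM domains a1 JOIN domains a2
       ON a1.protein = a2.protein AND a1.organism = a2.organism
       AND a1.domain < a2.domain
     JOIN domains b1 ON b1.domain = a1.domain
     JOIN domains b2 ON b2.domain = a2.domain
       AND b1.organism = b2.organism AND b1.protein <> b2.protein
WHERE b1.organism <> a1.organism
"""
print(db.execute(LINKED).fetchall())  # -> the yeast pair Q1/Q2 is flagged as linked
```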
NASA Astrophysics Data System (ADS)
Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.
2018-02-01
Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
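The multiple-scale-factor idea can be sketched as follows (the regions and factor values below are hypothetical; the actual per-basis-set factors live in PAHdb): each computed harmonic frequency is multiplied by a factor chosen for its spectral region rather than by one global factor.

```python
# Hypothetical scale factors by vibrational region -- illustrative values only.
SCALE = {"CH_stretch": 0.960, "mid_IR": 0.973, "far_IR": 0.988}

def region(freq_cm1):
    """Assign a frequency to a spectral region (boundaries are assumptions)."""
    if freq_cm1 > 2500:
        return "CH_stretch"
    return "mid_IR" if freq_cm1 > 500 else "far_IR"

def scale_harmonics(freqs):
    """Apply a region-dependent factor to each computed harmonic frequency,
    instead of one global factor, to better match experimental fundamentals."""
    return [f * SCALE[region(f)] for f in freqs]

print(scale_harmonics([3060.0, 1600.0, 400.0]))
```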
Kentucky geotechnical database.
DOT National Transportation Integrated Search
2005-03-01
Development of a comprehensive dynamic, geotechnical database is described. Computer software selected to program the client/server application in windows environment, components and structure of the geotechnical database, and primary factors cons...
USDA-ARS?s Scientific Manuscript database
No comprehensive protocols exist for the collection, standardization, and storage of agronomic management information into a database that preserves privacy, maintains data uncertainty, and translates everyday decisions into quantitative values. This manuscript describes the development of a databas...
EPA's Integrated Risk Information System (IRIS) database was developed and is maintained by EPA's Office of Research and Developement, National Center for Environmental Assessment. IRIS is a database of human health effects that may result from exposure to various substances fou...
48 CFR 52.219-8 - Utilization of small business concerns.
Code of Federal Regulations, 2013 CFR
2013-10-01
... United States Small Business Administration or the awarding agency of the United States as may be... List of Qualified HUBZone Small Business Concerns maintained by the Small Business Administration... small disadvantaged business in the Dynamic Small Business Search database maintained by the Small...
48 CFR 52.219-8 - Utilization of Small Business Concerns.
Code of Federal Regulations, 2014 CFR
2014-10-01
... United States Small Business Administration or the awarding agency of the United States as may be... List of Qualified HUBZone Small Business Concerns maintained by the Small Business Administration... small disadvantaged business in the Dynamic Small Business Search database maintained by the Small...
Description of 'REQUEST-KYUSHYU' for KYUKEICHO regional data base
NASA Astrophysics Data System (ADS)
Takimoto, Shin'ichi
The Kyushu Economic Research Association (an incorporated foundation) recently initiated the regional database service 'REQUEST-Kyushu'. It is a full-scale database compiled from the information and know-how the Association has accumulated over forty years. It comprises a regional information database of journal and newspaper articles and a statistical information database of economic statistics. The former is searched on a personal computer, and the search result (original text) is then sent by facsimile. The latter is also searched on a personal computer, where the data can be processed, edited, or downloaded. This paper describes the characteristics, content, and system outline of 'REQUEST-Kyushu'.
Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela
2012-01-01
In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain; Boonne, Cathy; Poulet-Crovisier, Nathalie
2010-05-01
The GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). It is a system comprising three independent sub-databases devoted respectively to: line transition parameters; infrared and ultraviolet/visible absorption cross-sections; and microphysical and optical properties of atmospheric aerosols. The updated 2009 edition (GEISA-09) archives, in its line transition parameters sub-section, 50 molecules, corresponding to 111 isotopes, for a total of 3,807,997 entries in the spectral range from 10^-6 to 35,877.031 cm^-1. A detailed description of the whole database contents will be documented. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre WEB site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. These facilities will be described and widely illustrated as well. Interactive demonstrations will be given if technically feasible at the time of the Poster Display Session. More than 350 researchers are registered for on-line use of GEISA on Ether. Currently, GEISA is involved in activities (2) related to the remote sensing of the terrestrial atmosphere, thanks to the sounding performances of the new generation of hyperspectral Earth atmospheric sounders, like AIRS (Atmospheric Infrared Sounder, http://www-airs.jpl.nasa.gov/) in the USA and IASI (Infrared Atmospheric Sounding Interferometer, http://earth-sciences.cnes.fr/IASI/) in Europe, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS, http://www.noveltis.fr/) with the support of CNES (2006). Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al.: The GEISA spectroscopic database: current and future archive for Earth and planetary atmosphere studies. JQSRT 109 (2008) 1043-1059. (2) Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al.: The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT 95 (2005) 429-467. (3) Scott, N.A. and A. Chédin: A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor. 20 (1981) 556-564.
Sollet, P C; de Mol, E J; van Bemmel, J H
1987-01-01
For more than a decade the Department of Medical Informatics has offered one-week training courses on the subject of computer applications in medicine and health care. Since 1983 two courses are given at a rate of one course every two weeks. One course is on programming and problem solving and consists of three modules of increasing complexity in techniques and methods in programming and structured system development. This course focusses on only some aspects of medical informatics: the development of a medical information system, and the problems occurring in the process of automation. These aspects, however, are dealt with in detail. To this end the students are trained in using the programming system MUMPS and the fourth-generation software package AIDA. The second, introductory course is an intensive training on several distinct areas of man-machine interactions. It contains lessons in the fields of communication and recording; storage and retrieval and databases; computation and automation; recognition and diagnosis; and therapy and control. This paper describes the use of AIDA in developing and maintaining lessons for the latter course, and the assistance of AIDA for teaching purposes in the former course.
Software Supports Distributed Operations via the Internet
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Backers, Paul; Steinke, Robert
2003-01-01
Multi-mission Encrypted Communication System (MECS) is a computer program that enables authorized, geographically dispersed users to gain secure access to a common set of data files via the Internet. MECS is compatible with legacy application programs and a variety of operating systems. The MECS architecture is centered around maintaining consistent replicas of data files cached on remote computers. MECS monitors these files and, whenever one is changed, the changed file is committed to a master database as soon as network connectivity makes it possible to do so. MECS provides subscriptions for remote users to automatically receive new data as they are generated. Remote users can be producers as well as consumers of data. Whereas a prior program that provides some of the same services treats disconnection of a user from the network of users as an error from which recovery must be effected, MECS treats disconnection as a nominal state of the network; this leads to a different design that is more efficient for serving many users, each of whom typically connects and disconnects frequently and wants only a small fraction of the data at any given time.
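The disconnection-as-nominal-state design can be sketched as follows (a minimal replica with a local change queue; the encryption, subscriptions, and conflict handling of the real MECS are omitted):

```python
import queue

class Replica:
    """Disconnection is a nominal state: changes queue locally and are
    committed to the master when connectivity returns (sketch only)."""
    def __init__(self, master):
        self.master, self.pending, self.online = master, queue.Queue(), False

    def save(self, name, data):
        """Record a local change; commit immediately only when online."""
        self.pending.put((name, data))
        if self.online:
            self.flush()

    def flush(self):
        """Commit all queued changes to the master database."""
        while not self.pending.empty():
            name, data = self.pending.get()
            self.master[name] = data

master = {}
r = Replica(master)
r.save("plan.dat", b"v1")     # offline: queued, master unchanged
r.online = True; r.flush()    # reconnect: queued change committed
print(master)
```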
Integration of a neuroimaging processing pipeline into a pan-canadian computing grid
NASA Astrophysics Data System (ADS)
Lavoie-Courchesne, S.; Rioux, P.; Chouinard-Decorte, F.; Sherif, T.; Rousseau, M.-E.; Das, S.; Adalat, R.; Doyon, J.; Craddock, C.; Margulies, D.; Chu, C.; Lyttelton, O.; Evans, A. C.; Bellec, P.
2012-02-01
The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.
Materials Databases Infrastructure Constructed by First Principles Calculations: A Review
Lin, Lianshan
2015-10-13
The First Principles calculations, especially calculations based on high-throughput density functional theory (DFT), have been widely accepted as major tools in atomic-scale materials design. The emerging supercomputers, along with powerful First Principles calculations, have accumulated hundreds of thousands of crystal and compound records. The exponential growth of computational materials information urges the development of materials databases, which not only provide unlimited storage for the daily increasing data but also remain efficient in data storage, management, query, presentation, and manipulation. This review covers the most cutting-edge materials databases in materials design and their hot applications, such as in fuel cells. By comparing the advantages and drawbacks of these high-throughput First Principles materials databases, the optimized computational framework can be identified to fit the needs of fuel cell applications. The further development of high-throughput DFT materials databases, which in essence accelerates materials innovation, is discussed in the summary as well.
Generalized pipeline for preview and rendering of synthetic holograms
NASA Astrophysics Data System (ADS)
Pappu, Ravikanth; Sparrell, Carlton J.; Underkoffler, John S.; Kropp, Adam B.; Chen, Benjie; Plesniak, Wendy J.
1997-04-01
We describe a general pipeline for the computation and display of either fully-computed holograms or holographic stereograms using the same 3D database. A rendering previewer on a Silicon Graphics Onyx allows a user to specify viewing geometry, database transformations, and scene lighting. The previewer then generates one of two descriptions of the object--a series of perspective views or a polygonal model--which is then used by a fringe rendering engine to compute fringes specific to hologram type. The images are viewed on the second generation MIT Holographic Video System. This allows a viewer to compare holographic stereograms with fully-computed holograms originating from the same database and comes closer to the goal of a single pipeline being able to display the same data in different formats.
NASA Astrophysics Data System (ADS)
Kale, Mandar; Mukhopadhyay, Sudipta; Dash, Jatindra K.; Garg, Mandeep; Khandelwal, Niranjan
2016-03-01
Interstitial lung disease (ILD) is a complicated group of pulmonary disorders. High-resolution computed tomography (HRCT) is considered the best imaging technique for analysis of different pulmonary disorders. HRCT findings can be categorized into several patterns, viz. consolidation, emphysema, ground-glass opacity, nodular, normal, etc., based on their texture-like appearance. Clinicians often find it difficult to diagnose these patterns because of their complex nature. In such a scenario, a computer-aided diagnosis system could help clinicians identify the patterns. Several approaches have been proposed for classification of ILD patterns, including computation of textural features and training/testing of classifiers such as artificial neural networks (ANN) and support vector machines (SVM). In this paper, wavelet features are calculated from two different ILD databases, the publicly available MedGIFT ILD database and a private ILD database, followed by performance evaluation of ANN and SVM classifiers in terms of average accuracy. It is found that the average classification accuracy of SVM is greater than that of ANN when trained and tested on the same database. The investigation continued by testing the variation in classifier accuracy when training and testing are performed on alternate databases, and when the classifiers are trained and tested on a database formed by merging samples of the same class from the two individual databases. The average classification accuracy drops when two independent databases are used for training and testing, respectively, and improves significantly when the classifiers are trained and tested on the merged database. This indicates the dependency of classification accuracy on the training data. It is observed that SVM outperforms ANN when the same database is used for training and testing.
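A minimal sketch of a wavelet-feature-plus-SVM pipeline of this kind (using PyWavelets and scikit-learn on synthetic patches; the feature set, wavelet, and data here are illustrative, not those of the paper):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(patch, wavelet="db2", level=2):
    """Energy of each 2-D wavelet subband -- a common texture descriptor
    for HRCT patches (the feature choice here is illustrative)."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(np.square(coeffs[0]))]       # approximation energy
    for detail in coeffs[1:]:                     # (horizontal, vertical, diagonal)
        feats.extend(np.mean(np.square(d)) for d in detail)
    return feats

rng = np.random.default_rng(0)
# Toy stand-ins for two ILD pattern classes (smooth vs noisy texture).
X = [wavelet_energy_features(rng.normal(0, s, (32, 32))) for s in [1] * 20 + [4] * 20]
y = [0] * 20 + [1] * 20
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```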
ERIC Educational Resources Information Center
Schlenker, Richard M.
This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…
48 CFR 32.1110 - Solicitation provision and contract clauses.
Code of Federal Regulations, 2010 CFR
2010-10-01
... database and maintain registration until final payment, unless— (i) Payment will be made through a third... the contractor to be registered in the CCR database. (ii)(A) If permitted by agency procedures, the... authorized, in accordance with 32.1106, to use a nondomestic EFT mechanism, the contracting officer shall...
NASA scientific and technical information for the 1990s
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.
1990-01-01
Projections for NASA scientific and technical information (STI) in the 1990s are outlined. NASA STI for the 1990s will maintain a quality bibliographic and full-text database, emphasizing electronic input and products supplemented by networked access to a wide variety of sources, particularly numeric databases.
The Missing Link: Context Loss in Online Databases
ERIC Educational Resources Information Center
Mi, Jia; Nesta, Frederick
2005-01-01
Full-text databases do not allow for the complexity of the interaction of the human eye and brain with printed matter. As a result, both content and context may be lost. The authors propose additional indexing fields that would maintain the content and context of print in electronic formats.
17 CFR 162.3 - Affiliate marketing opt out and exceptions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...
17 CFR 162.3 - Affiliate marketing opt out and exceptions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...
17 CFR 162.3 - Affiliate marketing opt out and exceptions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... places that information into a common database that the covered affiliate may access. (3) Service... maintains or accesses a common database that the covered affiliate may access) receives eligibility... the notice and opt-out provisions under other privacy rules under the FCRA, the GLB Act or the CEA. ...
Code of Federal Regulations, 2010 CFR
2010-10-01
....2 GHz to 12.7 GHz band. (a) NGSO FSS licensees shall maintain a subscriber database in a format that... database to enable the MVDDS licensee to determine whether the proposed MVDDS transmitting site meets the...
75 FR 60460 - Proposed Data Collections Submitted for Public Comment and Recommendations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... one integrated database. The mission of the SEPDPO is to prepare an applied public health workforce... candidates, maintaining a current alumni database, documenting the impact of the fellowships on alumni's... to the questions in the online application; submission of academic transcripts and letters of...
75 FR 61761 - Renewal of Charter for the Chronic Fatigue Syndrome Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... professionals, and the biomedical, academic, and research communities about chronic fatigue syndrome advances... accessing the FACA database that is maintained by the Committee Management Secretariat under the General Services Administration. The Web site address for the FACA database is http://fido.gov/facadatabase . Dated...
SLIMMER--A UNIX System-Based Information Retrieval System.
ERIC Educational Resources Information Center
Waldstein, Robert K.
1988-01-01
Describes an information retrieval system developed at Bell Laboratories to create and maintain a variety of different but interrelated databases, and to provide controlled access to these databases. The components discussed include the interfaces, indexing rules, display languages, response time, and updating procedures of the system. (6 notes…
Digital mining claim density map for federal lands in Nevada: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Nevada as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate Bureau of Land Management (BLM) State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Utah: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Utah as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Wyoming: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Wyoming as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Colorado: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Colorado as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in California: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in California as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in New Mexico: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in New Mexico as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Washington: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Washington as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Arizona: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Arizona as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill, and tunnel sites must be recorded at the appropriate BLM State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database, with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
A Web Server and Mobile App for Computing Hemolytic Potency of Peptides
NASA Astrophysics Data System (ADS)
Chaudhary, Kumardeep; Kumar, Ritesh; Singh, Sandeep; Tuknait, Abhishek; Gautam, Ankur; Mathur, Deepika; Anand, Priya; Varshney, Grish C.; Raghava, Gajendra P. S.
2016-03-01
Numerous therapeutic peptides fail to enter clinical trials solely because of their high hemolytic activity. Recently, we developed a database, Hemolytik, for maintaining experimentally validated hemolytic and non-hemolytic peptides. The present study describes a web server and mobile app developed for predicting and screening peptides with hemolytic potency. First, we generated a dataset, HemoPI-1, that contains 552 hemolytic peptides extracted from the Hemolytik database and 552 random non-hemolytic peptides (from Swiss-Prot). The sequence analysis of these peptides revealed that certain residues (e.g., L, K, F, W) and motifs (e.g., "FKK", "LKL", "KKLL", "KWK", "VLK", "CYCR", "CRR", "RFC", "RRR", "LKKL") are more abundant in hemolytic peptides. Therefore, we developed models for discriminating hemolytic and non-hemolytic peptides using various machine learning techniques and achieved more than 95% accuracy. We also developed models for discriminating peptides with high and low hemolytic potential on different datasets called HemoPI-2 and HemoPI-3. In order to serve the scientific community, we developed a web server, mobile app, and JAVA-based standalone software (http://crdd.osdd.net/raghava/hemopi/).
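As a rough illustration of the machine-learning step described above, the sketch below (not the authors' code; the peptides, labels, and model choice are stand-ins) represents each peptide by its amino-acid composition and trains a scikit-learn classifier to separate hemolytic from non-hemolytic sequences.

    # Minimal sketch of composition-based hemolytic/non-hemolytic
    # classification; toy data, not the HemoPI-1 dataset.
    from sklearn.ensemble import RandomForestClassifier

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition(peptide):
        """Fraction of each of the 20 residues in the peptide."""
        n = len(peptide)
        return [peptide.count(aa) / n for aa in AMINO_ACIDS]

    # Stand-ins for the 552 + 552 peptides described above.
    peptides = ["FLKKLKKLL", "KWKLFKKIGAVLKVL", "GSGSGSGSG", "AEDTAEDTA"]
    labels   = [1, 1, 0, 0]  # 1 = hemolytic, 0 = non-hemolytic

    X = [composition(p) for p in peptides]
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(clf.predict([composition("LKLLKKLLK")]))  # L/K-rich, like the positives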
Cybersecurity in healthcare: A systematic review of modern threats and trends.
Kruse, Clemens Scott; Frederick, Benjamin; Jacobson, Taylor; Monticone, D Kyle
2017-01-01
The adoption of healthcare technology is arduous, and it requires planning and implementation time. Healthcare organizations are vulnerable to modern threats because they have not kept pace with them. The objective of this systematic review is to identify cybersecurity trends, including ransomware, and possible solutions by querying academic literature. The reviewers conducted three separate searches through the CINAHL, PubMed (MEDLINE), and Nursing and Allied Health Source (via ProQuest) databases. Using keywords with Boolean operators, database filters, and hand screening, we identified 31 articles that met the objective of the review. The analysis of the 31 articles showed that the healthcare industry lags behind in security. Like other industries, healthcare should clearly define cybersecurity duties, establish clear procedures for upgrading software and handling a data breach, use VLANs, deauthentication, and cloud-based computing, and train users not to open suspicious code. The healthcare industry is a prime target for medical information theft because it lags behind other leading industries in securing vital data. It is imperative that time and funding be invested in maintaining and ensuring the protection of healthcare technology and the confidentiality of patient information from unauthorized access.
48 CFR 52.219-8 - Utilization of small business concerns.
Code of Federal Regulations, 2011 CFR
2011-10-01
... United States Small Business Administration or the awarding agency of the United States as may be... List of Qualified HUBZone Small Business Concerns maintained by the Small Business Administration... small disadvantaged business in the CCR Dynamic Small Business Search database maintained by the Small...
48 CFR 52.219-8 - Utilization of small business concerns.
Code of Federal Regulations, 2012 CFR
2012-10-01
... United States Small Business Administration or the awarding agency of the United States as may be... List of Qualified HUBZone Small Business Concerns maintained by the Small Business Administration... small disadvantaged business in the CCR Dynamic Small Business Search database maintained by the Small...
Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1
2013-05-14
...include tensor contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. Section 8 summarizes our results and outlines the contents of Part 2 of this paper, which will discuss how to compute lower bounds... (Section 2, Geometric Model) We begin by reviewing the geometric...
Integrated Primary Care Information Database (IPCI)
The Integrated Primary Care Information Database is a longitudinal observational database created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.
The BDNYC database of low-mass stars, brown dwarfs, and planetary mass companions
NASA Astrophysics Data System (ADS)
Cruz, Kelle; Rodriguez, David; Filippazzo, Joseph; Gonzales, Eileen; Faherty, Jacqueline K.; Rice, Emily; BDNYC
2018-01-01
We present a web-interface to a database of low-mass stars, brown dwarfs, and planetary mass companions. Users can send SELECT SQL queries to the database, perform searches by coordinates or name, check the database inventory on specified objects, and even plot spectra interactively. The initial version of this database contains information for 198 objects and version 2 will contain over 1000 objects. The database currently includes photometric data from 2MASS, WISE, and Spitzer and version 2 will include a significant portion of the publicly available optical and NIR spectra for brown dwarfs. The database is maintained and curated by the BDNYC research group and we welcome contributions from other researchers via GitHub.
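The abstract above mentions SELECT SQL queries and searches by coordinates. The sketch below is a minimal, self-contained illustration of such a coordinate box query against an SQLite stand-in; the table and column names are hypothetical, not the BDNYC schema.

    # Minimal sketch of a coordinate search of the kind the web interface
    # supports; schema and values are illustrative.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("""CREATE TABLE sources
                   (id INTEGER PRIMARY KEY, name TEXT, ra REAL, dec REAL)""")
    cur.execute("INSERT INTO sources (name, ra, dec) VALUES (?, ?, ?)",
                ("J0000+0000 (toy)", 0.05, -0.02))

    # Search by coordinates: objects inside a small box around (ra0, dec0).
    ra0, dec0, r = 0.0, 0.0, 0.1
    cur.execute("SELECT name FROM sources WHERE ra BETWEEN ? AND ? "
                "AND dec BETWEEN ? AND ?",
                (ra0 - r, ra0 + r, dec0 - r, dec0 + r))
    print(cur.fetchall())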
Hardwood log defect photographic database, software and user's guide
R. Edward Thomas
2009-01-01
Computer software and user's guide for Hardwood Log Defect Photographic Database. The database contains photographs and information on external hardwood log defects and the corresponding internal characteristics. This database allows users to search for specific defect types, sizes, and locations by tree species. For every defect, the database contains photos of...
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides the information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC clients and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertain to an individual EOC's jurisdiction are stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
ERIC Educational Resources Information Center
Criscuolo, Chiara; Martin, Ralf
2004-01-01
The main objective of this Working Paper is to show a set of indicators on the knowledge-based economy for China, mainly compiled from databases within EAS, although data from databases maintained by other parts of the OECD are included as well. These indicators are put in context by comparison with data for the United States, Japan and the EU (or…
Gama-Castro, Socorro; Salgado, Heladia; Santos-Zavaleta, Alberto; Ledezma-Tejeida, Daniela; Muñiz-Rascado, Luis; García-Sotelo, Jair Santiago; Alquicira-Hernández, Kevin; Martínez-Flores, Irma; Pannier, Lucia; Castro-Mondragón, Jaime Abraham; Medina-Rivera, Alejandra; Solano-Lira, Hilda; Bonavides-Martínez, César; Pérez-Rueda, Ernesto; Alquicira-Hernández, Shirley; Porrón-Sotelo, Liliana; López-Fuentes, Alejandra; Hernández-Koutoucheva, Anastasia; Moral-Chávez, Víctor Del; Rinaldi, Fabio; Collado-Vides, Julio
2016-01-01
RegulonDB (http://regulondb.ccg.unam.mx) is one of the most useful and important resources on bacterial gene regulation, as it integrates the scattered scientific knowledge of the best-characterized organism, Escherichia coli K-12, in a database that organizes large amounts of data. Its electronic format enables researchers to compare their results with the legacy of previous knowledge and supports bioinformatics tools and model building. Here, we summarize our progress with RegulonDB since our last Nucleic Acids Research publication describing RegulonDB, in 2013. In addition to keeping curation up to date, we report a collection of 232 interactions with small RNAs affecting 192 genes, and the complete repertoire of 189 Elementary Genetic Sensory-Response units (GENSOR units), integrating the signal, regulatory interactions, and metabolic pathways they govern. These additions represent major progress toward a higher level of understanding of regulated processes. We have updated the computationally predicted transcription factors, which total 304 (184 with experimental evidence and 120 from computational predictions); we updated our position-weight matrices and have included tools for clustering them in evolutionary families. We describe our semiautomatic strategy to accelerate curation, including datasets from high-throughput experiments, a novel coexpression distance to search for 'neighborhood' genes of known operons and regulons, and computational developments. PMID:26527724
NASA Technical Reports Server (NTRS)
Haering, Edward A., Jr.; Murray, James E.; Purifoy, Dana D.; Graham, David H.; Meredith, Keith B.; Ashburn, Christopher E.; Stucky, Mark
2005-01-01
The Shaped Sonic Boom Demonstration project showed for the first time that by careful design of aircraft contour the resultant sonic boom can maintain a tailored shape, propagating through a real atmosphere down to ground level. In order to assess the propagation characteristics of the shaped sonic boom and to validate computational fluid dynamics codes, airborne measurements were taken of the pressure signatures in the near field by probing with an instrumented F-15B aircraft, and in the far field by overflying an instrumented L-23 sailplane. This paper describes each aircraft and their instrumentation systems, the airdata calibration, analysis of the near- and far-field airborne data, and shows the good to excellent agreement between computational fluid dynamics solutions and flight data. The flights of the Shaped Sonic Boom Demonstration aircraft occurred in two phases. Instrumentation problems were encountered during the first phase, and corrections and improvements were made to the instrumentation system for the second phase, which are documented in the paper. Piloting technique and observations are also given. These airborne measurements of the Shaped Sonic Boom Demonstration aircraft are a unique and important database that will be used to validate design tools for a new generation of quiet supersonic aircraft.
Construction of image database for newspaper articles using CTS
NASA Astrophysics Data System (ADS)
Kamio, Tatsuo
Nihon Keizai Shimbun, Inc. developed a system that builds an image database of newspaper articles automatically from CTS (Computer Typesetting System) data. In addition to the article text and headlines entered in CTS, the system reproduces images of elements such as photographs and graphs for each article according to their position on the page; in effect, the computer itself clips the articles out of the newspaper. The image database is accumulated on magnetic and optical files and delivered to users by facsimile. With the spread of CTS, the number of newspaper companies building article databases is increasing rapidly, and this system is the first attempt to construct such a database automatically. This paper describes the CTS equipment that supports the system and gives an outline of the system itself.
Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal
2013-01-01
We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456
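For readers who want to try the public endpoint quoted above, a minimal Python sketch using the mysql-connector-python package follows; it assumes the server is still reachable, and, since the abstract does not give table names, it only lists the available schemas.

    # Minimal sketch of connecting with the credentials quoted above
    # ("mysql -h database.nencki-genomics.org -u public").
    import mysql.connector

    cnx = mysql.connector.connect(host="database.nencki-genomics.org",
                                  user="public")
    cur = cnx.cursor()
    cur.execute("SHOW DATABASES")   # discover the available schemas
    for (name,) in cur:
        print(name)
    cnx.close()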
1993-03-25
...application of Object-Oriented Programming (OOP) and Human-Computer Interface (HCI) design principles. Knowledge gained from each topic is applied to the design of a form-based interface for database data...
Web-Based Environment for Maintaining Legacy Software
NASA Technical Reports Server (NTRS)
Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard
2007-01-01
Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and the expertise involved in developing it. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data for test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedding of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.
The NatCarb geoportal: Linking distributed data from the Carbon Sequestration Regional Partnerships
Carr, T.R.; Rich, P.M.; Bartley, J.D.
2007-01-01
The Department of Energy (DOE) Carbon Sequestration Regional Partnerships are generating the data for a "carbon atlas" of key geospatial data (carbon sources, potential sinks, etc.) required for rapid implementation of carbon sequestration on a broad scale. The NATional CARBon Sequestration Database and Geographic Information System (NatCarb) provides Web-based, nation-wide data access. Distributed computing solutions link partnerships and other publicly accessible repositories of geological, geophysical, natural resource, infrastructure, and environmental data. Data are maintained and enhanced locally, but assembled and accessed through a single geoportal. NatCarb, as a first attempt at a national carbon cyberinfrastructure (NCCI), assembles the data required to address technical and policy challenges of carbon capture and storage. We present a path forward to design and implement a comprehensive and successful NCCI. © 2007 The Haworth Press, Inc. All rights reserved.
Improvements to the User Interface for LHCb's Software continuous integration system.
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.; Kyriazi, S.
2015-12-01
The purpose of this paper is to identify a set of steps leading to an improved interface for LHCb's Nightly Builds Dashboard. The goal is to have an efficient application that meets the needs of both the project developers, by providing them with a user-friendly interface, and of the computing team supporting the system, by providing a dashboard that allows better monitoring of the build jobs themselves. In line with what is already used by LHCb, the web interface has been implemented with the Flask Python framework for future maintainability and code clarity. The database chosen to host the data is the schema-less CouchDB [7], which offers flexibility as document formats change. To improve the user experience, we use JavaScript libraries such as jQuery [11].
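A minimal sketch of the stack described above (Flask in front of a schema-less CouchDB), not LHCb's actual dashboard code; the database name "nightlies" and the document fields are assumptions.

    # Minimal Flask view pulling build records from CouchDB's HTTP API.
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)
    COUCHDB = "http://localhost:5984/nightlies"  # assumed database name

    @app.route("/builds")
    def builds():
        # CouchDB exposes all documents via the _all_docs endpoint.
        resp = requests.get(f"{COUCHDB}/_all_docs",
                            params={"include_docs": "true"})
        docs = [row["doc"] for row in resp.json().get("rows", [])]
        return jsonify(docs)

    if __name__ == "__main__":
        app.run(debug=True)

Because CouchDB stores free-form JSON documents, a view like this keeps working even as the build-record format evolves, which matches the flexibility argument made in the abstract.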
Robotic disaster recovery efforts with ad-hoc deployable cloud computing
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.
2013-06-01
Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated by the dynamic disaster-recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.
Automated CFD Database Generation for a 2nd Generation Glide-Back-Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Michael J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejmil, Edward
2003-01-01
A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment using 13 computers located at 4 different geographical sites. Process automation and web-based access to the database greatly reduces the user workload, removing much of the tedium and tendency for user input errors. The database consists of forces, moments, and solution files obtained by varying the Mach number, angle of attack, and sideslip angle. The forces and moments compare well with experimental data. Stability derivatives are also computed using a monotone cubic spline procedure. Flow visualization and three-dimensional surface plots are used to interpret and characterize the nature of computed flow fields.
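The stability-derivative step mentioned above can be illustrated with SciPy's PCHIP (monotone cubic) interpolant; the sketch below uses made-up coefficient values, not data from the AeroDB database, and is only a plausible reading of the "monotone cubic spline procedure" the abstract names.

    # Minimal sketch: fit a monotone cubic (PCHIP) interpolant to pitching-
    # moment coefficient versus angle of attack and differentiate it.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    alpha = np.array([-4.0, 0.0, 4.0, 8.0, 12.0])        # angle of attack, deg
    cm    = np.array([0.08, 0.02, -0.03, -0.07, -0.10])  # pitching-moment coeff.

    spline = PchipInterpolator(alpha, cm)  # monotone cubic interpolant
    cm_alpha = spline.derivative()         # d(Cm)/d(alpha)
    print(cm_alpha(2.0))                   # negative slope => statically stable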
BIRCH: a user-oriented, locally-customizable, bioinformatics system.
Fristensky, Brian
2007-02-09
Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Construction of In-house Databases in a Corporation
NASA Astrophysics Data System (ADS)
Nishikawa, Takaya
The author describes the progress and present status of the information management system at the research laboratories that form the R&D component of a pharmaceutical company. The system deals with three fundamental types of information: graphic, numeric, and textual, the last of which can embed the former two. The author and colleagues have constructed a system that processes these kinds of information in an integrated fashion. A further feature is that text mixing Japanese (2-byte) and English (1-byte) characters, the natural form of information on personal computers and word processors, can be processed by mainframe computers, since the Japanese language is handled directly. The system is intended primarily for research administrators but is also useful for researchers. At present seven databases are available, including external databases, and the system is ready to accept new databases at any time.
8 CFR 338.12 - Endorsement by clerk of court in case name is changed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Endorsement by clerk of court in case name is changed. 338.12 Section 338.12 Aliens and Nationality DEPARTMENT OF HOMELAND SECURITY NATIONALITY... database for naturalization recordkeeping, the name change information will be maintained in that database...
Wind-Wildlife Impacts Literature Database (WILD)(Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Wind-Wildlife Impacts Literature Database (WILD), developed and maintained by the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL), comprises over 1,000 citations pertaining to the effects of land-based wind, offshore wind, marine and hydrokinetic energy, power lines, and communication and television towers on wildlife.
76 FR 76628 - Disclosure of Certain Credit Card Complaint Data
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... collected in its central database on complaints during the preceding year.'' 12 U.S.C. 5496(c)(4). The CFPB... to mine the data for trends and patterns and to publish their conclusions would be academics and... vehicle safety complaint database that NHTSA maintains.\\10\\ \\10\\ The data is available at http://www...
76 FR 19524 - Privacy Act of 1974; Deletion of System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... Affairs (VA) is deleting a system of records entitled ``PROS/KEYS User Permissions Database-VA'' (67VA30... requirement for VA to maintain this system of records no longer exists because the PROS/ KEYS Database was... DEPARTMENT OF VETERANS AFFAIRS Privacy Act of 1974; Deletion of System of Records AGENCY...
SW#db: GPU-Accelerated Exact Sequence Similarity Database Search.
Korpar, Matija; Šošić, Martin; Blažeka, Dino; Šikić, Mile
2015-01-01
In recent years we have witnessed growth in sequencing yield, in the number of samples sequenced, and, as a result, in the size of publicly maintained sequence databases. The increase in available data has put high requirements on protein similarity search algorithms, with two ever-opposing goals: keeping running times acceptable while maintaining a high enough level of sensitivity. The most time-consuming step of similarity search is the local alignment between query and database sequences. This step is usually performed using exact local alignment algorithms such as Smith-Waterman. Due to its quadratic time complexity, aligning a query to the whole database is usually too slow. Therefore, the majority of protein similarity search methods apply heuristics before the exact local alignment to reduce the number of candidate sequences in the database. However, there is still a need to align a query sequence to a reduced database. In this paper we present the SW#db tool and a library for fast exact similarity search. Although its running times as a standalone tool are comparable to those of BLAST, it is primarily intended for the exact local alignment phase, in which the database of sequences has already been reduced. It uses both GPU and CPU parallelization and, at the time of writing, was 4-5 times faster than SSEARCH, 6-25 times faster than CUDASW++, and more than 20 times faster than SSW, using multiple queries on the Swiss-Prot and UniRef90 databases.
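For reference, a minimal pure-Python Smith-Waterman scorer follows; it illustrates the quadratic-time local alignment step that SW#db parallelizes on GPU and CPU. The match/mismatch/gap scores are illustrative (real protein searches use a substitution matrix such as BLOSUM).

    # Minimal Smith-Waterman local alignment score; O(len(a) * len(b)) time,
    # which is why whole-database exact alignment is slow without hardware
    # parallelism.
    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0,
                              H[i - 1][j - 1] + s,  # substitution
                              H[i - 1][j] + gap,    # gap in b
                              H[i][j - 1] + gap)    # gap in a
                best = max(best, H[i][j])
        return best

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # best local score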
Map-Based Querying for Multimedia Database
Metu, Somiya (Computational and Information Sciences Directorate, ARL)
2014-09-01
...existing assets in a custom multimedia database based on an area of interest. It also describes the augmentation of an Android Tactical Assault Kit (ATAK)...
An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system
DOE Office of Scientific and Technical Information (OSTI.GOV)
AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide
2015-11-19
Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. Lastly, this database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.
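The position weight matrices mentioned in item (ii) above can be illustrated with a short sketch; the aligned binding sites, pseudocount, and background frequencies below are illustrative values, not data from the database.

    # Minimal sketch: build a log-odds position weight matrix (PWM) from
    # aligned binding sites and score a candidate site with it.
    import math

    sites = ["TGTGA", "TGTGA", "TGCGA", "TTTGA"]  # toy aligned sites
    background = 0.25                             # uniform base frequencies
    pseudo = 1.0                                  # pseudocount

    pwm = []
    for pos in range(len(sites[0])):
        column = [s[pos] for s in sites]
        scores = {}
        for base in "ACGT":
            freq = (column.count(base) + pseudo) / (len(sites) + 4 * pseudo)
            scores[base] = math.log2(freq / background)  # log-odds score
        pwm.append(scores)

    # Score a candidate site by summing per-position log-odds.
    print(sum(pwm[i][b] for i, b in enumerate("TGTGA")))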
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative, universal identification method is needed. As an alignment-free approach, DNA signatures provide new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database creates opportunities to improve the identification process, not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
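The core k-mer frequency step can be sketched on a single machine as below; the real pipeline distributes this counting with Hadoop and MapReduce over complete genome databases and also screens candidates against nontarget databases. The sequences here are stand-ins.

    # Minimal single-machine sketch of k-mer counting and common-signature
    # detection; toy sequences, not genome databases.
    from collections import Counter

    def kmers(seq, k):
        return (seq[i:i + k] for i in range(len(seq) - k + 1))

    target = ["ACGTACGTGG", "TTACGTACGA"]  # stand-in for a target database
    counts = Counter()
    for seq in target:
        counts.update(kmers(seq, 4))

    # k-mers present in every target sequence are candidate common signatures.
    common = [km for km in counts if all(km in s for s in target)]
    print(common)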
Why Save Your Course as a Relational Database?
ERIC Educational Resources Information Center
Hamilton, Gregory C.; Katz, David L.; Davis, James E.
2000-01-01
Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
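A minimal sketch of what normalization buys in a serials-control setting, as the article suggests: journal details are stored once and issue records reference them by key. All names, the ISSN, and the dates below are illustrative.

    # Minimal normalization sketch: repeated journal details are factored
    # out of the issue records into their own table and linked by key.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.executescript("""
    CREATE TABLE journal (
        id    INTEGER PRIMARY KEY,
        title TEXT,
        issn  TEXT
    );
    CREATE TABLE issue (
        id         INTEGER PRIMARY KEY,
        journal_id INTEGER REFERENCES journal(id),
        volume     INTEGER,
        number     INTEGER,
        received   TEXT
    );
    """)
    cur.execute("INSERT INTO journal (title, issn) VALUES (?, ?)",
                ("Example Review", "0000-0000"))
    cur.execute("INSERT INTO issue (journal_id, volume, number, received) "
                "VALUES (1, 6, 2, '1982-04-01')")
    # The join reconstructs the original flat view without storing duplicates.
    for row in cur.execute("SELECT j.title, i.volume, i.number, i.received "
                           "FROM issue i JOIN journal j ON i.journal_id = j.id"):
        print(row)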
Computer Assisted Learning Feature--Using Databases in Economics and Business Studies.
ERIC Educational Resources Information Center
Davies, Peter; Allison, Ron.
1989-01-01
Describes ways in which databases can be used in economics and business education classes. Explores arguments put forth by advocates for the use of databases in the classroom. Offers information on British software and discusses six online database systems listing the features of each. (KO)
47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
ERIC Educational Resources Information Center
Moore, Pam
2010-01-01
The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…
First Database Course--Keeping It All Organized
ERIC Educational Resources Information Center
Baugh, Jeanne M.
2015-01-01
All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real-world examples, both design projects and actual database application projects, are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…
47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 3 2013-10-01 2013-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 3 2014-10-01 2014-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, P.Y.; Wassom, J.S.
Scientific and technological developments bring unprecedented stress to our environment. Society has to predict the results of potential health risks from technologically based actions that may have serious, far-reaching consequences. The potential for error in making such predictions or assessments is great and multiplies with the increasing size and complexity of the problem being studied. Because of this, the availability and use of reliable data is the key to any successful forecasting effort. Scientific research and development generate new data and information, and much of the scientific data produced daily is stored in computers for subsequent analysis. This situation provides both an invaluable resource and an enormous challenge. With large amounts of government funds being devoted to health and environmental research programs, and with maintenance of our living environment at stake, we must make maximum use of the resulting data to forecast and avert catastrophic effects. The most efficient means of obtaining the data necessary for assessing the health effects of chemicals is to utilize readily available resources, including the toxicology databases and information files developed at ORNL. To make the most efficient use of the data and information that has already been prepared, attention and resources should be directed toward projects that meticulously evaluate the available data and create specialized, peer-reviewed, value-added databases. Such projects include the National Library of Medicine's Hazardous Substances Data Bank and the U.S. Air Force Installation Restoration Toxicology Guide. These and similar value-added toxicology databases were developed at ORNL and are being maintained and updated. These databases and supporting information files, as well as some data evaluation techniques, are discussed in this paper with special focus on how they are used to assess potential health effects of environmental agents. 19 refs., 5 tabs.
Bioelectric memory: modeling resting potential bistability in amphibian embryos and mammalian cells.
Law, Robert; Levin, Michael
2015-10-15
Bioelectric gradients among all cells, not just within excitable nerve and muscle, play instructive roles in developmental and regenerative pattern formation. Plasma membrane resting potential gradients regulate cell behaviors by regulating downstream transcriptional and epigenetic events. Unlike neurons, which fire rapidly and typically return to the same polarized state, developmental bioelectric signaling involves many cell types stably maintaining various levels of resting potential during morphogenetic events. It is important to begin to quantitatively model the stability of bioelectric states in cells, to understand computation and pattern maintenance during regeneration and remodeling. To facilitate the analysis of endogenous bioelectric signaling and the exploitation of voltage-based cellular controls in synthetic bioengineering applications, we sought to understand the conditions under which somatic cells can stably maintain distinct resting potential values (a type of state memory). Using the Channelpedia ion channel database, we generated an array of amphibian oocyte and mammalian membrane models for voltage evolution. These models were analyzed and searched, by simulation, for a simple dynamical property, multistability, which forms a type of voltage memory. We find that typical mammalian models and amphibian oocyte models exhibit bistability when expressing different ion channel subsets, with either persistent sodium or inward-rectifying potassium, respectively, playing a facilitative role in bistable memory formation. We illustrate this difference using fast sodium channel dynamics for which a comprehensive theory exists, where the same model exhibits bistability under mammalian conditions but not amphibian conditions. In amphibians, potassium channels from the Kv1.x and Kv2.x families tend to disrupt this bistable memory formation. We also identify some common principles under which physiological memory emerges, which suggest specific strategies for implementing memories in bioengineering contexts. Our results reveal conditions under which cells can stably maintain one of several resting voltage potential values. These models suggest testable predictions for experiments in developmental bioelectricity, and illustrate how cells can be used as versatile physiological memory elements in synthetic biology, and unconventional computation contexts.
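The membrane models described here reduce, in the simplest case, to a current-balance equation whose fixed points can be scanned numerically. Below is a minimal sketch of that idea in Python, assuming illustrative leak and persistent-sodium parameters chosen purely for demonstration (not taken from the paper or from Channelpedia):

    import numpy as np

    # Illustrative parameters (assumed, not from the paper): a leak
    # current plus a persistent sodium current with sigmoidal
    # steady-state activation.
    g_L, E_L = 0.5, -75.0        # mS/cm^2, mV
    g_NaP, E_Na = 1.0, 55.0
    V_half, k = -40.0, 6.0

    def m_inf(V):
        """Steady-state activation of the persistent sodium current."""
        return 1.0 / (1.0 + np.exp(-(V - V_half) / k))

    def dVdt(V, C=1.0):
        """Membrane equation: C dV/dt = -(I_leak + I_NaP)."""
        I = g_L * (V - E_L) + g_NaP * m_inf(V) * (V - E_Na)
        return -I / C

    # Scan for fixed points (sign changes of dV/dt); a stable point
    # has dV/dt falling through zero.
    V = np.linspace(-90.0, 50.0, 100000)
    f = dVdt(V)
    crossings = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    for i in crossings:
        V_star = 0.5 * (V[i] + V[i + 1])
        stable = f[i] > 0 > f[i + 1]
        print(f"fixed point near {V_star:6.1f} mV, "
              f"{'stable' if stable else 'unstable'}")

With these parameters the scan finds two stable resting states separated by an unstable threshold, which is the signature of the bistable voltage memory discussed above.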
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Teixeira, Kristina J.; DeLucia, Evan H.; Duval, Benjamin D.
2015-10-29
To advance understanding of C dynamics of forests globally, we compiled a new database, the Forest C database (ForC-db), which contains data on ground-based measurements of ecosystem-level C stocks and annual fluxes along with disturbance history. This database currently contains 18,791 records from 2009 sites, making it the largest and most comprehensive database of C stocks and flows in forest ecosystems globally. The tropical component of the database will be published in conjunction with a manuscript that is currently under review (Anderson-Teixeira et al., in review). Database development continues, and we hope to maintain a dynamic instance of the entire (global) database.
BIOSPIDA: A Relational Database Translator for NCBI.
Hagen, Matthew S; Lee, Eva K
2010-11-13
As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to locally integrate databases from NCBI without significant workload or development time.
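Once the ASN.1 data models are rendered as relational tables, cross-database questions reduce to ordinary SQL joins. As a rough sketch (with a hypothetical, much-simplified schema; the actual tables the translator emits are not shown in the abstract), linking an Entrez-Gene record to a PubMed citation might look like:

    import sqlite3

    # Hypothetical, simplified schema for illustration only; the real
    # translator emits far richer tables for each NCBI data model.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE gene   (gene_id INTEGER PRIMARY KEY, symbol TEXT);
        CREATE TABLE pubmed (pmid INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE gene_pubmed (
            gene_id INTEGER REFERENCES gene(gene_id),
            pmid    INTEGER REFERENCES pubmed(pmid));
        INSERT INTO gene   VALUES (672, 'BRCA1');
        INSERT INTO pubmed VALUES (21347013, 'BIOSPIDA paper');
        INSERT INTO gene_pubmed VALUES (672, 21347013);
    """)
    # A complex biological question becomes a join across the
    # converted databases.
    for symbol, title in con.execute("""
            SELECT g.symbol, p.title
            FROM gene g
            JOIN gene_pubmed gp ON gp.gene_id = g.gene_id
            JOIN pubmed p       ON p.pmid     = gp.pmid"""):
        print(symbol, "->", title)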
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using it. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved with LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
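The core normative computation is simple: transform each pixel-frequency power value toward Gaussianity, then express a subject's value as a Z-score against the normative mean and standard deviation. A minimal sketch in Python, with made-up numbers standing in for the 2,394-pixel by 30-frequency normative arrays (the values are assumptions, not the study's data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for the normative database: mean and SD of the
    # log10-transformed values at each of the 2,394 gray-matter
    # pixels and 30 frequencies.
    norm_mean = np.full((2394, 30), 1.0)
    norm_sd = np.full((2394, 30), 0.15)

    def z_scores(power):
        """Z-score raw power against the norms; the log10 transform
        approximates Gaussianity, per the study."""
        return (np.log10(power) - norm_mean) / norm_sd

    # A simulated "normal" subject whose log10-power matches the norms.
    subject = 10.0 ** rng.normal(1.0, 0.15, size=(2394, 30))
    Z = z_scores(subject)

    # Two-sided tail fractions should stay near Gaussian expectations
    # (about 4.6% beyond 2 SD, about 0.27% beyond 3 SD) in normals.
    print(f"|Z| > 2: {np.mean(np.abs(Z) > 2):.2%}")
    print(f"|Z| > 3: {np.mean(np.abs(Z) > 3):.2%}")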
Whetzel, Patricia L.; Grethe, Jeffrey S.; Banks, Davis E.; Martone, Maryann E.
2015-01-01
The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government funded databases, maintained by agencies such as the European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the “hidden” web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a “search engine for data”, searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is the set of centers and projects specifically created to provide high quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions. We show how dkNET can be used to address a variety of use cases that involve searching for research resources. PMID:26393351
Vitamin and Mineral Supplement Fact Sheets
... Dictionary of Dietary Supplement Terms Dietary Supplement Label Database (DSLD) Información en español Consumer information in Spanish ... Analytical Methods and Reference Materials Dietary Supplement Label Database (DSLD) Dietary Supplement Ingredient Database (DSID) Computer Access ...
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent, and this has gradually become a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as a federated database or a data warehouse. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be run using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as the integration targets, demonstrating the availability and effectiveness of our method.
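Once the heterogeneous sources are mapped into a shared ontology, a question can be phrased once in SPARQL instead of per-database SQL. A minimal sketch using Python's rdflib, with a hypothetical materials namespace and properties (the paper's actual ontology vocabulary is not reproduced here):

    from rdflib import Graph, Literal, Namespace, RDF

    # Hypothetical namespace and properties for illustration only.
    MAT = Namespace("http://example.org/materials#")

    g = Graph()
    entry = MAT["mp-149"]   # e.g., a Materials Project-style entry id
    g.add((entry, RDF.type, MAT.Compound))
    g.add((entry, MAT.formula, Literal("Si")))
    g.add((entry, MAT.bandGapEV, Literal(1.14)))

    # One semantic query spans whatever sources were integrated.
    q = """
    PREFIX mat: <http://example.org/materials#>
    SELECT ?c ?gap WHERE {
        ?c a mat:Compound ;
           mat:bandGapEV ?gap .
        FILTER (?gap > 1.0)
    }
    """
    for row in g.query(q):
        print(row.c, row.gap)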
21 CFR 830.360 - Records to be maintained by the labeler.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Records to be maintained by the labeler. 830.360 Section 830.360 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES UNIQUE DEVICE IDENTIFICATION Global Unique Device Identification Database § 830...
30 CFR 1227.200 - What are a State's general responsibilities if it accepts a delegation?
Code of Federal Regulations, 2011 CFR
2011-07-01
... controls and accountability; (4) Maintain a system of accounts that includes a comprehensive audit trail so... production information for royalty management purposes; (c) Assist ONRR in meeting the requirements of the... maintaining adequate reference, royalty, and production databases as provided in the Standards issued under...
Publishing Trends in Educational Computing.
ERIC Educational Resources Information Center
O'Hair, Marilyn; Johnson, D. LaMont
1989-01-01
Describes results of a survey of secondary school and college teachers that was conducted to determine subject matter that should be included in educational computing journals. Areas of interest included computer applications; artificial intelligence; computer-aided instruction; computer literacy; computer-managed instruction; databases; distance…
WMC Database Evaluation. Case Study Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palounek, Andrea P. T
The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bower, J.C.; Burford, M.J.; Downing, T.R.
The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancey, P.; Logg, C.
DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.
A Database of Historical Information on Landslides and Floods in Italy
NASA Astrophysics Data System (ADS)
Guzzetti, F.; Tonelli, G.
2003-04-01
For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information on damaging landslide and flood events was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1900 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non-systematic information on landslide and flood events older than the 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events that occurred at more than 25,700 sites, and on more than 28,800 flood events that occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database, (b) the type and amount of information stored in the database, including an estimate of its completeness, and (c) examples of recent applications of the database, including a web-based GIS system to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.
The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics
NASA Astrophysics Data System (ADS)
Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying
Along with the development of the agricultural economy, the variety of farm machinery products has gradually increased, and ergonomics issues have become more and more prominent. The widespread application of computer-aided machinery design makes it possible for farm machinery design to be intuitive, flexible and convenient. At present, because existing computer-aided ergonomics software does not have a human body database suited to farm machinery design in China, farm machinery designs show deviations in ergonomics analysis. This article proposes using the open database interface in CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA can produce a virtual body for practical application. Using the human posture analysis and human activity analysis modules to analyze the ergonomics of farm machinery, a computer-aided farm machinery design method based on ergonomics can be realized.
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. The selected documents may also be manually indexed by a user prior to the upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored within the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document. The system also generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the author of a document referencing the changed document will be alerted to make appropriate changes to his document. The system also allows visual comparison of documents so that the user may see differences in the text of the documents.
Top 10 Uses for ClarisWorks in the One-Computer Classroom.
ERIC Educational Resources Information Center
Robinette, Michelle
1996-01-01
Suggests ways to use ClarisWorks to motivate students when only one computer is accessible: (1) class database; (2) grade book; (3) classroom journal; (4) ongoing story center; (5) skill-and-draw review station; (6) monthly class magazine/newspaper; (7) research base/project planner; (8) lecture and presentation enhancement; (9) database of ideas…
Enhancements to the Redmine Database Metrics Plug in
2017-08-01
... management web application has been adopted within the US Army Research Laboratory's Computational and Information Sciences Directorate as a database... Metrics Plug-in, by Terry C Jameson, Computational and Information Sciences Directorate, ARL. Approved for public release...
Alternative treatment technology information center computer database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, D.
1995-10-01
The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.
First Look--The Aerospace Database.
ERIC Educational Resources Information Center
Kavanagh, Stephen K.; Miller, Jay G.
1986-01-01
Presents overview prepared by producer of database newly available in 1985 that covers 10 subject categories: engineering, geosciences, chemistry and materials, space sciences, aeronautics, astronautics, mathematical and computer sciences, physics, social sciences, and life sciences. Database development, unique features, document delivery, sample…
Developing Database Files for Student Use.
ERIC Educational Resources Information Center
Warner, Michael
1988-01-01
Presents guidelines for creating student database files that supplement classroom teaching. Highlights include determining educational objectives, planning the database with computer specialists and subject area specialists, data entry, and creating student worksheets. Specific examples concerning elements of the periodic table and…
Improvements to the Ionizing Radiation Risk Assessment Program for NASA Astronauts
NASA Technical Reports Server (NTRS)
Semones, E. J.; Bahadori, A. A.; Picco, C. E.; Shavers, M. R.; Flores-McLaughlin, J.
2011-01-01
To perform dosimetry and risk assessment, NASA collects astronaut ionizing radiation exposure data from space flight, medical imaging and therapy, aviation training activities and prior occupational exposure histories. Career risk of exposure induced death (REID) from radiation is limited to 3 percent at a 95 percent confidence level. The Radiation Health Office at Johnson Space Center (JSC) is implementing a program to integrate the gathering, storage, analysis and reporting of astronaut ionizing radiation dose and risk data and records. This work has several motivations, including more efficient analyses and greater flexibility in testing and adopting new methods for evaluating risks. The foundation for these improvements is a set of software tools called the Astronaut Radiation Exposure Analysis System (AREAS). AREAS is a series of MATLAB(Registered TradeMark)-based dose and risk analysis modules that interface with an enterprise level SQL Server database by means of a secure web service. It communicates with other JSC medical and space weather databases to maintain data integrity and consistency across systems. AREAS is part of a larger NASA Space Medicine effort, the Mission Medical Integration Strategy, whose goal is to collect accurate, high-quality and detailed astronaut health data and then present it securely, promptly, and reliably to medical support personnel. The modular approach to the AREAS design accommodates past, current, and future sources of data from active and passive detectors, space radiation transport algorithms, computational phantoms and cancer risk models. Revisions of the cancer risk model, new radiation detection equipment and improved anthropomorphic computational phantoms can be incorporated. Notable hardware updates include the Radiation Environment Monitor (which uses Medipix technology to report real-time, on-board dosimetry measurements), an updated Tissue-Equivalent Proportional Counter, and the Southwest Research Institute Radiation Assessment Detector. Also, the University of Florida hybrid phantoms, which are flexible in morphometry and positioning, are being explored as alternatives to the current NASA computational phantoms.
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos
2007-02-01
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurements results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
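For the mean, the interval bounds are easy: the mean is monotone in each x(t), so the possible means range from the mean of the lower endpoints to the mean of the upper endpoints. The variance is where the NP-hardness enters, although its maximum over a box is always attained at a vertex, so small cases can be enumerated exactly. A sketch in Python using the detection-limit example from the text, with an assumed DL value:

    from itertools import product
    from statistics import mean, pvariance

    DL = 0.5  # assumed detection limit, for illustration only

    # Pollution measurements; a reading of 0 only means "below DL",
    # i.e., the true value lies anywhere in the interval [0, DL].
    readings = [0.0, 1.2, 0.0, 2.1, 0.9]
    intervals = [(0.0, DL) if x == 0.0 else (x, x) for x in readings]

    # Mean bounds: the mean is monotone in each coordinate.
    lo = mean(a for a, _ in intervals)
    hi = mean(b for _, b in intervals)
    print(f"mean in [{lo:.3f}, {hi:.3f}]")

    # Variance upper bound: variance is convex, so its maximum over
    # the box is attained at a vertex -- enumerate the 2^n endpoint
    # combinations (fine for small n; the general problem is NP-hard).
    v_max = max(pvariance(v) for v in product(*intervals))
    print(f"max possible variance: {v_max:.3f}")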
A structured interface to the object-oriented genomics unified schema for XML-formatted data.
Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphe
2005-01-01
Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).
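The marshalling step XMLGUS performs, walking structured XML into keyed relational rows, can be sketched compactly. The element names, tables, and values below are hypothetical stand-ins, not the actual XMLGUS interface format (which also encodes relational key constraint definitions):

    import sqlite3
    import xml.etree.ElementTree as ET

    # Hypothetical input resembling a sequence-annotation feed.
    xml_doc = """
    <sequences>
      <sequence id="chr1" length="30427671">
        <feature type="centromere" start="15086046" end="15087045"/>
      </sequence>
    </sequences>
    """

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE sequence (seq_id TEXT PRIMARY KEY, length INTEGER);
        CREATE TABLE feature  (seq_id TEXT REFERENCES sequence(seq_id),
                               ftype TEXT, start_pos INTEGER,
                               end_pos INTEGER);
    """)

    # Regularized traversal: each element becomes a keyed insert, with
    # the foreign key preserving the parent-child relationship.
    root = ET.fromstring(xml_doc)
    for seq in root.iter("sequence"):
        con.execute("INSERT INTO sequence VALUES (?, ?)",
                    (seq.get("id"), int(seq.get("length"))))
        for feat in seq.iter("feature"):
            con.execute("INSERT INTO feature VALUES (?, ?, ?, ?)",
                        (seq.get("id"), feat.get("type"),
                         int(feat.get("start")), int(feat.get("end"))))
    print(con.execute("SELECT * FROM feature").fetchall())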
Scientific Communication of Geochemical Data and the Use of Computer Databases.
ERIC Educational Resources Information Center
Le Bas, M. J.; Durham, J.
1989-01-01
Describes a scheme in the United Kingdom that coordinates geochemistry publications with a computerized geochemistry database. The database comprises not only data published in the journals but also the remainder of the pertinent data set. The discussion covers the database design; collection, storage and retrieval of data; and plans for future…
42 CFR 488.68 - State Agency responsibilities for OASIS collection and data base requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... operating the OASIS system: (a) Establish and maintain an OASIS database. The State agency or other entity designated by CMS must— (1) Use a standard system developed or approved by CMS to collect, store, and analyze..., system back-up, and monitoring the status of the database; and (3) Obtain CMS approval before modifying...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-25
... multiple nondramatic musical works may be submitted electronically as XML files. Electronically submitted Notices will be maintained in a database that can be searched using any of the included fields of... the Licensing Division for a search of the database during the interim period. As such, the Office...
Analysis of a virtual memory model for maintaining database views
NASA Technical Reports Server (NTRS)
Kinsley, Kathryn C.; Hughes, Charles E.
1992-01-01
This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.
Amick, G D
1999-01-01
A database containing names of mass spectral data files generated in a forensic toxicology laboratory and two Microsoft Visual Basic programs to maintain and search this database are described. The data files (approximately 0.5 KB each) were collected from six mass spectrometers during routine casework. Data files were archived on 650 MB (74 min) recordable CD-ROMs. Each recordable CD-ROM was given a unique name, and its list of data file names was placed into the database. The present manuscript describes the use of the search and maintenance programs for searching and routine upkeep of the database and the creation of CD-ROMs for archiving of data files.
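The same design is easy to reproduce today with an embedded database standing in for the Visual Basic programs: one table maps each archived CD-ROM's unique name to the data file names it holds, searched by file name pattern. A rough sketch with invented disc and file names:

    import sqlite3

    con = sqlite3.connect(":memory:")   # use a file path to persist
    con.execute("""CREATE TABLE archive (
                       disc_name TEXT,      -- unique CD-ROM name
                       file_name TEXT)""")  -- mass spectral data file
    rows = [("TOX1999A", "case04123.ms"),
            ("TOX1999A", "case04124.ms"),
            ("TOX1999B", "case05200.ms")]
    con.executemany("INSERT INTO archive VALUES (?, ?)", rows)

    # Search: which disc holds the data files for a given case prefix?
    for disc, fname in con.execute(
            "SELECT disc_name, file_name FROM archive "
            "WHERE file_name LIKE ?", ("case0412%",)):
        print(f"{fname} archived on {disc}")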
Computer Literacy for Teachers.
ERIC Educational Resources Information Center
Sarapin, Marvin I.; Post, Paul E.
Basic concepts of computer literacy are discussed as they relate to industrial arts/technology education. Computer hardware development is briefly examined, and major software categories are defined, including database management, computer graphics, spreadsheet programs, telecommunications and networking, word processing, and computer assisted and…
Computer Science Research in Europe.
1984-08-29
... most attention: multi-databases and their structure, and (3) the dependencies between databases and multi-databases. Distributed Systems. Newcastle University, UK: having completed a multi-database system for distributed data management, the University of Newcastle is now working on a real... INRIA: a project called SIRIUS was established in 1977 at INRIA to address the communications requirements of distributed database systems, protocols for checking the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Jennifer; Sandberg, Tami
The Wind-Wildlife Impacts Literature Database (WILD), formerly known as the Avian Literature Database, was created in 1997. The goal of the database was to begin tracking the research that detailed the potential impact of wind energy development on birds. The Avian Literature Database was originally housed on a proprietary platform called Livelink ECM from OpenText and maintained by in-house technical staff. The initial set of records was added by library staff. A vital part of the newly launched Drupal-based WILD database is the Bibliography module. Many of the resources included in the database have digital object identifiers (DOI). The bibliographic information for any item that has a DOI can be imported into the database using this module. This greatly reduces the amount of manual data entry required to add records to the database. The content available in WILD is international in scope, which can be easily discerned by looking at the tags available in the browse menu.
Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.
Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio
2009-12-01
Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data reposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Use of administrative medical databases in population-based research.
Gavrielov-Yusim, Natalie; Friger, Michael
2014-03-01
Administrative medical databases are massive repositories of data collected in healthcare for various purposes. Such databases are maintained in hospitals, health maintenance organisations and health insurance organisations. Administrative databases may contain medical claims for reimbursement, records of health services, medical procedures, prescriptions, and diagnoses information. It is clear that such systems may provide a valuable variety of clinical and demographic information as well as an on-going process of data collection. In general, information gathering in these databases does not initially presume and is not planned for research purposes. Nonetheless, administrative databases may be used as a robust research tool. In this article, we address the subject of public health research that employs administrative data. We discuss the biases and the limitations of such research, as well as other important epidemiological and biostatistical key points specific to administrative database studies.
NASA Astrophysics Data System (ADS)
Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.
2008-12-01
The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB or TAFI or other user-defined compilations. Query results are displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, when supported by the data, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams.
PlantCAZyme: a database for plant carbohydrate-active enzymes
Ekstrom, Alexander; Taujale, Rahil; McGinn, Nathan; Yin, Yanbin
2014-01-01
PlantCAZyme is a database built upon dbCAN (database for automated carbohydrate active enzyme annotation), aiming to provide pre-computed sequence and annotation data of carbohydrate active enzymes (CAZymes) to plant carbohydrate and bioenergy research communities. The current version contains data of 43 790 CAZymes of 159 protein families from 35 plants (including angiosperms, gymnosperms, lycophyte and bryophyte mosses) and chlorophyte algae with fully sequenced genomes. Useful features of the database include: (i) a BLAST server and a HMMER server that allow users to search against our pre-computed sequence data for annotation purpose, (ii) a download page to allow batch downloading data of a specific CAZyme family or species and (iii) protein browse pages to provide an easy access to the most comprehensive sequence and annotation data. Database URL: http://cys.bios.niu.edu/plantcazyme/ PMID:25125445
Large-Scale 1:1 Computing Initiatives: An Open Access Database
ERIC Educational Resources Information Center
Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet
2013-01-01
This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…
Implementing Relational Operations in an Object-Oriented Database
1992-03-01
... computer aided software engineering (CASE) and computer aided design (CAD) tools. There has been some research done in the area of combining... 2. Prograph Database Engine... III. Why an R/O... in most business applications where the bulk of data being stored and manipulated is simply textual or numeric data that can be stored and manipulated
75 FR 29155 - Publicly Available Consumer Product Safety Information Database
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-24
...The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is issuing a notice of proposed rulemaking that would establish a publicly available consumer product safety information database (``database''). Section 212 of the Consumer Product Safety Improvement Act of 2008 (``CPSIA'') amended the Consumer Product Safety Act (``CPSA'') to require the Commission to establish and maintain a publicly available, searchable database on the safety of consumer products, and other products or substances regulated by the Commission. The proposed rule would interpret various statutory requirements pertaining to the information to be included in the database and also would establish provisions regarding submitting reports of harm; providing notice of reports of harm to manufacturers; publishing reports of harm and manufacturer comments in the database; and dealing with confidential and materially inaccurate information.
Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun
2015-01-01
The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the intertask parallelization technique, using the GPU only to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search that uses the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
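For reference, the serial dynamic-programming recurrence that such GPU methods parallelize is short. A plain Python sketch with a simple match/mismatch score and linear gap penalty (the paper's scoring scheme and GPU kernel are not reproduced here):

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        """Local alignment score via the Smith-Waterman recurrence:
        H[i][j] = max(0,
                      H[i-1][j-1] + s(a[i], b[j]),   # (mis)match
                      H[i-1][j]   + gap,             # gap in b
                      H[i][j-1]   + gap)             # gap in a
        """
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0,
                              H[i - 1][j - 1] + s,
                              H[i - 1][j] + gap,
                              H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))  # small demo pair

Intratask parallelization exploits the fact that the cells of each anti-diagonal of H depend only on previously computed diagonals, so one alignment can be spread across many GPU threads.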
Advances in computational metabolomics and databases deepen the understanding of metabolisms.
Tsugawa, Hiroshi
2018-01-29
Mass spectrometry (MS)-based metabolomics is the popular platform for metabolome analyses. Computational techniques for the processing of MS raw data, for example, feature detection, peak alignment, and the exclusion of false-positive peaks, have been established. The next stage of untargeted metabolomics would be to decipher the mass fragmentation of small molecules for the global identification of human-, animal-, plant-, and microbiota metabolomes, resulting in a deeper understanding of metabolisms. This review is an update on the latest computational metabolomics including known/expected structure databases, chemical ontology classifications, and mass spectrometry cheminformatics for the interpretation of mass fragmentations and for the elucidation of unknown metabolites. The importance of metabolome 'databases' and 'repositories' is also discussed because novel biological discoveries are often attributable to the accumulation of data, to relational databases, and to their statistics. Lastly, a practical guide for metabolite annotations is presented as the summary of this review.
High Performance Descriptive Semantic Analysis of Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
Physics through the 1990s: An overview
NASA Technical Reports Server (NTRS)
1986-01-01
The volume details the interaction of physics and society and presents a short summary of the progress in the major fields of physics, along with a summary of the other seven volumes of the Physics through the 1990s series. Issues and recommended policy changes are described regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs. Three supplements report in detail on international issues in physics (the US position in the field, international cooperation and competition, especially on large projects, freedom for scientists, free flow of information, and education of foreign students), the education and supply of physicists (the changes in US physics education, employment and manpower, and demographics of the field), and the organization and support of physics (government, university, and industry research; facilities; national laboratories; and decision making). An executive summary contains recommendations for maintaining excellence in physics. A glossary of scientific terms is appended.
Buu, Anne; Williams, L Keoki; Yang, James J
2018-03-01
We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of the Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis on the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method combining multiple phenotypes can increase the power of identifying markers that may not be, otherwise, chosen using marginal tests.
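For context, Fisher's combination statistic pools the per-phenotype p-values, and under independence its null distribution is exactly chi-squared. The standard result, in LaTeX notation (a sketch of the textbook form, not the paper's notation):

    X^2 \;=\; -2 \sum_{i=1}^{k} \ln p_i \;\sim\; \chi^2_{2k}
    \qquad \text{if the } p_i \text{ are independent and uniform under } H_0.

Because binary and continuous phenotypes measured on the same subjects are correlated, the chi-squared reference would mis-calibrate the type I error rate, which is why the null distribution of X^2 is estimated empirically here.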
Thriving on Chaos: The Development of a Surgical Information System
Olund, Steven R.
1988-01-01
Hospitals present unique challenges to the computer industry, generating a greater quantity and variety of data than nearly any other enterprise. This is complicated by the fact that a hospital is not one homogenous organization, but a bundle of semi-independent groups with unique data requirements. Therefore hospital information systems must be fast, flexible, reliable, easy to use and maintain, and cost-effective. The Surgical Information System at Rush Presbyterian-St. Luke's Medical Center, Chicago is such a system. It uses a Sequent Balance 21000 multi-processor superminicomputer, running industry standard tools such as the Unix operating system, a 4th generation programming language (4GL), and Structured Query Language (SQL) relational database management software. This treatise illustrates a comprehensive yet generic approach which can be applied to almost any clinical situation where access to patient data is required by a variety of medical professionals.
NASA Astrophysics Data System (ADS)
Xu, Yan; Dong, Zhao Yang; Zhang, Rui; Wong, Kit Po
2014-02-01
Maintaining transient stability is a basic requirement for secure power system operation. Preventive control deals with modifying the system operating point to withstand probable contingencies. In this article, a decision tree (DT)-based on-line preventive control strategy is proposed for transient instability prevention in power systems. Given a stability database, a distance-based feature estimation algorithm is first applied to identify the critical generators, which are then used as features to develop a DT. By interpreting the splitting rules of the DT, preventive control is realised by formulating the rules in a standard optimal power flow model and solving it. The proposed method has a transparent control mechanism, is compatible with on-line computation, and handles multiple contingencies conveniently. The effectiveness and efficiency of the method have been verified on the New England 10-machine 39-bus test system.
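The workflow described here (learn a DT from a stability database over critical-generator features, then read off its splitting rules as operating constraints) can be sketched with scikit-learn on synthetic data. The feature names, thresholds, and labeling rule below are invented placeholders, not values from the article:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)

    # Synthetic stability database: active-power outputs of two
    # hypothetical critical generators vs. a stable/unstable label
    # for a given contingency.
    X = rng.uniform(100, 900, size=(500, 2))               # MW
    y = (0.6 * X[:, 0] + 0.4 * X[:, 1] < 500).astype(int)  # 1 = stable

    dt = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # The splitting rules become operating-point constraints that a
    # preventive-control OPF can enforce (e.g., "P_g7 <= 420 MW").
    print(export_text(dt, feature_names=["P_g7_MW", "P_g30_MW"]))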
Processing sequence annotation data using the Lua programming language.
Ueno, Yutaka; Arita, Masanori; Kumagai, Toshitaka; Asai, Kiyoshi
2003-01-01
The data processing language in a graphical software tool that manages sequence annotation data from genome databases should provide flexible functions for the tasks in molecular biology research. Among currently available languages we adopted the Lua programming language. It fulfills our requirements to perform computational tasks for sequence map layouts, i.e. the handling of data containers, symbolic reference to data, and a simple programming syntax. Upon importing a foreign file, the original data are first decomposed in the Lua language while maintaining the original data schema. The converted data are parsed by the Lua interpreter and the contents are stored in our data warehouse. Then, portions of annotations are selected and arranged into our catalog format to be depicted on the sequence map. Our sequence visualization program was successfully implemented, embedding the Lua language for processing of annotation data and layout script. The program is available at http://staff.aist.go.jp/yutaka.ueno/guppy/.
Test Vehicle Forebody Wake Effects on CPAS Parachutes
NASA Technical Reports Server (NTRS)
Ray, Eric S.
2017-01-01
Parachute drag performance has been reconstructed for a large number of Capsule Parachute Assembly System (CPAS) flight tests. This allows for determining forebody wake effects indirectly through statistical means. When data are available in a "clean" wake, such as behind a slender test vehicle, the relative degradation in performance for other test vehicles can be computed as a Pressure Recovery Fraction (PRF). All four CPAS parachute types were evaluated: Forward Bay Cover Parachutes (FBCPs), Drogues, Pilots, and Mains. Many tests used the missile-shaped Parachute Compartment Drop Test Vehicle (PCDTV) to obtain data at high airspeeds. Other tests used the Orion "boilerplate" Parachute Test Vehicle (PTV) to evaluate parachute performance in a representative heatshield wake. Drag data from both vehicles are normalized to a "capsule" forebody equivalent for Orion simulations. A separate database of PCDTV-specific performance is maintained to accurately predict flight tests. Data are shared among analogous parachutes whenever possible to maximize statistical significance.
Design document for the MOODS Data Management System (MDMS), version 1.0
NASA Technical Reports Server (NTRS)
1994-01-01
The MOODS Data Management System (MDMS) provides access to the Master Oceanographic Observation Data Set (MOODS) which is maintained by the Naval Oceanographic Office (NAVOCEANO). The MDMS incorporates database technology in providing seamless access to parameter (temperature, salinity, soundspeed) vs. depth observational profile data. The MDMS is an interactive software application with a graphical user interface (GUI) that supports user control of MDMS functional capabilities. The purpose of this document is to define and describe the structural framework and logical design of the software components/units which are integrated into the major computer software configuration item (CSCI) identified as MDMS, Version 1.0. The preliminary design is based on functional specifications and requirements identified in the governing Statement of Work prepared by the Naval Oceanographic Office (NAVOCEANO) and distributed as a request for proposal by the National Aeronautics and Space Administration (NASA).
ERIC Educational Resources Information Center
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-02
... Standards and Technology's (NIST) Computer Security Division maintains a Computer Security Resource Center... Regarding Driver History Record Information Security, Continuity of Operation Planning, and Disaster... (SDLAs) to support their efforts at maintaining the security of information contained in the driver...
Harb, Omar S; Roos, David S
2015-01-01
Over the past 20 years, advances in high-throughput biological techniques and the availability of computational resources, including fast Internet access, have resulted in an explosion of large genome-scale data sets, or "big data." While such data are readily available for download from a variety of repositories for personal use and analysis, such analysis often requires computational skills that many researchers lack. As a result, a number of databases have emerged to provide scientists with online tools enabling the interrogation of data without the need for sophisticated computational skills beyond basic familiarity with an Internet browser. This chapter focuses on the Eukaryotic Pathogen Databases (EuPathDB: http://eupathdb.org) Bioinformatic Resource Center (BRC) and illustrates some of the available tools and methods.
NCI at Frederick Scientific Library Reintroduces Scientific Publications Database | Poster
A 20-year-old database of scientific publications by NCI at Frederick, FNLCR, and affiliated employees has gotten a significant facelift. Maintained by the Scientific Library, the redesigned database—which is linked from each of the Scientific Library’s web pages—offers features that were not available in previous versions, such as additional search limits and non-traditional
High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan
2010-10-04
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors.
The analysis of control trajectories using symbolic and database computing
NASA Technical Reports Server (NTRS)
Grossman, Robert
1995-01-01
This final report comprises the formal semi-annual status reports for this grant for the periods June 30-December 31, 1993, January 1-June 30, 1994, and June 1-December 31, 1994. The research supported by this grant is broadly concerned with the symbolic computation, mixed numeric-symbolic computation, and database computation of trajectories of dynamical systems, especially control systems. A review of work during the report period covers: trajectories and approximating series, the Cayley algebra of trees, actions of differential operators, geometrically stable integration algorithms, hybrid systems, trajectory stores, PTool, and other activities. A list of publications written during the report period is attached.
Charting the expansion of strategic exploratory behavior during adolescence.
Somerville, Leah H; Sasse, Stephanie F; Garrad, Megan C; Drysdale, Andrew T; Abi Akar, Nadine; Insel, Catherine; Wilson, Robert C
2017-02-01
Although models of exploratory decision making implicate a suite of strategies that guide the pursuit of information, the developmental emergence of these strategies remains poorly understood. This study takes an interdisciplinary perspective, merging computational decision making and developmental approaches to characterize age-related shifts in exploratory strategy from adolescence to young adulthood. Participants were 149 12-28-year-olds who completed a computational explore-exploit paradigm that manipulated reward value, information value, and decision horizon (i.e., the utility that information holds for future choices). Strategic directed exploration, defined as information seeking selective for long time horizons, emerged during adolescence and maintained its level through early adulthood. This age difference was partially driven by adolescents valuing immediate reward over new information. Strategic random exploration, defined as stochastic choice behavior selective for long time horizons, was invoked at comparable levels over the age range, and predicted individual differences in attitudes toward risk taking in daily life within the adolescent portion of the sample. Collectively, these findings reveal an expansion of the diversity of strategic exploration over development, implicate distinct mechanisms for directed and random exploratory strategies, and suggest novel mechanisms for adolescent-typical shifts in decision making. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Architecture of the parallel hierarchical network for fast image recognition
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures for temporal image decomposition and hierarchy formation are described by mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels of the image. At each processing stage a single output result is computed to allow a quick response from the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The forecasting method works as follows: in the results synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
Use of imagery and GIS for humanitarian demining management
NASA Astrophysics Data System (ADS)
Gentile, Jack; Gustafson, Glen C.; Kimsey, Mary; Kraenzle, Helmut; Wilson, James; Wright, Stephen
1997-11-01
In the Fall of 1996, the Center for Geographic Information Science at James Madison University became involved in a project for the Department of Defense evaluating the data needs and data management systems for humanitarian demining in the Third World. In particular, the effort focused on the information needs of demining in Cambodia and in Bosnia. In the first phase of the project one team attempted to identify all sources of unclassified country data, image data and map data. Parallel with this, another group collected information and evaluations on most of the commercial off-the-shelf computer software packages for the management of such geographic information. The result was a design for the kinds of data and the kinds of systems necessary to establish and maintain such a database as a humanitarian demining management tool. The second phase of the work involved acquiring the recommended data and systems, integrating the two, and producing a demonstration of the system. In general, the configuration involves ruggedized portable computers for field use with a greatly simplified graphical user interface, supported by a more capable central facility based on Pentium workstations and appropriate technical expertise.
Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning
NASA Astrophysics Data System (ADS)
De, Sandip; Ceriotti, Michele
Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketch-map, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.
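As a minimal sketch of the clustering step described above (not the authors' pipeline), one can feed a precomputed dissimilarity matrix between database entries to an agglomerative clustering routine; the matrix here is synthetic:

```python
# Hierarchical clustering from a precomputed (dis)similarity matrix.
# The 5x5 matrix is built from random points standing in for descriptors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
pts = rng.normal(size=(5, 3))                              # descriptor stand-ins
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # pairwise dissimilarity

Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")            # cut into 2 clusters
print(labels)
```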
Han, Guanghui; Liu, Xiabi; Han, Feifei; Santika, I Nyoman Tenaya; Zhao, Yanfeng; Zhao, Xinming; Zhou, Chunwu
2015-02-01
Lung computed tomography (CT) imaging signs play important roles in the diagnosis of lung diseases. In this paper, we review the significance of CT imaging signs in disease diagnosis and determine the inclusion criteria for CT scans and CT imaging signs in our database. We develop software for annotating abnormal regions and design the storage scheme for CT images and annotation data. Then, we present a publicly available database of lung CT imaging signs, called LISS for short, which contains 271 CT scans and 677 abnormal regions within them. The 677 abnormal regions are divided into nine categories of common CT imaging signs of lung disease (CISLs). The ground truth of these CISL regions and the corresponding categories are provided. Furthermore, to make the database publicly available, all private data in the CT scans are eliminated or replaced with provisioned values. The main characteristic of our LISS database is that it is developed from a new perspective of CT imaging signs of lung diseases instead of the commonly considered lung nodules. Thus, it is a promising resource for computer-aided detection and diagnosis research and for medical education.
Blömer, Wilhelm; Steinbrück, Arnd; Schröder, Christian; Grothaus, Franz-Josef; Melsheimer, Oliver; Mannel, Henrich; Forkel, Gerhard; Eilers, Thomas; Liebs, Thoralf R; Hassenpflug, Joachim; Jansson, Volkmar
2015-07-01
Every joint registry aims to improve patient care by identifying implants that have inferior performance. For this reason, each registry records the implant name used in the individual patient. In most registries, a paper-based approach has been utilized for this purpose. However, in addition to being time-consuming, this approach does not account for the fact that failure patterns are not necessarily implant-specific but can be associated with design features that are used in a number of implants. Therefore, we aimed to develop and evaluate an implant product library that allows both time-saving barcode scanning on site in the hospital for the registration of implant components and a detailed description of implant specifications. A task force consisting of representatives of the German Arthroplasty Registry, industry, and computer specialists agreed on a solution that allows barcode scanning of implant components and that also uses a detailed standardized classification describing arthroplasty components. The manufacturers classified all their components that are sold in Germany according to this classification. The implant database was analyzed regarding the completeness of components by algorithms and real-time data. The implant library could be set up successfully. At this point, the implant database includes more than 38,000 items, all of which were classified by the manufacturers according to the predefined scheme. Using patient data from the German Arthroplasty Registry, several errors in the database were detected, all of which were corrected by the respective implant manufacturers. The implant library developed for the German Arthroplasty Registry not only allows on-site barcode scanning for the registration of implant components; its classification tree also supports sophisticated analysis of implant characteristics, regardless of brand or manufacturer. The database is maintained by the implant manufacturers, thereby allowing registries to focus their resources on other areas of research. The database might represent a possible global model that could encourage harmonization between joint replacement registries and enable comparisons among them.
1-CMDb: A Curated Database of Genomic Variations of the One-Carbon Metabolism Pathway.
Bhat, Manoj K; Gadekar, Veerendra P; Jain, Aditya; Paul, Bobby; Rai, Padmalatha S; Satyamoorthy, Kapaettu
2017-01-01
The one-carbon metabolism pathway is vital in maintaining tissue homeostasis by driving the critical reactions of the folate and methionine cycles. A myriad of genetic and epigenetic events mark the rate of reactions in a tissue-specific manner. Integrating these to predict and provide personalized health management requires robust computational tools that can process multiomics data. The DNA sequences that may determine the chain of biological events and the endpoint reactions within one-carbon metabolism genes remain to be comprehensively recorded. Hence, we designed the one-carbon metabolism database (1-CMDb) as a platform to interrogate its association with a host of human disorders. DNA sequence and network information for a total of 48 genes involved in the one-carbon, folate-mediated pathway were extracted from a literature survey and the KEGG pathway. The information generated, collected, and compiled for all these genes from the UCSC genome browser included single nucleotide polymorphisms (SNPs), CpGs, copy number variations (CNVs), and miRNAs, and a comprehensive database was created. Furthermore, a significant correlation analysis was performed for SNPs in the pathway genes. Detailed data on SNPs, CNVs, CpG islands, and miRNAs for the 48 folate pathway genes were compiled. The SNPs in CNVs (9670), CpGs (984), and miRNAs (14) were also compiled for all pathway genes. The SIFT scores and predictions and the PolyPhen scores and predictions for each of the SNPs were tabulated for the folate pathway genes. Also included in the database for folate pathway genes are links to 124 phenotype and disease associations as reported in the literature and from publicly available information. A comprehensive database was generated consisting of genomic elements within and among SNPs, CNVs, CpGs, and miRNAs of one-carbon metabolism pathways to facilitate (a) a single source of information and (b) integration into large genome-scale network analyses to be developed in the future by the scientific community. The database can be accessed at http://slsdb.manipal.edu/ocm/. © 2017 S. Karger AG, Basel.
Information Technology Support in the 8000 Directorate
NASA Technical Reports Server (NTRS)
2004-01-01
My summer internship was spent supporting various projects within the Environmental Management Office and Glenn Safety Office. Mentored by Eli Abumeri, I was trained in areas of information technology such as servers, printers, scanners, CAD systems, the Web, programming, database management, and ODIN (networking, computers, and phones). I worked closely with the Chemical Sampling and Analysis Team (CSAT) to redesign a database to more efficiently manage and maintain data collected for the Drinking Water Program. This program has been established for over fifteen years at the Glenn Research Center. It involves the continued testing and retesting of all drinking water dispensers. The quality of the drinking water is of great importance and is determined by comparing the concentration of contaminants in the water with specifications set forth by the Environmental Protection Agency (EPA) in the Safe Drinking Water Act (SDWA) and its 1986 and 1991 amendments. The Drinking Water Program consists of periodic testing of all drinking water fountains and sinks. Each is tested at least once every 2 years for contaminants and naturally occurring species. The EPA's protocol is to collect an initial and a 5-minute draw from each dispenser. The 5-minute draw is what is used for the maximum contaminant level. However, CSAT has added a 30-second draw, since most individuals do not run the water 5 minutes prior to drinking. These data are then entered into a relational Microsoft Access database. The database allows for the quick retrieval of any test(s) done on any dispenser. The data can be queried by building number, date, or test type, and test results are documented in an analytical report for employees to read. To aid with the tracking of recycled materials within the lab, my help was enlisted to create a database that could make this process less cumbersome and more efficient. It records the date of pickup, type of material, weight received, and unit cost per recyclable material, from which it can calculate the dollar amount generated by recycling certain materials. This database will ultimately prove useful in determining the amounts of materials consumed by the lab and will help serve as an indicator of potential overuse.
NASA Astrophysics Data System (ADS)
Michel-Sendis, Franco; Martinez-González, Jesus; Gauld, Ian
2017-09-01
SFCOMPO-2.0 is a database of experimental isotopic concentrations measured in destructive radiochemical analysis of spent nuclear fuel (SNF) samples. The database includes corresponding design description of the fuel rods and assemblies, relevant operating conditions and characteristics of the host reactors necessary for modelling and simulation. Aimed at establishing a thorough, reliable, and publicly available resource for code and data validation of safety-related applications, SFCOMPO-2.0 is developed and maintained by the OECD Nuclear Energy Agency (NEA). The SFCOMPO-2.0 database is a Java application which is downloadable from the NEA website.
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, mask generation. Its main goal is to handle specific types of nodules attached to the pleura or to vessels; it consists of basic image processing operations as well as dedicated routines for these specific cases. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
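A compact sketch of the fuzzy connectedness idea the method builds on (not the authors' implementation): the connectedness of a pixel is the strength of its best path from a seed, where a path is only as strong as its weakest affinity link. The intensity-based affinity below is a simple stand-in, shown in 2D for brevity.

```python
# FC via best-first propagation: path strength = min affinity along the path,
# connectedness = max over paths. Affinity function is an assumption.
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=20.0):
    h, w = img.shape
    conn = np.zeros_like(img, dtype=float)
    conn[seed] = 1.0
    pq = [(-1.0, seed)]                               # max-heap via negation
    while pq:
        strength, (r, c) = heapq.heappop(pq)
        strength = -strength
        if strength < conn[r, c]:                     # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                aff = np.exp(-((img[r, c] - img[rr, cc]) ** 2) / (2 * sigma**2))
                cand = min(strength, aff)             # weakest link on the path
                if cand > conn[rr, cc]:
                    conn[rr, cc] = cand
                    heapq.heappush(pq, (-cand, (rr, cc)))
    return conn

img = np.zeros((8, 8)); img[2:6, 2:6] = 100.0         # bright "nodule" on background
print(fuzzy_connectedness(img, seed=(4, 4)).round(2))
```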
Digital mining claim density map for federal lands in Idaho: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Idaho as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill and tunnel sites must be recorded at the appropriate Bureau of Land Management (BLM) State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
Digital mining claim density map for federal lands in Oregon: 1996
Hyndman, Paul C.; Campbell, Harry W.
1999-01-01
This report describes a digital map generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim density information for federal lands in Oregon as of March 1997. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. There is no paper map included in this Open-File report. In accordance with the Federal Land Policy and Management Act of 1976 (FLPMA), all unpatented mining claims, mill and tunnel sites must be recorded at the appropriate Bureau of Land Management (BLM) State office. BLM maintains a cumulative computer listing of mining claims in the Mining Claim Recordation System (MCRS) database with locations given by meridian, township, range, and section. A mining claim is considered closed when the claim is relinquished or a formal BLM decision declaring the mining claim null and void has been issued and the appeal period has expired. All other mining claims filed with BLM are considered to be open and actively held. The digital map (figure 1) and the mining claim density database available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller.
UCMP and the Internet help hospital libraries share resources.
Dempsey, R; Weinstein, L
1999-07-01
The Medical Library Center of New York (MLCNY), a medical library consortium founded in 1959, has specialized in supporting resource sharing and fostering technological advances. In 1961, MLCNY developed and continues to maintain the Union Catalog of Medical Periodicals (UCMP), a resource tool including detailed data about the collections of more than 720 medical library participants. UCMP was one of the first library tools to capitalize on the benefits of computer technology and, from the beginning, invited hospital libraries to play a substantial role in its development. UCMP, beginning with products in print and later in microfiche, helped to create a new resource sharing environment. Today, UCMP continues to capitalize on new technology by providing access via the Internet and an Oracle-based search system providing subscribers with the benefits of: a database that contains serial holdings information at an issue-specific level, a database that can be updated in real time, a system that provides multi-type searching and allows users to define how the results will be sorted, and an ordering function that can more precisely target libraries that hold a specific issue of a medical journal. Current development of a Web-based system will ensure that UCMP continues to provide cost-effective and efficient resource sharing in future years.
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcifications (MCs) in mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM), called G-FSVM, is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm, and a series of fuzzy SVMs are then integrated for classification, with each group containing samples from both MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms; the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR), and EVL = TPR × (1 − FPR) are 0.82, 0.78, 0.14, and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
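A simplified sketch of the grouped-classifier idea (EM partition of the sample space, then one SVM per group); the fuzzy-membership weighting of the actual G-FSVM is omitted here, and the data are synthetic stand-ins:

```python
# EM partition via a Gaussian mixture, then a plain SVM per group; the
# fuzzy weighting of G-FSVM is intentionally left out of this sketch.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Two tissue "patterns", each containing both classes (synthetic stand-ins).
X = np.vstack([rng.normal([0, 0], 1.0, (100, 2)),
               rng.normal([6, 0], 1.0, (100, 2))])
y = (X[:, 1] > 0).astype(int)                      # 1 = MC lesion (toy rule)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
groups = gm.predict(X)
svms = {g: SVC().fit(X[groups == g], y[groups == g]) for g in np.unique(groups)}

def predict(x):
    g = gm.predict(x.reshape(1, -1))[0]            # route to the right group...
    return svms[g].predict(x.reshape(1, -1))[0]    # ...then a simple 2-class SVM

print(predict(np.array([6.2, 1.5])))               # -> 1
```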
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging, as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis, and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms, and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections, and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal, and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed toward its extension by simulating additional clinical pathologies.
Reflecting on the ethical administration of computerized medical records
NASA Astrophysics Data System (ADS)
Collmann, Jeff R.
1995-05-01
This presentation examines the ethical issues raised by computerized image management and communication systems (IMAC), the ethical principles that should guide development of policies, procedures and practices for IMAC systems, and who should be involved in developing a hospital's approach to these issues. The ready access of computerized records creates special hazards of which hospitals must be aware. Hospitals must maintain the confidentiality of patients' records while making records available to authorized users as efficiently as possible. The general conditions of contemporary health care undermine protecting the confidentiality of patient records. Patients may not provide health care institutions with information about themselves under conditions of informed consent. The field of information science must design sophisticated systems of computer security that stratify access, create audit trails on data changes and system use, safeguard patient data from corruption, and protect the databases from outside invasion. Radiology professionals must both work with information science experts in their own hospitals to create institutional safeguards and include the adequacy of security measures as a criterion for evaluating PACS systems. New policies and procedures on maintaining computerized patient records must be developed that obligate all members of the health care staff, not just care givers. Patients must be informed about the existence of computerized medical records and the rules and practices that govern their dissemination, and given the opportunity to give or withhold consent for their use. Departmental and hospital policies on confidentiality should be reviewed to determine if revisions are necessary to manage computer-based records. Well-developed discussions of the ethical principles and administrative policies on confidentiality and informed consent, and of the risks posed by computer-based patient record systems, should be included in initial and continuing staff system training. Administration should develop ways to monitor staff compliance with confidentiality policies and should assess diligence in maintaining patient record confidentiality as part of annual staff performance evaluations. Ethical management of IMAC systems is the business of all members of the health care team. Computerized patient records management (including IMAC) should be scrutinized like any other clinical medical-ethical issue. If hospitals include these processes in their planning for RIS, IMAC, and HIS systems, they should have time to develop institutional expertise on these questions before and as systems are installed, rather than only as ethical dilemmas develop during their use.
Applications of Technology to CAS Data-Base Production.
ERIC Educational Resources Information Center
Weisgerber, David W.
1984-01-01
Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…
DNA profiles, computer searches, and the Fourth Amendment.
Kimel, Catherine W
2013-01-01
Pursuant to federal statutes and to laws in all fifty states, the United States government has assembled a database containing the DNA profiles of over eleven million citizens. Without judicial authorization, the government searches each of these profiles one hundred thousand times every day, seeking to link database subjects to crimes they are not suspected of committing. Yet, courts and scholars that have addressed DNA databasing have focused their attention almost exclusively on the constitutionality of the government's seizure of the biological samples from which the profiles are generated. This Note fills a gap in the scholarship by examining the Fourth Amendment problems that arise when the government searches its vast DNA database. This Note argues that each attempt to match two DNA profiles constitutes a Fourth Amendment search because each attempted match infringes upon database subjects' expectations of privacy in their biological relationships and physical movements. The Note further argues that database searches are unreasonable as they are currently conducted, and it suggests an adaptation of computer-search procedures to remedy the constitutional deficiency.
P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)
Pillardy, J.
2007-01-01
One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.
Maintaining Pedagogical Integrity of a Computer Mediated Course Delivery in Social Foundations
ERIC Educational Resources Information Center
Stewart, Shelley; Cobb-Roberts, Deirdre; Shircliffe, Barbara J.
2013-01-01
Transforming a face to face course to a computer mediated format in social foundations (interdisciplinary field in education), while maintaining pedagogical integrity, involves strategic collaboration between instructional technologists and content area experts. This type of planned partnership requires open dialogue and a mutual respect for prior…
Brassica ASTRA: an integrated database for Brassica genomic research.
Love, Christopher G; Robinson, Andrew J; Lim, Geraldine A C; Hopkins, Clare J; Batley, Jacqueline; Barker, Gary; Spangenberg, German C; Edwards, David
2005-01-01
Brassica ASTRA is a public database for genomic information on Brassica species. The database incorporates expressed sequences with Swiss-Prot and GenBank comparative sequence annotation as well as secondary Gene Ontology (GO) annotation derived from the comparison with Arabidopsis TAIR GO annotations. Simple sequence repeat molecular markers are identified within resident sequences and mapped onto the closely related Arabidopsis genome sequence. Bacterial artificial chromosome (BAC) end sequences derived from the Multinational Brassica Genome Project are also mapped onto the Arabidopsis genome sequence enabling users to identify candidate Brassica BACs corresponding to syntenic regions of Arabidopsis. This information is maintained in a MySQL database with a web interface providing the primary means of interrogation. The database is accessible at http://hornbill.cspp.latrobe.edu.au.
A service-oriented data access control model
NASA Astrophysics Data System (ADS)
Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali
2017-01-01
The development of mobile computing, cloud computing, and distributed computing meets growing individual service needs. For complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. After analyzing common data access control models, the paper proposes a service-oriented access control model built on the mandatory access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identifiers for subjects and objects, ensuring that system services access databases securely.
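A minimal sketch of the described model, assuming Bell-LaPadula-style mandatory levels: services act as subjects, database tables as objects, and a read is allowed only when the service's level dominates the data's level. All names and levels are illustrative, not from the paper.

```python
# Mandatory-access check with services as subjects and tables as objects.
# Levels and names are made up for illustration.
SERVICE_LEVELS = {"billing-svc": 2, "report-svc": 1}   # subject clearance
DATA_LEVELS = {"salaries": 2, "inventory": 1}          # object classification

def can_read(service: str, table: str) -> bool:
    # Read permitted only if the subject's level dominates the object's;
    # unknown tables default to "deny" via an infinite required level.
    return SERVICE_LEVELS.get(service, 0) >= DATA_LEVELS.get(table, float("inf"))

print(can_read("billing-svc", "salaries"))   # True
print(can_read("report-svc", "salaries"))    # False
```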
Gama-Castro, Socorro; Salgado, Heladia; Santos-Zavaleta, Alberto; Ledezma-Tejeida, Daniela; Muñiz-Rascado, Luis; García-Sotelo, Jair Santiago; Alquicira-Hernández, Kevin; Martínez-Flores, Irma; Pannier, Lucia; Castro-Mondragón, Jaime Abraham; Medina-Rivera, Alejandra; Solano-Lira, Hilda; Bonavides-Martínez, César; Pérez-Rueda, Ernesto; Alquicira-Hernández, Shirley; Porrón-Sotelo, Liliana; López-Fuentes, Alejandra; Hernández-Koutoucheva, Anastasia; Del Moral-Chávez, Víctor; Rinaldi, Fabio; Collado-Vides, Julio
2016-01-04
RegulonDB (http://regulondb.ccg.unam.mx) is one of the most useful and important resources on bacterial gene regulation, as it integrates the scattered scientific knowledge of the best-characterized organism, Escherichia coli K-12, in a database that organizes large amounts of data. Its electronic format enables researchers to compare their results with the legacy of previous knowledge and supports bioinformatics tools and model building. Here, we summarize our progress with RegulonDB since our last Nucleic Acids Research publication describing RegulonDB, in 2013. In addition to keeping curation up to date, we report a collection of 232 interactions with small RNAs affecting 192 genes, and the complete repertoire of 189 Elementary Genetic Sensory-Response units (GENSOR units), integrating the signal, regulatory interactions, and metabolic pathways they govern. These additions represent major progress toward a higher level of understanding of regulated processes. We have updated the computationally predicted transcription factors, which total 304 (184 with experimental evidence and 120 from computational predictions); we updated our position-weight matrices and have included tools for clustering them in evolutionary families. We describe our semiautomatic strategy to accelerate curation, including datasets from high-throughput experiments, a novel coexpression distance to search for 'neighborhood' genes of known operons and regulons, and computational developments. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
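As a small illustration of the position-weight matrices mentioned above (the binding sites and pseudocount choice are made up, not RegulonDB data), a PWM is simply a column-normalized count matrix over aligned binding sites:

```python
# Build a toy PWM from aligned binding sites; sites are illustrative only.
import numpy as np

sites = ["TGTGA", "TGAGA", "TGTGC"]                # made-up aligned sites
bases = "ACGT"
counts = np.zeros((4, len(sites[0])))
for s in sites:
    for j, b in enumerate(s):
        counts[bases.index(b), j] += 1
pwm = (counts + 0.5) / (counts + 0.5).sum(axis=0)  # pseudocounts, normalize columns
print(pwm.round(2))                                # rows = A, C, G, T
```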
Computer systems and methods for the query and visualization multidimensional databases
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2017-04-25
A method of generating a data visualization is performed at a computer having a display, one or more processors, and memory. The memory stores one or more programs for execution by the one or more processors. The process receives user specification of a plurality of characteristics of a data visualization. The data visualization is based on data from a multidimensional database. The characteristics specify at least x-position and y-position of data marks corresponding to tuples of data retrieved from the database. The process generates a data visualization according to the specified plurality of characteristics. The data visualization has an x-axis defined based on data for one or more first fields from the database that specify x-position of the data marks and the data visualization has a y-axis defined based on data for one or more second fields from the database that specify y-position of the data marks.
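A minimal sketch of the claimed behavior, with matplotlib and toy tuples standing in for the patented system: a user specification maps database fields to the x- and y-positions of marks, and the axes are defined from those same fields.

```python
# Toy rendering of a field-to-position visualization spec; rows and the
# spec are stand-ins, not the patented implementation.
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

rows = [{"sales": 120, "profit": 30}, {"sales": 200, "profit": 55},
        {"sales": 90,  "profit": 10}]      # tuples retrieved from a database
spec = {"x": "sales", "y": "profit"}       # the user's visualization spec

fig, ax = plt.subplots()
ax.scatter([r[spec["x"]] for r in rows], [r[spec["y"]] for r in rows])
ax.set_xlabel(spec["x"]); ax.set_ylabel(spec["y"])   # axes defined by the fields
fig.savefig("marks.png")
```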
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with Application Programming Interface hardware that includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces the overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
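A hedged sketch of the single-query idea: fan one user query out to heterogeneous back ends through per-source adapters and merge the results. The adapters below are hypothetical stand-ins, not the invention's API.

```python
# Fan-out of one query to several (mock) heterogeneous sources in parallel.
from concurrent.futures import ThreadPoolExecutor

def query_sql(q):    return [f"sql:{q}"]       # imagine an RDBMS adapter
def query_xml(q):    return [f"xml:{q}"]       # ...an XML document store
def query_files(q):  return [f"file:{q}"]      # ...a flat-file index

ADAPTERS = [query_sql, query_xml, query_files]

def single_query(q):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda adapter: adapter(q), ADAPTERS))
    return [hit for hits in results for hit in hits]   # merged result set

print(single_query("thermal tile anomaly"))
```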
ERIC Educational Resources Information Center
Gruner, Richard; Heron, Carol E.
1984-01-01
Examines usefulness of DIALOG as legal research tool through use of DIALOG's DIALINDEX database to identify those databases among almost 200 available that contain large numbers of records related to federal securities regulation. Eight databases selected for further study are detailed. Twenty-six footnotes, database statistics, and samples are…
Construction of databases: advances and significance in clinical research.
Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian
2015-12-01
Widely used in clinical research, databases are a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate their important role in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
Ethics across the computer science curriculum: privacy modules in an introductory database course.
Appel, Florence
2005-10-01
This paper describes the author's experience of infusing an introductory database course with privacy content, and the ongoing project entitled Integrating Ethics Into the Database Curriculum that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.