Just-in-time Database-Driven Web Applications
2003-01-01
"Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109
Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed by a simpler approach using the SMI registers defined within the UK Quality and Outcomes Framework. We describe a framework for researchers using Electronic Health Records databases to identify patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
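The stub-search step can be illustrated compactly. The sketch below is not the published pcdsearch/Rpcdsearch code (those are Stata and R commands); it only mimics the idea of matching word-stubs against code descriptions and code-stubs against code prefixes, over a toy dictionary of made-up Read-style codes.

```python
# Illustrative stub search over a clinical code dictionary, in the spirit
# of pcdsearch. A code matches if its description contains any word-stub
# or its code starts with any code-stub. Codes and descriptions invented.
code_dictionary = {
    "E11..": "schizophrenic disorders",
    "E130.": "acute polymorphic psychotic disorder",
    "Eu20.": "[X]schizophrenia",
    "H33..": "asthma",
}

def stub_search(dictionary, word_stubs=(), code_stubs=()):
    hits = {}
    for code, desc in dictionary.items():
        text = desc.lower()
        if any(w.lower() in text for w in word_stubs) or \
           any(code.startswith(c) for c in code_stubs):
            hits[code] = desc
    return hits

# The candidate list is then reviewed by clinicians before case identification.
print(stub_search(code_dictionary,
                  word_stubs=["schizo", "psychot"], code_stubs=["Eu2"]))
```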
Nuclear Science References Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.
2014-06-15
The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information, covering more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of the data triggering the event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than relying on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB (Service Management Database).
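The routing rule — match message attributes against subscription rules, then deliver over each subscriber's channel — can be sketched as follows. This is an illustration in Python, not the Oracle implementation; all attribute names, users, and channels are invented.

```python
# Sketch of role/subscription-based message routing in the EDM-RBS style.
MESSAGES = [
    {"data_type": "schedule", "mission": "MRO", "classification": "alert"},
    {"data_type": "status",   "mission": "MSL", "classification": "info"},
]

SUBSCRIPTIONS = [
    # Each rule: attributes the message must match, plus a delivery channel.
    {"match": {"classification": "alert"}, "user": "ops_lead", "channel": "sms"},
    {"match": {"mission": "MSL"},          "user": "msl_team", "channel": "jms"},
]

def route(message, subscriptions):
    """Return (user, channel) pairs whose rules match every listed attribute."""
    deliveries = []
    for sub in subscriptions:
        if all(message.get(k) == v for k, v in sub["match"].items()):
            deliveries.append((sub["user"], sub["channel"]))
    return deliveries

for msg in MESSAGES:
    print(msg["data_type"], "->", route(msg, SUBSCRIPTIONS))
```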
Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed by a simpler approach using the SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers using Electronic Health Records databases to identify patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
Harris, Eric S J; Erickson, Sean D; Tolopko, Andrew N; Cao, Shugeng; Craycroft, Jane A; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E; Eisenberg, David M
2011-05-17
Ethnobotanically driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically driven natural product collection and drug-discovery programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.
2011-01-01
Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479
Fatigue Crack Growth Database for Damage Tolerance Analysis
NASA Technical Reports Server (NTRS)
Forman, R. G.; Shivakumar, V.; Cardinal, J. W.; Williams, L. C.; McKeighan, P. C.
2005-01-01
The objective of this project was to begin the process of developing a fatigue crack growth database (FCGD) of metallic materials for use in damage tolerance analysis of aircraft structure. For this initial effort, crack growth rate data in the NASGRO (Registered trademark) database, the United States Air Force Damage Tolerant Design Handbook, and other publicly available sources were examined and used to develop a database that characterizes crack growth behavior for specific applications (materials). The focus of this effort was on materials for general commercial aircraft applications, including large transport airplanes, small transport commuter airplanes, general aviation airplanes, and rotorcraft. The end products of this project are the FCGD software and this report. The specific goal of this effort was to present fatigue crack growth data in three usable formats: (1) NASGRO equation parameters, (2) Walker equation parameters, and (3) tabular data points. The development of this FCGD will begin the process of developing a consistent set of standard fatigue crack growth material properties. It is envisioned that the end product of the process will be a general repository for credible and well-documented fracture properties that may be used as a default standard in damage tolerance analyses.
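Of the three formats, the Walker form is compact enough to show inline. Below is a sketch using one common textbook form of the Walker relation, da/dN = C [ΔK / (1 - R)^(1 - gamma)]^n; the parameter values are placeholders, not entries from the FCGD.

```python
# One common form of the Walker crack-growth relation used in such
# databases: da/dN = C * (dK / (1 - R)**(1 - gamma))**n, where R is the
# stress ratio. The values below are placeholders, not database entries.
def walker_growth_rate(delta_k, r, c, n, gamma):
    """Crack growth rate da/dN for stress-intensity range delta_k."""
    delta_k_eff = delta_k / (1.0 - r) ** (1.0 - gamma)  # R-adjusted range
    return c * delta_k_eff ** n

# Example with made-up parameters:
print(walker_growth_rate(delta_k=10.0, r=0.1, c=1e-10, n=3.0, gamma=0.5))
```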
Data-driven grasp synthesis using shape matching and task-based pruning.
Li, Ying; Fu, Jiaxin L; Pollard, Nancy S
2007-01-01
Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.
Application driven interface generation for EASIE. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kao, Ya-Chen
1992-01-01
The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs that support the rapid integration and execution of analysis programs around a central relational database. EASIE provides users with two basic modes of execution. One is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode, called Complete Control Execution (CCE), provides an extended executive interface that allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. The purpose of this project is, first, to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system and, second, to develop a set of utilities to assist the experienced engineer in generating an ADE application.
Computer Applications Course Goals, Outlines, and Objectives.
ERIC Educational Resources Information Center
Law, Debbie; Morgan, Michele
This document contains a curriculum model that is designed to provide high school computer teachers with practical ideas for a 1-year computer applications course combining 3 quarters of instruction in keyboarding and 1 quarter of basic instruction in databases and spreadsheets. The document begins with a rationale and a 10-item list of…
Database tomography for commercial application
NASA Technical Reports Server (NTRS)
Kostoff, Ronald N.; Eberhart, Henry J.
1994-01-01
Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word-frequency and word-proximity analysis and build upon these results. Here "database" means any text information that can be computer stored: medical or police records, patents, journals, papers, and so on. Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and facilitates the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in competitive intelligence, national security intelligence, and patent analysis. User interest and involvement cannot be overemphasized.
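The two starting steps named here — word frequency and word proximity — are easy to illustrate. The toy sketch below counts word frequencies and co-occurrences within a sliding window; the full database tomography algorithm builds further analysis on top of such counts.

```python
# Toy version of the first two steps of database tomography: word
# frequency and word proximity (co-occurrence within a window).
from collections import Counter

def frequency_and_proximity(text, window=5):
    words = text.lower().split()
    freq = Counter(words)
    prox = Counter()
    for i, w in enumerate(words):
        for v in words[i + 1:i + 1 + window]:   # neighbors within the window
            prox[tuple(sorted((w, v)))] += 1
    return freq, prox

freq, prox = frequency_and_proximity(
    "database tomography extracts themes from text databases "
    "by counting word frequency and word proximity in text")
print(freq.most_common(3))
print(prox.most_common(3))
```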
Knowledge Quality Functions for Rule Discovery
1994-09-01
Managers in many organizations finding themselves in the possession of large and rapidly growing databases are beginning to suspect the information in their...missing values (Smyth and Goodman, 1992, p. 303). Decision trees "tend to grow very large for realistic applications and are thus difficult to interpret...by humans" (Holsheimer, 1994, p. 42). Decision trees also grow excessively complicated in the presence of noisy databases (Dhar and Tuzhilin, 1993, p
Development and evaluation of a dynamic web-based application.
Hsieh, Yichuan; Brennan, Patricia Flatley
2007-10-11
Traditional consumer health informatics (CHI) applications developed for the lay public on the Web were commonly written in Hypertext Markup Language (HTML). As genetics knowledge advances rapidly and requires information to be updated in a timely fashion, a different content structure is needed to facilitate information delivery. This poster presents the process of developing a dynamic database-driven Web CHI application.
1986-03-01
SRdb ... Appendix A: Abbreviations and Acronyms ... Appendix B: User's Manual ... Appendix C: Database ... percentage of situations. The purpose of this paper is to examine and propose a software-oriented alternative to the current manual, instruction-driven ... Department Customer Service Manual [Ref. 1] and the applicable NPS Comptroller instruction [Ref. 2]. Several modifications to these written guidelines
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages commercial Oracle software for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third-normal-form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event-logging system with the ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Kurt H.; McCurdy, C. William; Orlando, Thomas M.
2000-09-01
This report is based largely on presentations and discussions at two workshops and contributions from workshop participants. The workshop on Fundamental Challenges in Electron-Driven Chemistry was held in Berkeley, October 9-10, 1998, and addressed questions regarding theory, computation, and simulation. The workshop on Electron-Driven Processes: Scientific Challenges and Technological Opportunities was held at Stevens Institute of Technology, March 16-17, 2000, and focused largely on experiments. Electron-molecule and electron-atom collisions initiate and drive almost all the relevant chemical processes associated with radiation chemistry, environmental chemistry, stability of waste repositories, plasma-enhanced chemical vapor deposition, plasma processing of materials for microelectronic devices and other applications, and novel light sources for research purposes (e.g. excimer lamps in the extreme ultraviolet) and in everyday lighting applications. The life sciences are a rapidly advancing field where the important role of electron-driven processes is only now beginning to be recognized. Many of the applications of electron-initiated chemical processes require results in the near term. A large-scale, multidisciplinary and collaborative effort should be mounted to solve these problems in a timely way so that their solution will have the needed impact on the urgent questions of understanding the physico-chemical processes initiated and driven by electron interactions.
The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system
Zerkin, V. V.; Pritychenko, B.
2018-02-04
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.
The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system
NASA Astrophysics Data System (ADS)
Zerkin, V. V.; Pritychenko, B.
2018-04-01
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.
Metaphors and Meaning: Principals' Perceptions of Teacher Evaluation Implementation
ERIC Educational Resources Information Center
Derrington, Mary Lynne
2013-01-01
This Southeastern state was awarded one of the first two Race to the Top (RTTT) grants funded by the U.S. Department of Education. A key piece of the state's winning application was a legislative mandate to implement an intensive, quantitative, accountability-driven teacher evaluation system beginning with the 2011-2012 school year. The new law…
NASA Astrophysics Data System (ADS)
Skotniczny, Zbigniew
1989-12-01
The Query by Forms (QbF) system is a user-oriented interactive tool for querying large relational databases with minimal query-definition cost. The system was developed under the assumption that the user's time and effort in defining needed queries is the most severe bottleneck. The system may be applied to any Rdb/VMS database system and is recommended for specific information systems of any project where end-user queries cannot be foreseen. The tool is dedicated to specialists in an application domain who have to analyze data maintained in a database from any needed point of view and who do not need to know commercial database languages. The paper presents the system as developed, a compromise between functionality and usability. User-system communication via a menu-driven, "tree-like" structure of screen forms, which produces a query definition and execution, is discussed in detail. Output of query results (printed reports and graphics) is also discussed. Finally, the paper shows one application of QbF to the HERA project.
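The core translation — from a filled-in screen form to an executable query — might look like the sketch below. It emits parameterized SQL from the user's field selections and constraints; the table and field names are invented, and this is not the original QbF code.

```python
# Sketch of a form-to-query translation in the QbF spirit: the user picks
# fields and fills constraints on a screen form; the tool emits SQL.
def form_to_sql(table, selected_fields, constraints):
    cols = ", ".join(selected_fields) or "*"
    where = " AND ".join(f"{f} {op} ?" for f, op, _ in constraints)
    sql = f"SELECT {cols} FROM {table}"
    params = [v for _, _, v in constraints]
    if where:
        sql += f" WHERE {where}"
    return sql, params

sql, params = form_to_sql(
    "magnets", ["name", "field_strength"],
    [("field_strength", ">", 4.5), ("status", "=", "installed")])
print(sql, params)  # parameterized, so user input never enters the SQL text
```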
Enhanced DIII-D Data Management Through a Relational Database
NASA Astrophysics Data System (ADS)
Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.
2000-10-01
A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means of viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
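A cross-shot query of the kind described here can be shown with an in-memory SQLite stand-in; the shot_summary table and its columns are hypothetical, not the actual DIII-D schema.

```python
# Illustration of a cross-shot query against a summary table: one SQL
# statement mines many shots instead of reading each shot's files.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shot_summary (shot INTEGER, beta_n REAL, ip_ma REAL)")
con.executemany("INSERT INTO shot_summary VALUES (?, ?, ?)",
                [(100001, 2.1, 1.2), (100002, 3.4, 1.5), (100003, 2.9, 1.4)])

rows = con.execute(
    "SELECT shot, beta_n FROM shot_summary WHERE beta_n > ? ORDER BY beta_n DESC",
    (2.5,)).fetchall()
print(rows)
```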
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
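For reference, the criterion that gives BICEPS its name is the standard Bayesian information criterion, BIC = k ln n - 2 ln L. The sketch below computes it for two hypothetical explanations of a spectrum; it illustrates only the generic criterion, not the BICEPS scoring code, and the numbers are invented.

```python
# Standard Bayesian information criterion: BIC = k * ln(n) - 2 * ln(L),
# with k free parameters, n observations, and L the maximized likelihood.
# Lower BIC wins. Values below are invented for illustration.
import math

def bic(log_likelihood, n_params, n_obs):
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# A model allowing sequence mutations (more parameters) must improve the
# likelihood enough to beat the simpler database-only explanation:
print(bic(log_likelihood=-120.0, n_params=3, n_obs=500))
print(bic(log_likelihood=-112.0, n_params=6, n_obs=500))
```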
Integration of the NRL Digital Library.
ERIC Educational Resources Information Center
King, James
2001-01-01
The Naval Research Laboratory (NRL) Library has identified six primary areas that need improvement: infrastructure, InfoWeb, TORPEDO Ultra, journal data management, classified data, and linking software. It is rebuilding InfoWeb and TORPEDO Ultra as database-driven Web applications, upgrading the STILAS library catalog, and creating other support…
NASA Technical Reports Server (NTRS)
Steeman, Gerald; Connell, Christopher
2000-01-01
Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from managing time to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process, including setting up Database Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.
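The DSN-based access pattern mentioned above can be shown from Python with the pyodbc package (the authors worked in ASP/VBScript instead); the DSN name, table, and column names below are examples, not the IDA library's actual setup.

```python
# DSN-based access to an Access database via ODBC. A DSN configured in
# the ODBC control panel lets applications connect without hard-coding a
# file path. DSN, table, and columns here are illustrative.
import pyodbc  # pip install pyodbc

con = pyodbc.connect("DSN=LibraryIntranet")
cur = con.cursor()
cur.execute("SELECT title, url FROM resources WHERE category = ?", "journals")
for title, url in cur.fetchall():
    print(title, url)
con.close()
```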
Relational Database Design of a Shipboard Ammunition Inventory, Requisitioning, and Reporting System
1990-06-01
history of transactions affecting the status or quantity of that NIIN. Information on the current inventory balance is obtained from this section of...Number * Julian Date of Transaction * Activity Classification Code (ACC) * NALC * NIIN * Condition Code * Beginning Balance * Serial Number (if applicable...Ending Balance * Remarks As with the inventory information, ATR format varies with the type of control (Material Condition Code) applicable to that
Data-driven discovery of new Dirac semimetal materials
NASA Astrophysics Data System (ADS)
Yan, Qimin; Chen, Ru; Neaton, Jeffrey
In recent years, a significant amount of materials property data from high-throughput computations based on density functional theory (DFT), together with the application of database technologies, has enabled the rise of data-driven materials discovery. In this work, we extend the data-driven materials discovery framework to the realm of topological semimetals in order to accelerate the discovery of novel Dirac semimetals. We implement currently available workflows, and develop new ones, to data-mine the Materials Project database for novel Dirac semimetals with desirable band structures and symmetry-protected topological properties. This data-driven effort relies on the successful development of several automatic data generation and analysis tools, including a workflow for the automatic identification of topological invariants and pattern recognition techniques to find specific features in a massive number of computed band structures. Using this approach, we identified more than 15 novel Dirac point and Dirac nodal line systems that had not previously been theoretically predicted or experimentally identified. This work is supported by the Materials Project Predictive Modeling Center through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231.
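A hedged sketch of the data-mining step, using the pymatgen client for the Materials Project (legacy MPRester interface). The material ids are arbitrary examples, and a vanishing band gap is only a coarse first filter: confirming a Dirac semimetal requires the band-crossing and topological-invariant analysis the abstract describes.

```python
# Coarse first-pass screen of Materials Project band structures via the
# legacy pymatgen MPRester client. Zero gap is necessary but far from
# sufficient for a Dirac semimetal; further analysis follows.
from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:        # key from materialsproject.org
    for mp_id in ["mp-149", "mp-570874"]:    # example material ids
        bs = mpr.get_bandstructure_by_material_id(mp_id)
        gap = bs.get_band_gap()
        if gap["energy"] == 0.0:             # metallic/semimetallic candidate
            print(mp_id, "has no gap; candidate for Dirac-crossing analysis")
```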
Advanced instrumentation for QELS experiments
NASA Technical Reports Server (NTRS)
Tscharnuter, Walther; Weiner, Bruce; Thomas, John
1989-01-01
Quasi Elastic Light Scattering (QELS) experiments have become an important tool in both research and quality control applications during the past 25 years. From the crude beginnings employing mechanically driven spectrum analyzers, an impressive array of general purpose digital correlators and special purpose particle sizers is now commercially available. The principles of QELS experiments are reviewed, their advantages and disadvantages are discussed and new instrumentation is described.
Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application
Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H
2017-01-01
Background The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894
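The context-driven behavior — form fields shown or hidden based on pre-established dependencies declared in XML — can be sketched as follows. The element and attribute names are invented for illustration and are not the study's actual schema.

```python
# Toy context-driven XML form: the spec declares fields and a dependency,
# and the code decides which fields to show for the current answers.
import xml.etree.ElementTree as ET

SPEC = """
<form name="eye_exam">
  <field id="abnormality" type="choice" options="none,cataract,glaucoma"/>
  <field id="cataract_grade" type="choice" options="1,2,3"
         depends_on="abnormality" show_when="cataract"/>
</form>
"""

def visible_fields(spec_xml, answers):
    fields = []
    for f in ET.fromstring(spec_xml).findall("field"):
        dep = f.get("depends_on")
        if dep is None or answers.get(dep) == f.get("show_when"):
            fields.append(f.get("id"))
    return fields

print(visible_fields(SPEC, {"abnormality": "none"}))      # hides grade field
print(visible_fields(SPEC, {"abnormality": "cataract"}))  # shows grade field
```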
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key to large-scale laser devices: their load (damage) capacity is directly related to the device's output capability, and that capacity depends on many factors. By digitizing the various factors affecting load capacity into an optical-components damage-parameters database, the system provides a scientific data basis for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage-parameter information model and database system were established. Application of the system shows that it meets the business-process and data-management requirements of optical-component damage testing; its parameters are flexible and configurable, the system is simple and easy to use, and it improves the efficiency of optical-component damage testing.
Spreadsheet Applications using VisiCalc and Lotus 1-2-3 Programs.
ERIC Educational Resources Information Center
Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.
The VisiCalc program provides visual calculation on a computer through an electronic worksheet that benefits the business user in numerous accounting and clerical procedures. The Lotus 1-2-3 program begins with VisiCalc and improves upon it by adding graphics and a database, as well as more efficient ways to manipulate and…
Evolution of the architecture of the ATLAS Metadata Interface (AMI)
NASA Astrophysics Data System (ADS)
Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to its current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle database at CERN, and an AMI backup server.
Rendic, Slobodan P; Guengerich, Frederick P
2018-01-01
The present work describes the development of offline and web-searchable metabolism databases for drugs, other chemicals, and physiological compounds in humans and model species, prompted by the large amount of data published after 1990. The intent was to provide rapid and accurate access to published data, applicable both in science and in support of therapy. Searches for the data were done using PubMed, accessing the Medline database of references and abstracts. In addition, data presented at scientific conferences (e.g., ISSX conferences) are included, covering the publishing period beginning with the year 1976. Application of the data is illustrated by the properties of benzo[a]pyrene (B[a]P) and its metabolites. Analyses show higher activity of P450 1A1 for activation of the (-)-isomer of trans-B[a]P-7,8-diol, while P450 1B1 exerts higher activity for the (+)-isomer. P450 1A2 showed equally low activity in the metabolic activation of both isomers. The information collected in the databases is applicable to the prediction of metabolic drug-drug and/or drug-chemical interactions in clinical and environmental studies. The data on the metabolism of a searched compound (exemplified by benzo[a]pyrene and its metabolites) also indicate the toxicological properties of the products of specific reactions. The offline and web-searchable databases have a wide range of applications (e.g., computer-assisted drug design and development, optimization of clinical therapy, toxicological applications), as well as adjustments in everyday lifestyle. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
The Application of Magnesium Alloys in Aircraft Interiors — Changing the Rules
NASA Astrophysics Data System (ADS)
Davis, Bruce
The commercial aircraft market is forecast to grow steadily over the next two decades. Part of this growth is driven by the desire of airlines to replace older models in their fleets with newer, more fuel-efficient designs, to realize lower operating costs and to address the rising cost of aviation fuel. As such, aircraft OEMs are beginning to set increasingly demanding mass targets on their new platforms.
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database-driven, open-source-architecture web development course. The design of a web-based curriculum faces many challenges: a) the relative emphasis of client- and server-side technologies, b) the choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
A Support Database System for Integrated System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John
2007-01-01
The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
International patent analysis of water source heat pump based on orbit database
NASA Astrophysics Data System (ADS)
Li, Na
2018-02-01
Using the Orbit database, this paper analyses the international patents of the water source heat pump (WSHP) industry with patent analysis methods such as analysis of publication trends, geographical distribution, technology leaders and top assignees. It is found that the beginning of the 21st century was a period of rapid growth in WSHP patent applications. Germany and the United States carried out research and development on WSHP early, but Japan and China have now become important countries for patent applications. China has been developing faster and faster in recent years, but its patents are concentrated in universities and urgently need to be transferred. Through an objective analysis, this paper aims to provide appropriate decision references for the development of the domestic WSHP industry.
MimoSA: a system for minimotif annotation
2010-01-01
Background Minimotifs are short peptide sequences within one protein, which are recognized by other proteins or molecules. While there are now several minimotif databases, they are incomplete. There are reports of many minimotifs in the primary literature, which have yet to be annotated, while entirely novel minimotifs continue to be published on a weekly basis. Our recently proposed function and sequence syntax for minimotifs enables us to build a general tool that will facilitate structured annotation and management of minimotif data from the biomedical literature. Results We have built the MimoSA application for minimotif annotation. The application supports management of the Minimotif Miner database, literature tracking, and annotation of new minimotifs. MimoSA enables the visualization, organization, selection and editing functions of minimotifs and their attributes in the MnM database. For the literature components, MimoSA provides paper status tracking and scoring of papers for annotation through a freely available machine learning approach, which is based on word correlation. The paper scoring algorithm is also available as a separate program, TextMine. Form-driven annotation of minimotif attributes enables entry of new minimotifs into the MnM database. Several supporting features increase the efficiency of annotation. The layered architecture of MimoSA allows for extensibility by separating the functions of paper scoring, minimotif visualization, and database management. MimoSA is readily adaptable to other annotation efforts that manually curate literature into a MySQL database. Conclusions MimoSA is an extensible application that facilitates minimotif annotation and integrates with the Minimotif Miner database. We have built MimoSA as an application that integrates dynamic abstract scoring with a high-performance relational model of minimotif syntax. MimoSA's TextMine, an efficient paper-scoring algorithm, can be used to dynamically rank papers with respect to context. PMID:20565705
The human role in space (THURIS) applications study. Final briefing
NASA Technical Reports Server (NTRS)
Maybee, George W.
1987-01-01
The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.
Schacht Hansen, M; Dørup, J
2001-01-01
The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
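A sketch of the server side of such a service, in Python rather than the PHP3 the authors used: one WML card per brand name is generated from a database row. The single-table schema stands in for the catalogue's 35 interrelated tables, and the drug entry is fictitious.

```python
# Generate a WAP WML "card" for a drug from a small SQLite table; a real
# deployment would serve the result with MIME type text/vnd.wap.wml.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drugs (brand TEXT, substance TEXT, contraindication TEXT)")
con.execute("INSERT INTO drugs VALUES ('Examplol', 'examprofen', 'renal failure')")

def drug_card(brand):
    row = con.execute(
        "SELECT substance, contraindication FROM drugs WHERE brand = ?",
        (brand,)).fetchone()
    if row is None:
        return None
    substance, contra = row
    return (f'<wml><card id="{brand}" title="{brand}">'
            f"<p>Active substance: {substance}</p>"
            f"<p>Contraindication: {contra}</p>"
            "</card></wml>")

print(drug_card("Examplol"))
```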
Hansen, Michael Schacht
2001-01-01
Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946
Database Driven 6-DOF Trajectory Simulation for Debris Transport Analysis
NASA Technical Reports Server (NTRS)
West, Jeff
2008-01-01
Debris mitigation and risk assessment have been carried out by NASA and its contractors supporting Space Shuttle Return-To-Flight (RTF). As a part of this assessment, analysis of the transport potential for debris that may be liberated from the vehicle or from pad facilities prior to tower clear (Lift-Off Debris) is being performed by MSFC. This class of debris includes plume-driven and wind-driven sources for which lift as well as drag is critical to the determination of the debris trajectory. As a result, NASA MSFC needs a debris transport, or trajectory, simulation that supports the computation of the lift effect in addition to drag, without the computational expense of fully coupled CFD with 6-DOF. A database-driven 6-DOF simulation that interpolates aerodynamic force and moment coefficients for the debris shape from a database has been developed to meet this need. The design, implementation, and verification of the database-driven six-degree-of-freedom (6-DOF) simulation addition to the Lift-Off Debris Transport Analysis (LODTA) software are discussed in this paper.
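The database-driven step can be illustrated with a simple coefficient lookup: aerodynamic coefficients tabulated against orientation are interpolated at run time instead of being recomputed by CFD. The tables and values below are invented, not LODTA data.

```python
# Interpolated lift/drag coefficient lookup for a debris shape, plus the
# resulting aerodynamic forces. Tabulated values are illustrative only.
import numpy as np

alpha_table = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])  # deg
cl_table    = np.array([0.0, 0.45, 0.80, 0.95, 0.80, 0.45, 0.0])
cd_table    = np.array([0.10, 0.18, 0.35, 0.60, 0.90, 1.10, 1.20])

def aero_coeffs(alpha_deg):
    """Interpolated lift/drag coefficients at angle of attack alpha."""
    cl = np.interp(alpha_deg, alpha_table, cl_table)
    cd = np.interp(alpha_deg, alpha_table, cd_table)
    return cl, cd

def aero_force(alpha_deg, q_dyn, area):
    cl, cd = aero_coeffs(alpha_deg)
    return q_dyn * area * cl, q_dyn * area * cd  # lift, drag (N)

print(aero_force(alpha_deg=22.0, q_dyn=500.0, area=0.05))
```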
Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing
NASA Astrophysics Data System (ADS)
Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, captured motions normally address specific needs. In an effort to adapt and reuse captured human motions in new tasks and environments and to improve the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results than goal-directed constraint-based methods found in the literature.
Design and development of a web-based application for diabetes patient data management.
Deo, S S; Deobagkar, D N; Deobagkar, Deepti D
2005-01-01
A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VB Script) and JavaScript. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representations of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system were carried out; the system at present holds records of 500 diabetes patients and has been found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analyses. It has proved to be of significant advantage to the healthcare provider as compared with the paper-based system.
Bréant, C; Borst, F; Campi, D; Griesser, V; Momjian, S
1999-01-01
The use of a controlled vocabulary set in a hospital-wide clinical information system is of crucial importance for many departmental database systems to communicate and exchange information. In the absence of an internationally recognized clinical controlled vocabulary set, a new extension of the International statistical Classification of Diseases (ICD) is proposed. It expands the scope of the standard ICD beyond diagnosis and procedures to clinical terminology. In addition, the common Clinical Findings Dictionary (CFD) further records the definition of clinical entities. The construction of the vocabulary set and the CFD is incremental and manual. Tools have been implemented to facilitate the tasks of defining/maintaining/publishing dictionary versions. The design of database applications in the integrated clinical information system is driven by the CFD which is part of the Medical Questionnaire Designer tool. Several integrated clinical database applications in the field of diabetes and neuro-surgery have been developed at the HUG.
Bréant, C.; Borst, F.; Campi, D.; Griesser, V.; Momjian, S.
1999-01-01
The use of a controlled vocabulary set in a hospital-wide clinical information system is of crucial importance for enabling departmental database systems to communicate and exchange information. In the absence of an internationally recognized clinical controlled vocabulary set, a new extension of the International Statistical Classification of Diseases (ICD) is proposed. It expands the scope of the standard ICD beyond diagnoses and procedures to clinical terminology. In addition, the common Clinical Findings Dictionary (CFD) further records the definitions of clinical entities. The construction of the vocabulary set and the CFD is incremental and manual. Tools have been implemented to facilitate the tasks of defining, maintaining, and publishing dictionary versions. The design of database applications in the integrated clinical information system is driven by the CFD, which is part of the Medical Questionnaire Designer tool. Several integrated clinical database applications in the fields of diabetes and neurosurgery have been developed at the HUG. PMID:10566451
Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph
2009-01-01
Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries, and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best, and no other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing of long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
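As a rough illustration of the classification core described above, the sketch below trains bagged decision trees on toy abstract texts and exposes the class probability estimates that could drive a confidence heat map. The texts, labels, and features are invented, and the paper's actual pipeline (training data drawn from community votes, database-driven reclassification) is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for PepBank abstracts; real training data would come from
# community votes stored in the database.
abstracts = [
    "peptide probe for tumor angiogenesis imaging",
    "novel cancer cell targeting peptide identified",
    "membrane transport kinetics in yeast",
    "structural study of a bacterial ion channel",
]
labels = [1, 1, 0, 0]   # 1 = cancer-related, 0 = not

# BaggingClassifier's default base estimator is a decision tree, so this is
# an ensemble of bagged decision trees, as in the paper.
model = make_pipeline(TfidfVectorizer(),
                      BaggingClassifier(n_estimators=50, random_state=0))
model.fit(abstracts, labels)

# Class probability estimates would drive the confidence visualization.
query = ["peptide inhibits angiogenesis in tumors"]
print(model.predict_proba(query))   # e.g. [[0.2 0.8]]
```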
2014-04-25
… EA's Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL file and generate the corresponding UML … ObjectItemStructure specification shown in Figure 10. Running this script in the relational database server MySQL creates the physical schema that …
Freeman, Kurt A; Duke, Danny C
2013-08-01
The authors assessed the effectiveness of habit reversal training (HRT) to treat a complex motor stereotypy in a healthy 3-year-old female. This data-based case study involved training parents in HRT to deliver the parent-driven intervention to the child. The frequency of the child's behaviors was estimated daily in 30-min intervals by her parents. Outcomes supported the effectiveness of the intervention, with the estimated frequency of the stereotypy decreasing from occurring during approximately 85% of recorded intervals to less than 2% over a period of 4 weeks. Further record keeping over 19 weeks suggested treatment gains were generally maintained over time. The current case study provides preliminary evidence supporting the effectiveness of modified HRT to reduce stereotypies in young children. Further, data suggest that the intervention may be extended to younger ages by teaching parents how to facilitate treatment delivery. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Building an Ontology-driven Database for Clinical Immune Research
Ma, Jingming
2006-01-01
Clinical research on immune responses usually generates a huge amount of biomedical testing data over a certain period of time. User-friendly data management systems based on relational databases will help immunologists and clinicians to fully manage these data. On the other hand, the same biological assays, such as ELISPOT and flow cytometric assays, are involved in immunological experiments regardless of the study purpose. The reuse of biological knowledge is one of the driving forces behind ontology-driven data management. An ontology-driven database will therefore help to handle different clinical immune research studies and help immunologists and clinicians easily understand each other's immunological data. We discuss some outlines for building an ontology-driven data management system for clinical immune research (ODMim). PMID:17238637
Hydrogen storage with trilithium aluminum hexahydride
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathaniel, T.A.
1998-05-14
Fuel cells have good potential to replace batteries for many applications requiring moderate, portable electric power. Applications being researched can range from cellular telephones and radios to power generators for large camps. The primary advantages of fuel cells include high power density, low temperature operation, silent operation, no poisonous exhausts, high electric efficiency, and fast start-up capability. While many commercial industries are just beginning to look at the opportunities fuel cells present, the space program has driven the development of fuel cell technology. The paper discusses the status of the fuel cell and in particular, the technology for hydrogen storage for fuel cell use.
Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application.
Peissig, Peggy; Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H
2017-09-13
The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. ©Peggy Peissig, Kelsey M Schwei, Christopher Kadolph, Joseph Finamore, Efrain Cancel, Catherine A McCarty, Asha Okorie, Kate L Thomas, Jennifer Allen Pacheco, Jyotishman Pathak, Stephen B Ellis, Joshua C Denny, Luke V Rasmussen, Gerard Tromp, Marc S Williams, Tamara R Vrabec, Murray H Brilliant. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.09.2017.
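A context-driven form of this kind can be modelled as an XML specification in which fields declare dependencies on previously entered values. The sketch below, with an invented schema and field names (the study's actual XML schema is not given in this abstract), shows how such a specification might be evaluated to decide which controls to render.

```python
import xml.etree.ElementTree as ET

# Hypothetical form specification: each field may declare a dependency on a
# previously entered value, mimicking the context-based controls described.
SPEC = """
<form name="eye_exam">
  <field id="diagnosis" type="choice" options="normal,glaucoma"/>
  <field id="iop_right" type="number" depends-on="diagnosis" when="glaucoma"/>
  <field id="iop_left"  type="number" depends-on="diagnosis" when="glaucoma"/>
</form>
"""

def active_fields(spec_xml, entered):
    """Return the fields to display given the values entered so far."""
    fields = []
    for f in ET.fromstring(spec_xml).iter("field"):
        dep = f.get("depends-on")
        if dep is None or entered.get(dep) == f.get("when"):
            fields.append(f.get("id"))
    return fields

print(active_fields(SPEC, {}))                         # ['diagnosis']
print(active_fields(SPEC, {"diagnosis": "glaucoma"}))  # adds the IOP fields
```

Because the form layout lives in data rather than code, changing the stored specification immediately changes the rendered form, which is the adaptability the paper describes.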
WebEAV: automatic metadata-driven generation of web interfaces to entity-attribute-value databases.
Nadkarni, P M; Brandt, C M; Marenco, L
2000-01-01
The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples.
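For readers unfamiliar with the EAV model, the sketch below shows the essential idea in SQLite: facts are stored as narrow (entity, attribute, value) rows, and a pivot query reconstitutes the conventional wide layout. WebEAV generates this kind of interface machinery from metadata; the table and attribute names here are invented for illustration.

```python
import sqlite3

# Minimal EAV layout: one narrow table holds (entity, attribute, value) rows
# instead of one wide column per attribute.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE eav (entity INTEGER, attribute TEXT, value TEXT);
INSERT INTO eav VALUES
 (1, 'test',   'glucose'), (1, 'result', '5.4'),
 (2, 'test',   'hba1c'),   (2, 'result', '7.1');
""")

# A generic front end pivots attributes back into columns at query time;
# frameworks like WebEAV generate this kind of SQL from metadata.
rows = db.execute("""
SELECT entity,
       MAX(CASE WHEN attribute='test'   THEN value END) AS test,
       MAX(CASE WHEN attribute='result' THEN value END) AS result
FROM eav GROUP BY entity
""").fetchall()
print(rows)   # [(1, 'glucose', '5.4'), (2, 'hba1c', '7.1')]
```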
Development of a biomarkers database for the National Children's Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobdell, Danelle T.; Mendola, Pauline
The National Children's Study (NCS) is a federally-sponsored, longitudinal study of environmental influences on the health and development of children across the United States (www.nationalchildrensstudy.gov). Current plans are to study approximately 100,000 children and their families beginning before birth up to age 21 years. To explore potential biomarkers that could be important measurements in the NCS, we compiled the relevant scientific literature to identify both routine or standardized biological markers as well as new and emerging biological markers. Although the search criteria encouraged examination of factors that influence the breadth of child health and development, attention was primarily focused on exposure, susceptibility, and outcome biomarkers associated with four important child health outcomes: autism and neurobehavioral disorders, injury, cancer, and asthma. The Biomarkers Database was designed to allow users to: (1) search the biomarker records compiled by type of marker (susceptibility, exposure or effect), sampling media (e.g., blood, urine, etc.), and specific marker name; (2) search the citations file; and (3) read the abstract evaluations relative to our search criteria. A searchable, user-friendly database of over 2000 articles was created and is publicly available at: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=85844. PubMed was the primary source of references with some additional searches of Toxline, NTIS, and other reference databases. Our initial focus was on review articles, beginning as early as 1996, supplemented with searches of the recent primary research literature from 2001 to 2003. We anticipate this database will have applicability for the NCS as well as other studies of children's environmental health.
NASA Technical Reports Server (NTRS)
Hochstadt, Jake
2011-01-01
Ruby on Rails is an open source web application framework for the Ruby programming language. The first application I built was a web application to manage and authenticate other applications. One of the main requirements for this application was a single sign-on service. This allowed authentication to be built in one location and implemented in many different applications. For example, users would be able to log in using their existing credentials and access other NASA applications without authenticating again. The second application I worked on was an internal qualification plan app. Previously, the viewing of employee qualifications was managed through Excel spreadsheets. I built a database-driven application to streamline the process of managing qualifications. Employees would be able to log in securely to view, edit and update their personal qualifications.
Handbook of the Materials Properties of FeCrAl Alloys For Nuclear Power Production Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Yukinori; Snead, Mary A.; Field, Kevin G.
FeCrAl alloys are a class of alloys that have seen increased interest for nuclear power applications including as accident tolerant fuel cladding, structural components for fast fission reactors, and as first wall and blanket structures for fusion reactors. FeCrAl alloys are under consideration for these applications due to their inherent corrosion resistance, stress corrosion cracking resistance, radiation-induced swelling resistance, and high temperature oxidation resistance. A substantial amount of research effort has been completed to design, develop, and begin commercial scaling of FeCrAl alloys for nuclear power applications over the past half century. These efforts have led to the development of an extensive database on material properties and process knowledge for FeCrAl alloys, but not within a consolidated format. The following report is the first edition of a materials handbook to consolidate the state of the art on FeCrAl alloys for nuclear power applications. This centralized database focuses solely on wrought FeCrAl alloys; oxide dispersion strengthened alloys, although discussed in brief, are not covered. Where appropriate, recommendations for application of the data are provided and current knowledge gaps are identified.
Pattern database applications from design to manufacturing
NASA Astrophysics Data System (ADS)
Zhuang, Linda; Zhu, Annie; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh
2017-03-01
Pattern-based approaches are becoming more common and popular as the industry moves to advanced technology nodes. At the beginning of a new technology node, a library of process weak-point patterns for physical and electrical verification is built up and used to prevent known hotspots from recurring in new designs. The pattern set is then expanded to create test keys for process development, in order to verify manufacturing capability and pre-check new tape-out designs for potential yield detractors. As the database grows, the adoption of pattern-based approaches expands from design flows to technology development and on to mass-production use. This paper presents the complete downstream working flows of a design pattern database (PDB). This pattern-based data analysis flow covers different applications across different functional teams: generating enhancement kits to improve design manufacturability, populating new test design data based on previous learning, generating analysis data to improve mass-production efficiency, and performing in-line control of manufacturing equipment to check machine status consistency across different fab sites.
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
Statewide Education Databases: Policy Issues. Discussion Draft.
ERIC Educational Resources Information Center
Hansen, Kenneth H.
This essay reviews current policy issues regarding statewide educational databases. It begins by defining the major characteristics of a database and raising two questions: (1) Is it really necessary to have a statewide educational database? (2) What is the primary rationale for creating one? The limitations of databases in formulating educational…
Location-Driven Image Retrieval for Images Collected by a Mobile Robot
NASA Astrophysics Data System (ADS)
Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji
Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of this visualization, an image retrieval system for such a robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is an efficient retrieval approach, named the location-driven approach, that exploits the correlation between the visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.
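The feature-location idea can be made concrete by training a classifier on vectors that concatenate visual features with world coordinates, then querying the least-certain sample, which is the standard margin-based active learning heuristic for SVMs. The sketch below uses synthetic data and is an illustration of the general approach, not the authors' exact formulation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature-location pairs: 3 visual features plus a 2-D world
# location. Relevance here correlates with location (images near x = 0 are
# 'relevant'), which a purely feature-driven classifier would miss.
X_feat = rng.normal(size=(200, 3))
X_loc = rng.uniform(-10, 10, size=(200, 2))
y = (np.abs(X_loc[:, 0]) < 3).astype(int)        # location-driven relevance

X = np.hstack([X_feat, X_loc])                   # joint feature-location space
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Active learning step: ask the user about the sample the SVM is least sure
# of, i.e. the one closest to the decision boundary.
margins = np.abs(clf.decision_function(X))
print("most informative sample:", int(np.argmin(margins)))
```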
Gonzalez, Sergio; Clavijo, Bernardo; Rivarola, Máximo; Moreno, Patricio; Fernandez, Paula; Dopazo, Joaquín; Paniego, Norma
2017-02-22
In recent years, applications based on massively parallel RNA sequencing (RNA-seq) have become valuable approaches for studying non-model species, i.e., those without a fully sequenced genome. RNA-seq is a useful tool for detecting novel transcripts and genetic variations and for evaluating differential gene expression by digital measurements. The large and complex datasets resulting from functional genomic experiments represent a challenge in data processing, management, and analysis. This problem is especially significant for small research groups working with non-model species. We developed a web-based application, called ATGC transcriptomics, with a flexible and adaptable interface that allows users to work with next-generation sequencing (NGS) transcriptomic analysis results using an ontology-driven database. This new application simplifies data exploration, visualization, and integration for a better comprehension of the results. ATGC transcriptomics provides non-expert computer users and small research groups with a scalable storage option and simple data integration, including database administration and management. The software is freely available under the terms of the GNU public license at http://atgcinta.sourceforge.net.
Move Over, Word Processors--Here Come the Databases.
ERIC Educational Resources Information Center
Olds, Henry F., Jr.; Dickenson, Anne
1985-01-01
Discusses the use of beginning, intermediate, and advanced databases for instructional purposes. A table listing seven databases with information on ease of use, smoothness of operation, data capacity, speed, source, and program features is included. (JN)
[Social inequality in medical rehabilitation].
Deck, Ruth; Hofreuter-Gätgens, Kerstin
2016-02-01
The analysis of inequalities in health care provision in Germany is of high sociopolitical relevance. For medical rehabilitation, an essential part of health care provision, only a few studies exist. Using the example of psychosomatic and orthopedic medical rehabilitation, the present article investigates how features of social inequality influence different aspects of medical rehabilitation. The database consists of a written survey on the quality assurance of medical rehabilitation in northern Germany that includes 687 patients aged between 21 and 87 years. Aspects of access to rehabilitation (e.g., the motivation for application), the process (participation in therapies) and the outcomes (e.g., subjective health and occupational risk) of rehabilitation were investigated in relation to social inequality, which was measured by means of a social class index. For the analysis, chi-squared tests, t tests and a repeated-measures analysis of variance, adjusted for sex and age, were conducted. The analyses indicate that social inequality is of minor importance for access to rehabilitation and for processes within rehabilitation. Because subjective health is unequally distributed at the beginning of rehabilitation, however, equal treatment has to be discussed critically in terms of demand-driven treatment. Distinct differences between social classes exist in rehabilitation outcomes. To reduce these differences, rehabilitation aftercare close to the individual's living environment is necessary, promoting the empowerment of vulnerable social groups in burdensome living conditions.
Sahoo, Satya S; Zhang, Guo-Qiang; Lhatoo, Samden D
2013-08-01
The epilepsy community increasingly recognizes the need for a modern classification system that can also be easily integrated with effective informatics tools. The 2010 reports by the United States President's Council of Advisors on Science and Technology (PCAST) identified informatics as a critical resource to improve quality of patient care, drive clinical research, and reduce the cost of health services. An effective informatics infrastructure for epilepsy, which is underpinned by a formal knowledge model or ontology, can leverage an ever increasing amount of multimodal data to improve (1) clinical decision support, (2) access to information for patients and their families, (3) easier data sharing, and (4) accelerate secondary use of clinical data. Modeling the recommendations of the International League Against Epilepsy (ILAE) classification system in the form of an epilepsy domain ontology is essential for consistent use of terminology in a variety of applications, including electronic health records systems and clinical applications. In this review, we discuss the data management issues in epilepsy and explore the benefits of an ontology-driven informatics infrastructure and its role in adoption of a "data-driven" paradigm in epilepsy research. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
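The pattern described, SPARQL retrieval followed by a statistical test on the result set, can be sketched in a few lines with rdflib and SciPy. The RDF data, predicate names, and the choice of a t test below are illustrative assumptions, not the Xperanto-RDF ontology itself.

```python
from rdflib import Graph
from scipy import stats

# Tiny RDF stand-in for TMA experimental data: each core has a marker score
# and an outcome class, loosely following the Xperanto-RDF pattern.
g = Graph().parse(format="turtle", data="""
@prefix : <http://example.org/tma#> .
:c1 :score 8.1 ; :outcome "poor" .  :c2 :score 7.4 ; :outcome "poor" .
:c3 :score 2.3 ; :outcome "good" .  :c4 :score 3.1 ; :outcome "good" .
""")

# The SPARQL query mirrors the semantic structure of the hypothesis:
# 'marker score differs between outcome groups'.
q = """PREFIX : <http://example.org/tma#>
SELECT ?outcome ?score WHERE { ?core :score ?score ; :outcome ?outcome . }"""
poor = [float(s) for o, s in g.query(q) if str(o) == "poor"]
good = [float(s) for o, s in g.query(q) if str(o) == "good"]

# A statistical test on the SPARQL result sets decides the hypothesis.
print(stats.ttest_ind(poor, good))
```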
DataHub: Knowledge-based data management for data discovery
NASA Astrophysics Data System (ADS)
Handley, Thomas H.; Li, Y. Philip
1993-08-01
Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, addresses the issues associated with this shortfall in technology utilization and development. The DataHub development addresses the key issues in scientific data management: scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between data suppliers and data consumers to facilitate data exchange, to assist science data analysis, and to provide a systematic approach to science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies ranging from deductive databases, semantic data models, data discovery, and knowledge representation and inferencing to exploratory data analysis techniques and modern man-machine interfaces, DataHub will provide a prototype, integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.
Surface chemistry of Au/TiO2: Thermally and photolytically activated reactions
NASA Astrophysics Data System (ADS)
Panayotov, Dimitar A.; Morris, John R.
2016-03-01
The fascinating particle-size dependence of the physical, photophysical, and chemical properties of gold has motivated thousands of studies exploring the ability of supported gold nanoparticles to catalyze chemical transformations. In particular, titanium dioxide-supported gold (Au/TiO2) nanoparticles may provide the right combination of electronic structure, structural dynamics, and stability to effect catalysis in important practical applications, from environmental remediation to selective hydrogenation to carbon monoxide oxidation. Harnessing the full potential of Au/TiO2 will require a detailed atomic-scale understanding of the thermal and photolytic processes that accompany chemical conversion. This review describes some of the unique properties exhibited by particulate gold before delving into how those properties affect chemistry on titania supports. Particular attention is given first to thermally driven reactions on single-crystal systems. The review then addresses nanoparticulate samples in an effort to begin to bridge the so-called materials gap. Building on the foundation provided by the large body of work in the field of thermal catalysis, the review describes new research into light-driven catalysis on Au/TiO2. Importantly, the reader should bear in mind throughout this review that thermal chemistry and thermal effects typically accompany photochemistry. Distinguishing between thermally driven stages of a reaction and photo-induced steps remains a significant challenge, but one that experimentalists and theorists are beginning to decipher with new approaches. Finally, a summary of several state-of-the-art studies describes how they are illuminating new frontiers in the quest to exploit Au/TiO2 as an efficient catalyst and low-energy photocatalyst.
Skillbäck, Tobias; Mattsson, Niklas; Hansson, Karl; Mirgorodskaya, Ekaterina; Dahlén, Rahil; van der Flier, Wiesje; Scheltens, Philip; Duits, Floor; Hansson, Oskar; Teunissen, Charlotte; Blennow, Kaj; Zetterberg, Henrik; Gobom, Johan
2017-10-17
We present a new, quantification-driven proteomic approach to identifying biomarkers. In contrast to the identification-driven approach, which is limited in scope to peptides identified by database searching in the first step, all MS data are considered when selecting biomarker candidates. The endopeptidome of cerebrospinal fluid from 40 Alzheimer's disease (AD) patients, 40 subjects with mild cognitive impairment, and 40 controls with subjective cognitive decline was analyzed using multiplex isobaric labeling. Spectral clustering was used to match MS/MS spectra. The top biomarker candidate cluster (215% higher in AD compared to controls, area under ROC curve = 0.96) was identified as a fragment of pleiotrophin located near the protein's C-terminus. Analysis of another cohort (n = 60 across four clinical groups) verified that the biomarker was increased in AD patients, while no change was observed in controls, Parkinson's disease, or progressive supranuclear palsy. The identification of the novel biomarker pleiotrophin 151-166 demonstrates that our quantification-driven proteomic approach is a promising method for biomarker discovery and may be universally applicable in clinical proteomics.
NASA Astrophysics Data System (ADS)
Scharberg, Maureen A.; Cox, Oran E.; Barelli, Carl A.
1997-07-01
"The Molecule of the Day" consumer chemical database has been created to allow introductory chemistry students to explore molecular structures of chemicals in household products, and to provide opportunities in molecular modeling for undergraduate chemistry students. Before class begins, an overhead transparency is displayed which shows a three-dimensional molecular structure of a household chemical, and lists relevant features and uses of this chemical. Within answers to questionnaires, students have commented that this molecular graphics database has helped them to visually connect the microscopic structure of a molecule with its physical and chemical properties, as well as its uses in consumer products. It is anticipated that this database will be incorporated into a navigational software package such as Netscape.
NASA Technical Reports Server (NTRS)
Callac, Christopher; Lunsford, Michelle
2005-01-01
The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is "smart": it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of the search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
The MR-Base platform supports systematic causal inference across the human phenome
Wade, Kaitlin H; Haberland, Valeriia; Baird, Denis; Laurin, Charles; Burgess, Stephen; Bowden, Jack; Langdon, Ryan; Tan, Vanessa Y; Yarmolinsky, James; Shihab, Hashem A; Timpson, Nicholas J; Evans, David M; Relton, Caroline; Martin, Richard M; Davey Smith, George
2018-01-01
Results from genome-wide association studies (GWAS) can be used to infer causal relationships between phenotypes, using a strategy known as 2-sample Mendelian randomization (2SMR) and bypassing the need for individual-level data. However, 2SMR methods are evolving rapidly and GWAS results are often insufficiently curated, undermining efficient implementation of the approach. We therefore developed MR-Base (http://www.mrbase.org): a platform that integrates a curated database of complete GWAS results (no restrictions according to statistical significance) with an application programming interface, web app and R packages that automate 2SMR. The software includes several sensitivity analyses for assessing the impact of horizontal pleiotropy and other violations of assumptions. The database currently comprises 11 billion single nucleotide polymorphism-trait associations from 1673 GWAS and is updated on a regular basis. Integrating data with software ensures more rigorous application of hypothesis-driven analyses and allows millions of potential causal relationships to be efficiently evaluated in phenome-wide association studies. PMID:29846171
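The 2SMR computation that MR-Base automates reduces, in its simplest form, to per-SNP Wald ratios combined by inverse-variance weighting. The sketch below shows that calculation on invented summary statistics; the real platform (and its companion R package, TwoSampleMR) adds instrument harmonization, pleiotropy-robust estimators, and the sensitivity analyses mentioned above.

```python
import numpy as np

# Hypothetical per-SNP summary statistics drawn from two GWAS: effect of
# each variant on the exposure (beta_x) and on the outcome (beta_y, se_y).
beta_x = np.array([0.12, 0.08, 0.15, 0.10])
beta_y = np.array([0.030, 0.018, 0.041, 0.024])
se_y   = np.array([0.010, 0.009, 0.012, 0.008])

# Wald ratio per instrument and inverse-variance-weighted (IVW) combination,
# the core fixed-effect 2SMR estimator.
ratio    = beta_y / beta_x
ratio_se = se_y / np.abs(beta_x)     # first-order approximation
w        = 1.0 / ratio_se**2

ivw    = np.sum(w * ratio) / np.sum(w)
ivw_se = np.sqrt(1.0 / np.sum(w))
print(f"causal estimate = {ivw:.3f} +/- {ivw_se:.3f}")
```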
On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\\infty}$ Control.
Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen
2018-04-01
In this paper, based on the adaptive critic learning technique, control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data- and event-driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game, and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in combining the data-driven learning identifier with the event-driven design formulation in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event-driven optimal control law and the time-driven worst-case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event-driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue for integrating data-based control and event-triggering mechanisms into establishing advanced adaptive critic systems.
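The essence of the event-driven element is that the controller is recomputed only when an event-triggering condition on the state error fires, rather than at every sampling instant. The following toy simulation, a simple state-feedback law on a double integrator with an invented threshold, illustrates that mechanism; the paper's adaptive critic, neural network tuning, and zero-sum game formulation are beyond this sketch.

```python
import numpy as np

def simulate(x0=np.array([1.0, -0.5]), K=np.array([1.2, 0.8]),
             dt=0.01, steps=500, threshold=0.05):
    """Event-triggered state feedback on a double-integrator plant."""
    x, x_sampled = x0.copy(), x0.copy()
    u, events = -K @ x_sampled, 0
    for _ in range(steps):
        # Event condition: recompute the control only when the gap between
        # the current state and the last-sampled state grows too large.
        if np.linalg.norm(x - x_sampled) > threshold:
            x_sampled = x.copy()
            u = -K @ x_sampled            # event-driven control update
            events += 1
        # Simple linear plant, dx = (A x + B u) dt, integrated with Euler.
        dx = np.array([x[1], u])
        x = x + dx * dt
    return x, events

x_final, n_events = simulate()
print(f"final state {x_final}, control updated {n_events} times in 500 steps")
```

The payoff visible even in this toy is the same one the paper exploits: the control law is evaluated far fewer times than a time-driven scheme would require, while the state still converges.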
Using PHP/MySQL to Manage Potential Mass Impacts
NASA Technical Reports Server (NTRS)
Hager, Benjamin I.
2010-01-01
This paper presents a new application using commercially available software to manage mass properties for spaceflight vehicles. PHP/MySQL (PHP: Hypertext Preprocessor and MySQL: My Structured Query Language) are a web scripting language and a database language commonly used in concert with each other. They open up new opportunities to develop cutting-edge mass properties tools, and in particular, tools for the management of potential mass impacts (threats and opportunities). The paper begins by providing an overview of the functions and capabilities of PHP/MySQL. The focus of this paper is on how PHP/MySQL are being used to develop an advanced "web accessible" database system for identifying and managing mass impacts on NASA's Ares I Upper Stage program, managed by the Marshall Space Flight Center. To fully describe this application, examples of the data, search functions, and views are provided to demonstrate not only the function, but also the security, ease of use, simplicity, and eye-appeal of this new application. This paper concludes with an overview of other potential mass properties applications and tools that could be developed using PHP/MySQL. The premise behind this paper is that PHP/MySQL are software tools that are easy to use and readily available for the development of cutting-edge mass properties applications. These tools are capable of providing "real-time" searching and status of an active database, automated report generation, and other capabilities to streamline and enhance mass properties management. By using PHP/MySQL, proven existing methods for managing mass properties can be adapted to present-day information technology to accelerate mass properties data gathering, analysis, and reporting, allowing mass property management to keep pace with today's fast-paced design and development processes.
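Although the paper's tool is written in PHP against MySQL, the underlying data model is easy to convey with any relational database. The sqlite3 sketch below (table and column names invented) shows a threats/opportunities ledger and the kind of "real-time" status query such an application serves.

```python
import sqlite3

# Ledger of potential mass impacts: each row is a threat (mass growth) or
# an opportunity (mass reduction) with a tracking status.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE mass_impact (
    item TEXT, kind TEXT CHECK(kind IN ('threat','opportunity')),
    delta_kg REAL, status TEXT)""")
db.executemany("INSERT INTO mass_impact VALUES (?,?,?,?)", [
    ("avionics bracket redesign", "opportunity", -3.2, "open"),
    ("added TPS thickness",       "threat",       5.0, "open"),
    ("fastener count growth",     "threat",       1.1, "accepted"),
])

# 'Real-time status' report: net open mass risk by category.
for kind, total in db.execute(
        "SELECT kind, SUM(delta_kg) FROM mass_impact "
        "WHERE status='open' GROUP BY kind"):
    print(f"{kind}: {total:+.1f} kg")
```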
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-03-01
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
University Real Estate Development Database: A Database-Driven Internet Research Tool
ERIC Educational Resources Information Center
Wiewel, Wim; Kunst, Kara
2008-01-01
The University Real Estate Development Database is an Internet resource developed by the University of Baltimore for the Lincoln Institute of Land Policy, containing over six hundred cases of university expansion outside of traditional campus boundaries. The University Real Estate Development database is a searchable collection of real estate…
GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis
NASA Astrophysics Data System (ADS)
Nass, Andrea; van Gasselt, Stephan
2015-04-01
Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such basic tools can be combined to build complex analysis tools, e.g., in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g., ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data; geologic mapping data can also be stored and accessed more efficiently using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface-roughness estimates are two common concepts of particular interest, and for both, a number of different definitions co-exist. We present an advanced toolset for the analysis of image and terrain-model data with an emphasis on extracting landing-site characteristics using established criteria. We provide working examples and particularly focus on the concepts of terrain roughness as interpreted in geomorphology and engineering studies.
Magidson, Jessica F; Roberts, Brent W; Collado-Rodriguez, Anahi; Lejuez, C W
2014-05-01
Considerable evidence suggests that personality traits may be changeable, raising the possibility that personality traits most linked to health problems can be modified with intervention. A growing body of research suggests that problematic personality traits may be altered with behavioral intervention using a bottom-up approach. That is, by targeting core behaviors that underlie personality traits with the goal of engendering new, healthier patterns of behavior that, over time, become automatized and manifest in changes in personality traits. Nevertheless, a bottom-up model for changing personality traits is somewhat diffuse and requires clearer integration of theory and relevant interventions to enable real clinical application. As such, this article proposes a set of guiding principles for theory-driven modification of targeted personality traits using a bottom-up approach, focusing specifically on targeting the trait of conscientiousness using a relevant behavioral intervention, Behavioral Activation (BA), considered within the motivational framework of expectancy value theory (EVT). We conclude with a real case example of the application of BA to alter behaviors counter to conscientiousness in a substance-dependent patient, highlighting the EVT principles most relevant to the approach and the importance and viability of a theoretically driven, bottom-up approach to changing personality traits. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
The Supernovae Analysis Application (SNAP)
NASA Astrophysics Data System (ADS)
Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca
2017-09-01
The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
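The statistical comparison at the heart of such a system can be as simple as interpolating a gridded model light curve onto the observed epochs and computing a goodness-of-fit score. The sketch below uses an invented toy light curve and a reduced chi-square statistic; SNAP's actual correlation tables and comparison software may use different metrics.

```python
import numpy as np

# Observed supernova photometry (epochs, magnitudes, uncertainties): all
# values here are invented for illustration.
t_obs = np.array([2., 5., 10., 20., 30.])           # days since explosion
m_obs = np.array([18.9, 18.2, 17.8, 18.4, 19.1])    # observed magnitudes
m_err = np.full_like(m_obs, 0.1)

# Theoretical model light curve on its own (denser) time grid; a real grid
# would come from the database of precomputed models.
t_mod = np.linspace(0., 40., 200)
m_mod = 17.8 + 0.004 * (t_mod - 12.0) ** 2          # toy parabolic light curve

# Interpolate the model onto the observed epochs and score the match.
m_interp = np.interp(t_obs, t_mod, m_mod)
chi2 = np.sum(((m_obs - m_interp) / m_err) ** 2) / (len(t_obs) - 1)
print(f"reduced chi-square = {chi2:.2f}")   # small value = good correlation
```

Scoring every model in the grid this way and ranking by the statistic is one plausible way to constrain parameters for a new event without building light curve models from scratch.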
Beginning the 21st century with advanced Automatic Parts Identification (API)
NASA Technical Reports Server (NTRS)
Schramm, Fred; Roxby, Don
1994-01-01
Under the direction of the NASA George C. Marshall Space Flight Center, Huntsville, Alabama, the development and commercialization of an advanced Automatic Parts Identification (API) system is being undertaken by Rockwell International Corporation. The new API system is based on a variable-sized, machine-readable, two-dimensional matrix symbol that can be applied directly onto most metallic and nonmetallic materials using safe, permanent marking methods. Its checkerboard-like structure is the most space-efficient of all symbologies. This high-data-density symbology can be applied to products of different material sizes and geometries using application-dependent, computer-driven marking devices. The high-fidelity markings produced by these devices can then be captured using a specially designed camera linked to any IBM-compatible computer. Applications of compressed symbology technology will reduce costs and improve quality, productivity, and processes in a wide variety of federal and commercial applications.
Marketing the pathology practice.
Berkowitz, E N
1995-07-01
Effective marketing of the pathology practice is essential in the face of an increasingly competitive market. Successful marketing begins with a market-driven planning process. As opposed to the traditional planning process used in health care organizations, a market-driven approach is externally driven. Implementing a market-driven plan also requires recognition of the definition of the service. Each market to which pathologists direct their service defines the service differently. Recognition of these different service definitions and creation of a product to meet these needs could lead to competitive advantages in the marketplace.
NASA Astrophysics Data System (ADS)
Mohan, C.
In this paper, I survey briefly some of the recent and emerging trends in hardware and software features which impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, slowly system architects and designers are beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory on transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and improvement of total cost of ownership have resulted in database appliances becoming very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.
You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong
2017-01-01
It is increasingly necessary to generate medical evidence applicable to Asian people as compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.
ERAIZDA: a model for holistic annotation of animal infectious and zoonotic diseases
Buza, Teresia M.; Jack, Sherman W.; Kirunda, Halid; Khaitsa, Margaret L.; Lawrence, Mark L.; Pruett, Stephen; Peterson, Daniel G.
2015-01-01
There is an urgent need for a unified resource that integrates trans-disciplinary annotations of emerging and reemerging animal infectious and zoonotic diseases. Such data integration will provide a valuable opportunity for epidemiologists, researchers and health policy makers to make data-driven decisions designed to improve animal health. Integrating emerging and reemerging animal infectious and zoonotic disease data from a large variety of sources into a unified open-access resource provides a stronger basis for achieving a better understanding of infectious and zoonotic diseases. We have developed a model for interlinking annotations of these diseases, which are of particular interest because of the threats they pose to animal health, human health and global health security. We demonstrated the application of this model using brucellosis, an infectious and zoonotic disease. Preliminary annotations were deposited into the VetBioBase database (http://vetbiobase.igbb.msstate.edu). This database is associated with user-friendly tools to facilitate searching, retrieving and downloading of disease-related information. Database URL: http://vetbiobase.igbb.msstate.edu PMID:26581408
Tufts Health Sciences Database: Lessons, Issues, and Opportunities.
ERIC Educational Resources Information Center
Lee, Mary Y.; Albright, Susan A.; Alkasab, Tarik; Damassa, David A.; Wang, Paul J.; Eaton, Elizabeth K.
2003-01-01
Describes a seven-year experience with developing the Tufts Health Sciences Database, a database-driven information management system that combines the strengths of a digital library, content delivery tools, and curriculum management. Identifies major effects on teaching and learning. Also addresses issues of faculty development, copyright and…
Characterizing the genetic structure of a forensic DNA database using a latent variable approach.
Kruijver, Maarten
2016-07-01
Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
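The latent variable approach can be illustrated with a small EM implementation for a mixture of subpopulations with locus-wise allele frequencies under Hardy-Weinberg equilibrium. The sketch below runs on simulated biallelic genotypes with invented frequencies; the paper's model for forensic STR profiles is richer, but the E- and M-steps have this general shape.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a 'database' of genotypes (0/1/2 copies of an allele at each locus)
# drawn from two latent subpopulations with different allele frequencies.
true_p = np.array([[0.1, 0.7, 0.3], [0.6, 0.2, 0.8]])   # K x L frequencies
z = rng.integers(0, 2, size=2000)                        # latent origin
G = rng.binomial(2, true_p[z])                           # n x L genotypes

def em(G, K=2, iters=100):
    n, L = G.shape
    p = rng.uniform(0.2, 0.8, size=(K, L))   # allele frequencies per subpop
    w = np.full(K, 1.0 / K)                  # mixture proportions
    for _ in range(iters):
        # E-step: Hardy-Weinberg binomial log-likelihood of each profile per
        # subpopulation (binomial coefficients cancel in the normalization).
        logl = (G[:, None, :] * np.log(p) +
                (2 - G[:, None, :]) * np.log(1 - p)).sum(axis=2) + np.log(w)
        r = np.exp(logl - logl.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)    # responsibilities, n x K
        # M-step: responsibility-weighted proportion and frequency updates.
        w = r.mean(axis=0)
        p = (r.T @ G) / (2 * r.sum(axis=0)[:, None])
        p = np.clip(p, 1e-6, 1 - 1e-6)       # keep logs finite
    return w, p

w, p = em(G)
print("estimated proportions:", np.round(w, 2))   # component order arbitrary
print("estimated frequencies:", np.round(p, 2))
```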
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. This database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed, and parsing them proved to be an iterative process. It was evident from the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error-prone. This paper briefly reviews the many programming tools available for processing XML, including Java, Perl and Python, and explores the mapping of relational data from our MySQL database to XML.
Vivar, Juan C; Pemu, Priscilla; McPherson, Ruth; Ghosh, Sujoy
2013-08-01
Unparalleled technological advances have fueled an explosive growth in the scope and scale of biological data and have propelled the life sciences into the realm of "Big Data" that cannot be managed or analyzed by conventional approaches. Big Data in the life sciences are driven primarily by a diverse collection of 'omics'-based technologies, including genomics, proteomics, metabolomics, transcriptomics, metagenomics, and lipidomics. Gene-set enrichment analysis is a powerful approach for interrogating large 'omics' datasets, leading to the identification of biological mechanisms associated with observed outcomes. While several factors influence the results of such analyses, the impact of the contents of pathway databases is often under-appreciated. Pathway databases often contain variously named pathways that overlap with one another to varying degrees. Ignoring such redundancies during pathway analysis can lead to several pathways being designated as significant due to high content similarity rather than truly independent biological mechanisms. Statistically, such dependencies also result in correlated p values and overdispersion, leading to biased results. We investigated the level of redundancy in multiple pathway databases and observed large discrepancies in the nature and extent of pathway overlap. This prompted us to develop the application ReCiPa (Redundancy Control in Pathway Databases) to control redundancies in pathway databases based on user-defined thresholds. Analysis of genomic and genetic datasets, using ReCiPa-generated overlap-controlled versions of KEGG and Reactome pathways, led to a reduction in redundancy among the top-scoring gene-sets and allowed for the inclusion of additional gene-sets representing possibly novel biological mechanisms. Using obesity as an example, bioinformatic analysis further demonstrated that gene-sets identified from overlap-controlled pathway databases show stronger evidence of prior association with obesity compared to pathways identified from the original databases.
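The redundancy-control idea can be demonstrated with a simple greedy merge of gene sets whose Jaccard overlap exceeds a user-defined threshold. The pathways, gene symbols, threshold, and merge rule below are illustrative only and do not reproduce ReCiPa's actual algorithm.

```python
# Toy pathway database: two signaling pathways share most of their genes,
# while the metabolic pathway is independent.
pathways = {
    "INSULIN_SIGNALING":  {"INSR", "IRS1", "PIK3CA", "AKT1", "GSK3B"},
    "PI3K_AKT_SIGNALING": {"PIK3CA", "AKT1", "GSK3B", "MTOR", "PTEN"},
    "CITRIC_ACID_CYCLE":  {"CS", "IDH1", "SDHA", "FH", "MDH2"},
}

def jaccard(a, b):
    # Overlap measure: size of intersection relative to size of union.
    return len(a & b) / len(a | b)

def merge_redundant(pathways, threshold=0.4):
    """Greedily merge gene sets whose Jaccard overlap meets the threshold."""
    items = list(pathways.items())
    merged, used = {}, set()
    for i, (name_i, genes_i) in enumerate(items):
        if name_i in used:
            continue
        group_name, group = name_i, set(genes_i)
        for name_j, genes_j in items[i + 1:]:
            if name_j not in used and jaccard(group, genes_j) >= threshold:
                group |= genes_j            # absorb the overlapping set
                group_name += "+" + name_j
                used.add(name_j)
        merged[group_name] = group
    return merged

for name, genes in merge_redundant(pathways).items():
    print(name, sorted(genes))
```

Running enrichment analysis on the merged collection avoids counting the same underlying mechanism twice, which is the source of the correlated p values and overdispersion noted above.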
Research on Ajax and Hibernate technology in the development of E-shop system
NASA Astrophysics Data System (ADS)
Yin, Luo
2011-12-01
Hibernate is an open-source object-relational mapping framework that provides lightweight object encapsulation of JDBC, letting Java programmers use object-oriented concepts to manipulate a database at will. The appearance of Ajax (asynchronous JavaScript and XML) opened the era of partial page refresh, enabling developers to build more interactive web applications. This paper illustrates the concrete application of Ajax and Hibernate to the development of an E-shop system, using them to divide the program code into relatively independent parts that nevertheless cooperate with one another. In this way, the entire program becomes easier to maintain and extend.
A radiology department intranet: development and applications.
Willing, S J; Berland, L L
1999-01-01
An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
NASA Astrophysics Data System (ADS)
Pritychenko, Boris; Hlavac, Stanislav; Schwerer, Otto; Zerkin, Viktor
2017-09-01
The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource includes numerical data sets and bibliographical information for more than 22,000 experiments since the beginning of nuclear science. Analysis, recovery, and archiving of the experimental data sets will be discussed. Examples of the recent developments in data renormalization, uploads and inverse reaction calculations for nuclear science and technology applications will be presented. The EXFOR database, updated monthly, provides essential support for nuclear data evaluation, application development and research activities. It is publicly available at the National Nuclear Data Center website http://www.nndc.bnl.gov/exfor and the International Atomic Energy Agency mirror site http://www-nds.iaea.org/exfor. This work was sponsored in part by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC.
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
A Review on State-of-the-Art Face Recognition Approaches
NASA Astrophysics Data System (ADS)
Mahmood, Zahid; Muhammad, Nazeer; Bibi, Nargis; Ali, Tauseef
Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition, and despite extensive research over the past several decades, it remains an open research problem. This is primarily due to the variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful both as a quick research overview and for practitioners choosing an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance on standard face databases are reported, and a brief critical discussion is carried out.
NASA Astrophysics Data System (ADS)
Ichii, Kazuhito; Ueyama, Masahito; Kondo, Masayuki; Saigusa, Nobuko; Kim, Joon; Alberto, Ma. Carmelita; Ardö, Jonas; Euskirchen, Eugénie S.; Kang, Minseok; Hirano, Takashi; Joiner, Joanna; Kobayashi, Hideki; Marchesini, Luca Belelli; Merbold, Lutz; Miyata, Akira; Saitoh, Taku M.; Takagi, Kentaro; Varlagin, Andrej; Bret-Harte, M. Syndonia; Kitamura, Kenzo; Kosugi, Yoshiko; Kotani, Ayumi; Kumar, Kireet; Li, Sheng-Gong; Machimura, Takashi; Matsuura, Yojiro; Mizoguchi, Yasuko; Ohta, Takeshi; Mukherjee, Sandipan; Yanagi, Yuji; Yasuda, Yukio; Zhang, Yiping; Zhao, Fenghua
2017-04-01
The lack of a standardized database of eddy covariance observations has been an obstacle for data-driven estimation of terrestrial CO2 fluxes in Asia. In this study, we developed such a standardized database using 54 sites from various databases by applying consistent postprocessing for data-driven estimation of gross primary productivity (GPP) and net ecosystem CO2 exchange (NEE). Data-driven estimation was conducted by using a machine learning algorithm, support vector regression (SVR), with remote sensing data for the 2000 to 2015 period. Site-level evaluation of the estimated CO2 fluxes shows that although performance varies across vegetation and climate classifications, 8-day GPP and NEE are reproduced (e.g., r2 = 0.73 and 0.42 for 8-day GPP and NEE). Evaluation of spatially estimated GPP with Global Ozone Monitoring Experiment 2 sensor-based Sun-induced chlorophyll fluorescence shows that monthly GPP variations at subcontinental scale were reproduced by SVR (r2 = 1.00, 0.94, 0.91, and 0.89 for Siberia, East Asia, South Asia, and Southeast Asia, respectively). Evaluation of spatially estimated NEE with net atmosphere-land CO2 fluxes of the Greenhouse Gases Observing Satellite (GOSAT) Level 4A product shows that monthly variations of these data were consistent in Siberia and East Asia, whereas inconsistency was found in South Asia and Southeast Asia. Furthermore, differences in the land CO2 fluxes from SVR-NEE and GOSAT Level 4A were partially explained by accounting for the differences in the definition of land CO2 fluxes. These data-driven estimates can provide a new opportunity to assess CO2 fluxes in Asia and to evaluate and constrain terrestrial ecosystem models.
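A minimal sketch of the data-driven estimation step, using scikit-learn's SVR on synthetic predictors standing in for remote sensing inputs; the features, target relationship, and hyperparameters are invented for illustration.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical predictors standing in for remote sensing inputs
# (e.g., vegetation index, land surface temperature, radiation).
X = rng.uniform(size=(500, 3))
gpp = 5.0 * X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.3, 500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:400], gpp[:400])
print("r^2 on held-out sites:", model.score(X[400:], gpp[400:]))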
The Impact of Data-Based Science Instruction on Standardized Test Performance
NASA Astrophysics Data System (ADS)
Herrington, Tia W.
Increased teacher accountability efforts have resulted in the use of data to improve student achievement. This study addressed teachers' inconsistent use of data-driven instruction in middle school science. Evidence of the impact of data-based instruction on student achievement and on school and district practices has been well documented by researchers. In science, less information has been available on teachers' use of data for classroom instruction. Drawing on data-driven decision making theory, the purpose of this study was to examine whether data-based instruction affected performance on the science Criterion Referenced Competency Test (CRCT) and to explore the factors that impeded its use by a purposeful sample of 12 science teachers at a data-driven school. The research questions addressed in this study included understanding: (a) the association between student performance on the science portion of the CRCT and data-driven instruction professional development, (b) middle school science teachers' perception of the usefulness of data, and (c) the factors that hindered the use of data for science instruction. This study employed a mixed methods sequential explanatory design. Data collected included 8th grade CRCT data, survey responses, and individual teacher interviews. A chi-square test revealed no improvement in the CRCT scores following the implementation of professional development on data-driven instruction (χ2(1) = .183, p = .67). Results from surveys and interviews revealed that teachers used data to inform their instruction and identified time as the major hindrance to its use. Implications for social change include the development of lesson plans that will empower science teachers to deliver data-based instruction and students to achieve identified academic goals.
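For readers unfamiliar with the statistic quoted above, a chi-square test of this kind can be run as follows; the pass/fail counts are invented and are not the study's data.

from scipy.stats import chi2_contingency

# Hypothetical pass/fail counts on the science CRCT before and after
# the data-driven-instruction professional development.
table = [[118, 82],   # before PD: [passed, did not pass]
         [124, 76]]   # after PD
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.2f}")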
A Dynamic Human Health Risk Assessment System
Prasad, Umesh; Singh, Gurmit; Pant, A. B.
2012-01-01
An online human health risk assessment system (OHHRAS) has been designed and developed in the form of a prototype database-driven system and made available for the population of India through a website – www.healthriskindia.in. OHHRAS provides three utilities: health survey, health status, and bio-calculators. The first utility, health survey, operates on a database that is built up dynamically and gives the desired output to the user on the basis of the input criteria entered into the system. The second utility, health status, provides output on the basis of a dynamic questionnaire and the selected answers, and generates health status reports based on multiple matches set as per the advice of medical experts. The third utility, bio-calculators, is very useful for scientists and researchers as an online statistical analysis tool that provides greater accuracy and saves the user's time. The whole system and database-driven website has been designed and developed using software tools (mainly PHP, MySQL, Dreamweaver, and C++) and made available publicly through a database-driven website (www.healthriskindia.in), which is very useful for researchers, academia, students, and the general public. PMID:22778520
Is Library Database Searching a Language Learning Activity?
ERIC Educational Resources Information Center
Bordonaro, Karen
2010-01-01
This study explores how non-native speakers of English think of words to enter into library databases when they begin the process of searching for information in English. At issue is whether or not language learning takes place when these students use library databases. Language learning in this study refers to the use of strategies employed by…
The Ed Tech Journey and a Future Driven by Disruptive Change
ERIC Educational Resources Information Center
Grush, Mary, Ed.
2010-01-01
In this article, the author talks about the education technology journey and a future driven by disruptive change. The author first provides a definition of disruptive change. To understand the potential for disruptive change in higher education--a disruption fueled by technology and related trends--the author begins with a look at the past and…
Flowers, Natalie L
2010-01-01
CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.
User applications driven by the community contribution framework MPContribs in the Materials Project
Huck, P.; Gunter, D.; Cholia, S.; ...
2015-10-12
This paper discusses how the MPContribs framework in the Materials Project (MP) allows user-contributed data to be shown and analyzed alongside the core MP database. The MP is a searchable database of electronic structure properties of over 65,000 bulk solid materials, which is accessible through a web-based science-gateway. We describe the motivation for enabling user contributions to the materials data and present the framework's features and challenges in the context of two real applications. These use cases illustrate how scientific collaborations can build applications with their own 'user-contributed' data using MPContribs. The Nanoporous Materials Explorer application provides a unique search interface to a novel dataset of hundreds of thousands of materials, each with tables of user-contributed values related to material adsorption and density at varying temperature and pressure. The Unified Theoretical and Experimental X-ray Spectroscopy application discusses a full workflow for the association, dissemination, and combined analyses of experimental data from the Advanced Light Source with MP's theoretical core data, using MPContribs tools for data formatting, management, and exploration. The capabilities being developed for these collaborations are serving as the model for how new materials data can be incorporated into the MP website with minimal staff overhead while giving powerful tools for data search and display to the user community.
Life sciences domain analysis model
Freimuth, Robert R; Freund, Elaine T; Schick, Lisa; Sharma, Mukesh K; Stafford, Grace A; Suzek, Baris E; Hernandez, Joyce; Hipp, Jason; Kelley, Jenny M; Rokicki, Konrad; Pan, Sue; Buckler, Andrew; Stokes, Todd H; Fernandez, Anna; Fore, Ian; Buetow, Kenneth H
2012-01-01
Objective: Meaningful exchange of information is a fundamental challenge in collaborative biomedical research. To help address this, the authors developed the Life Sciences Domain Analysis Model (LS DAM), an information model that provides a framework for communication among domain experts and technical teams developing information systems to support biomedical research. The LS DAM is harmonized with the Biomedical Research Integrated Domain Group (BRIDG) model of protocol-driven clinical research. Together, these models can facilitate data exchange for translational research. Materials and methods: The content of the LS DAM was driven by analysis of life sciences and translational research scenarios and the concepts in the model are derived from existing information models, reference models and data exchange formats. The model is represented in the Unified Modeling Language and uses ISO 21090 data types. Results: The LS DAM v2.2.1 is comprised of 130 classes and covers several core areas including Experiment, Molecular Biology, Molecular Databases and Specimen. Nearly half of these classes originate from the BRIDG model, emphasizing the semantic harmonization between these models. Validation of the LS DAM against independently derived information models, research scenarios and reference databases supports its general applicability to represent life sciences research. Discussion: The LS DAM provides unambiguous definitions for concepts required to describe life sciences research. The processes established to achieve consensus among domain experts will be applied in future iterations and may be broadly applicable to other standardization efforts. Conclusions: The LS DAM provides common semantics for life sciences research. Through harmonization with BRIDG, it promotes interoperability in translational science. PMID:22744959
Substrate-Driven Mapping of the Degradome by Comparison of Sequence Logos
Fuchs, Julian E.; von Grafenstein, Susanne; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.
2013-01-01
Sequence logos are frequently used to illustrate substrate preferences and specificity of proteases. Here, we employed the compiled substrates of the MEROPS database to introduce a novel metric for comparison of protease substrate preferences. The constructed similarity matrix of 62 proteases can be used to intuitively visualize similarities in protease substrate readout via principal component analysis and construction of protease specificity trees. Since our new metric is solely based on substrate data, we can engraft the protease tree including proteolytic enzymes of different evolutionary origin. Thereby, our analyses confirm pronounced overlaps in substrate recognition not only between proteases closely related on sequence basis but also between proteolytic enzymes of different evolutionary origin and catalytic type. To illustrate the applicability of our approach we analyze the distribution of targets of small molecules from the ChEMBL database in our substrate-based protease specificity trees. We observe a striking clustering of annotated targets in tree branches even though these grouped targets do not necessarily share similarity on protein sequence level. This highlights the value and applicability of knowledge acquired from peptide substrates in drug design of small molecules, e.g., for the prediction of off-target effects or drug repurposing. Consequently, our similarity metric allows us to map the degradome and its associated drug target network via comparison of known substrate peptides. The substrate-driven view of protein-protein interfaces is not limited to the field of proteases but can be applied to any target class where a sufficient amount of known substrate data is available. PMID:24244149
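A sketch of the substrate-driven comparison: describe each protease by a position-specific amino-acid frequency matrix derived from its substrates, compute a pairwise similarity matrix, and project it with PCA. The dimensions and random profiles are placeholders rather than MEROPS data, and plain correlation stands in for the paper's logo-based metric.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical substrate data: per protease, a position-specific frequency
# matrix over 20 amino acids and 8 subsites (P4-P4'), flattened to a vector.
n_proteases, n_positions, n_aa = 10, 8, 20
profiles = rng.dirichlet(np.ones(n_aa), size=(n_proteases, n_positions))
vectors = profiles.reshape(n_proteases, -1)

# Pairwise similarity matrix (here: correlation between profile vectors).
sim = np.corrcoef(vectors)

# Principal component analysis via SVD for a 2-D specificity map.
centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T          # (n_proteases, 2) PCA coordinates
print(sim.shape, coords.shape)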
NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL
Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...
Toward Computational Cumulative Biology by Combining Models of Biological Datasets
Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel
2014-01-01
A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations—for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database. PMID:25427176
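A toy version of the decomposition step, assuming each earlier dataset has been summarised as a signature vector and a new dataset is expressed as a non-negative combination of them; the synthetic signatures and the use of non-negative least squares are illustrative stand-ins for the paper's combination model.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
# Hypothetical "models" of earlier datasets, summarised as expression
# signatures over 1000 genes, plus a new dataset to be decomposed.
signatures = rng.normal(size=(1000, 6))        # one column per earlier model
new_dataset = (0.7 * signatures[:, 1] + 0.3 * signatures[:, 4]
               + rng.normal(0, 0.05, 1000))

weights, residual = nnls(signatures, new_dataset)
print("contribution of each earlier model:", weights.round(2))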
The utilization of neural nets in populating an object-oriented database
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed and populated in a tedious, error-prone and self-limiting way in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote sensing platforms (e.g., the Earth Observing System (EOS)) will be capable of generating data at a rate of over 300 Mb per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven knowledge-based scientific information system.
A 300-mV 220-nW event-driven ADC with real-time QRS detection for wearable ECG sensors.
Zhang, Xiaoyang; Lian, Yong
2014-12-01
This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.
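The level-crossing principle behind an event-driven ADC, and a pulse-triggered detector that flags a QRS when crossings cluster in time, can be sketched as follows. The quantisation step, window length, event count, refractory period, and toy ECG are assumptions, not the paper's circuit or the exact PUT algorithm.

import numpy as np

def level_crossing_events(signal, delta=0.1):
    """Emit (index, direction) events whenever the signal moves one
    quantisation level (delta) away from the last sampled value."""
    events, last = [], signal[0]
    for i, x in enumerate(signal):
        while x - last >= delta:
            last += delta
            events.append((i, +1))
        while last - x >= delta:
            last -= delta
            events.append((i, -1))
    return events

def pulse_triggered_qrs(events, fs, window=0.08, min_events=6):
    """Flag a QRS complex when many crossings cluster within a short
    window (a simplified stand-in for the PUT detector)."""
    beats, times = [], [t for t, _ in events]
    i = 0
    for j in range(len(times)):
        while times[j] - times[i] > window * fs:
            i += 1
        if j - i + 1 >= min_events and (not beats or times[j] - beats[-1] > 0.3 * fs):
            beats.append(times[j])           # refractory period of 0.3 s
    return beats

fs = 360                                     # MIT-BIH sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)
ecg[::360] += 1.0                            # crude R peaks once per second
print(pulse_triggered_qrs(level_crossing_events(ecg), fs))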
Data-based adjoint and H2 optimal control of the Ginzburg-Landau equation
NASA Astrophysics Data System (ADS)
Banks, Michael; Bodony, Daniel
2017-11-01
Equation-free, reduced-order methods of control are desirable when the governing system of interest is of very high dimension or the control is to be applied to a physical experiment. Two-phase flow optimal control problems, our target application, fit these criteria. Dynamic Mode Decomposition (DMD) is a data-driven method for model reduction that can be used to resolve the dynamics of very high dimensional systems and project the dynamics onto a smaller, more manageable basis. We evaluate the effectiveness of DMD-based forward and adjoint operator estimation when applied to H2 optimal control of the linear and nonlinear Ginzburg-Landau equation. Perspectives on applying the data-driven adjoint to two-phase flow control will be given. Supported by the Office of Naval Research (ONR) as part of the Multidisciplinary University Research Initiatives (MURI) Program, under Grant Number N00014-16-1-2617.
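A compact sketch of exact DMD on snapshot data, the operator-estimation step described above; the two-dimensional toy linear system stands in for Ginzburg-Landau snapshots.

import numpy as np

def dmd(X, Y, rank):
    """Exact DMD: fit a linear operator A with Y ≈ A X from snapshot pairs."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank].T
    Atilde = Ur.conj().T @ Y @ Vr / sr        # reduced-order operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vr / sr @ W                   # exact DMD modes
    return eigvals, modes

# Snapshots of a toy linear system standing in for Ginzburg-Landau data.
rng = np.random.default_rng(0)
A = np.array([[0.9, -0.2], [0.2, 0.9]])
x = rng.normal(size=2)
snaps = [x := A @ x for _ in range(50)]
data = np.array(snaps).T                      # states as columns
eigvals, modes = dmd(data[:, :-1], data[:, 1:], rank=2)
print("DMD eigenvalues:", eigvals)            # ≈ eigenvalues of A (0.9 ± 0.2i)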
Trends in basic mathematical competencies of beginning undergraduates in Ireland, 2003-2013
NASA Astrophysics Data System (ADS)
Treacy, Páraic; Faulkner, Fiona
2015-11-01
Deficiencies in beginning undergraduate students' basic mathematical skills have been an issue of concern in higher education, particularly in the past 15 years. This issue has been tracked and analysed in a number of universities in Ireland and internationally through student scores recorded in mathematics diagnostic tests. Students beginning their science-based and technology-based undergraduate courses at the University of Limerick have had their basic mathematics skills tested without any prior warning through a 40-question diagnostic test during their initial service mathematics lecture since 1998. Data gathered through this diagnostic test have been recorded in a database kept at the university and explored to track trends in the mathematical competency of these beginning undergraduates. This paper details findings from an analysis of the database between 2003 and 2013, outlining changes in the mathematical competencies of these beginning undergraduates in an attempt to determine reasons for such changes. The analysis found that the proportion of students tested through this diagnostic test who are predicted to be at risk of failing their service mathematics end-of-semester examinations increased significantly between 2003 and 2013. Furthermore, when students' performance in secondary level mathematics was controlled for, the performance of beginning undergraduates in 2013 was statistically significantly below that of the beginning undergraduates recorded 10 years previously.
The Novice User and CD-ROM Database Services. ERIC Digest.
ERIC Educational Resources Information Center
Schamber, Linda
This digest answers the following questions that beginning or novice users may have about CD-ROM (a compact disk with read-only memory) database services: (1) What is CD-ROM? (2) What databases are available? (3) Is CD-ROM difficult to use? (4) How much does CD-ROM cost? and (5) What is the future of CD-ROM? (15 references) (MES)
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source) consists of an accelerator part and a reactor part. The accelerator part employs a database, which is calculated by using PHITS, for investigating the effect related to the accelerator such as the changes of beam energy, beam diameter, void generation, and target level. This analysis method using the database may introduce some errors into dynamics calculations since the neutron source data derived from the database has some errors in fitting or interpolating procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
Data-driven indexing mechanism for the recognition of polyhedral objects
NASA Astrophysics Data System (ADS)
McLean, Stewart; Horan, Peter; Caelli, Terry M.
1992-02-01
This paper is concerned with the problem of searching large model databases. To date, most object recognition systems have concentrated on the problem of matching using simple searching algorithms. This is quite acceptable when the number of object models is small. However, in the future, general purpose computer vision systems will be required to recognize hundreds or perhaps thousands of objects and, in such circumstances, efficient searching algorithms will be needed. The problem of searching a large model database is one which must be addressed if future computer vision systems are to be at all effective. In this paper we present a method we call data-driven feature-indexed hypothesis generation as one solution to the problem of searching large model databases.
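Feature indexing of this kind can be sketched as an inverted index from features to models, with observed scene features voting for candidate hypotheses; the feature vocabulary and model database below are invented for illustration.

from collections import defaultdict

# Hypothetical model database: each polyhedral model described by features.
models = {
    "cube":    {"faces:4-sided", "right-angles", "parallel-edges"},
    "pyramid": {"faces:3-sided", "apex", "faces:4-sided"},
    "prism":   {"faces:3-sided", "parallel-edges"},
}

# Build the feature index once, offline.
index = defaultdict(set)
for name, feats in models.items():
    for f in feats:
        index[f].add(name)

def hypotheses(scene_features):
    """Rank models by how many observed features vote for them."""
    votes = defaultdict(int)
    for f in scene_features:
        for name in index.get(f, ()):
            votes[name] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])

print(hypotheses({"faces:4-sided", "parallel-edges"}))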
Storkey, J; Holst, N; Bøjer, O Q; Bigongiali, F; Bocci, G; Colbach, N; Dorner, Z; Riemens, M M; Sartorato, I; Sønderskov, M; Verschwele, A
2015-04-01
A functional approach to predicting shifts in weed floras in response to management or environmental change requires the combination of data on weed traits with analytical frameworks that capture the filtering effect of selection pressures on traits. A weed traits database (WTDB) was designed, populated and analysed, initially using data for 19 common European weeds, to begin to consolidate trait data in a single repository. The initial choice of traits was driven by the requirements of empirical models of weed population dynamics to identify correlations between traits and model parameters. These relationships were used to build a generic model, operating at the level of functional traits, to simulate the impact of increasing herbicide and fertiliser use on virtual weeds along gradients of seed weight and maximum height. The model generated 'fitness contours' (defined as population growth rates) within this trait space in different scenarios, onto which two sets of weed species, defined as common or declining in the UK, were mapped. The effect of increasing inputs on the weed flora was successfully simulated; 77% of common species were predicted to have stable or increasing populations under high fertiliser and herbicide use, in contrast with only 29% of the species that have declined. Future development of the WTDB will aim to increase the number of species covered, incorporate a wider range of traits and analyse intraspecific variability under contrasting management and environments.
Online Reference Service--How to Begin: A Selected Bibliography.
ERIC Educational Resources Information Center
Shroder, Emelie J., Ed.
1982-01-01
Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…
ERAIZDA: a model for holistic annotation of animal infectious and zoonotic diseases.
Buza, Teresia M; Jack, Sherman W; Kirunda, Halid; Khaitsa, Margaret L; Lawrence, Mark L; Pruett, Stephen; Peterson, Daniel G
2015-01-01
There is an urgent need for a unified resource that integrates trans-disciplinary annotations of emerging and reemerging animal infectious and zoonotic diseases. Such data integration will provide a valuable opportunity for epidemiologists, researchers and health policy makers to make data-driven decisions designed to improve animal health. Integrating emerging and reemerging animal infectious and zoonotic disease data from a large variety of sources into a unified open-access resource provides a stronger basis for achieving a better understanding of infectious and zoonotic diseases. We have developed a model for interlinking annotations of these diseases. These diseases are of particular interest because of the threats they pose to animal health, human health and global health security. We demonstrated the application of this model using brucellosis, an infectious and zoonotic disease. Preliminary annotations were deposited into the VetBioBase database (http://vetbiobase.igbb.msstate.edu). This database is associated with user-friendly tools to facilitate searching, retrieving and downloading of disease-related information. Database URL: http://vetbiobase.igbb.msstate.edu. © The Author(s) 2015. Published by Oxford University Press.
Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael
2014-01-01
Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years. PMID:24626233
Asynchronous Data Retrieval from an Object-Oriented Database
NASA Astrophysics Data System (ADS)
Gilbert, Jonathan P.; Bic, Lubomir
We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
13 CFR 127.604 - How will SBA process an EDWOSB or WOSB status protest?
Code of Federal Regulations, 2011 CFR
2011-01-01
... award the contract or begin performance after receipt of a protest if the contracting officer determines... to begin performance. (2) Where award was made and performance commenced before receipt of a negative... must update the Federal Procurement Data System-Next Generation (FPDS-NG) and other databases from the...
Task-Driven Dynamic Text Summarization
ERIC Educational Resources Information Center
Workman, Terri Elizabeth
2011-01-01
The objective of this work is to examine the efficacy of natural language processing (NLP) in summarizing bibliographic text for multiple purposes. Researchers have noted the accelerating growth of bibliographic databases. Information seekers using traditional information retrieval techniques when searching large bibliographic databases are often…
Near-realtime Cosmic Ray measurements for space weather applications
NASA Astrophysics Data System (ADS)
Steigies, C. T.
2013-12-01
In its FP7 program the European Commission has funded the creation of scientific databases. One successful project is the Neutron Monitor database NMDB, which provides near-realtime access to ground-based Neutron Monitor measurements. In the beginning, NMDB hosted only data from European and Asian participants, but it has recently grown to also include data from North American stations. We are currently working on providing data from Australian stations as well. With the increased coverage of stations, the accuracy of the NMDB applications that issue alerts of ground level enhancements (GLE) or predict the arrival of coronal mass ejections (CME) is constantly improving. Besides the cosmic ray community and airlines, which want to calculate radiation doses on flight routes, NMDB has also attracted users from outside the core field, for example hydrologists who compare local neutron measurements with data from NMDB to determine soil humidity. By providing access to data from 50 stations, NMDB already includes data from the majority of the currently operating stations. However, in the future we want to include data from the few remaining stations, as well as historical data from stations that have been shut down.
Sahoo, Satya S.; Zhang, Guo-Qiang; Lhatoo, Samden D.
2013-01-01
The epilepsy community increasingly recognizes the need for a modern classification system that can also be easily integrated with effective informatics tools. The 2010 reports by the United States President's Council of Advisors on Science and Technology (PCAST) identified informatics as a critical resource to improve quality of patient care, drive clinical research, and reduce the cost of health services. An effective informatics infrastructure for epilepsy, which is underpinned by a formal knowledge model or ontology, can leverage an ever increasing amount of multimodal data to improve (1) clinical decision support, (2) access to information for patients and their families, (3) easier data sharing, and (4) accelerate secondary use of clinical data. Modeling the recommendations of the International League Against Epilepsy (ILAE) classification system in the form of an epilepsy domain ontology is essential for consistent use of terminology in a variety of applications, including electronic health records systems and clinical applications. In this review, we discuss the data management issues in epilepsy and explore the benefits of an ontology-driven informatics infrastructure and its role in adoption of a “data-driven” paradigm in epilepsy research. PMID:23647220
AMP: a science-driven web-based application for the TeraGrid
NASA Astrophysics Data System (ADS)
Woitaszek, M.; Metcalfe, T.; Shorrock, I.
The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional fully-custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the Django framework's capabilities while cleanly separating the user interface development from the grid interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment.
Biopolymer Aerogels and Foams: Chemistry, Properties, and Applications.
Zhao, Shanyu; Malfait, Wim J; Guerrero-Alburquerque, Natalia; Koebel, Matthias M; Nyström, Gustav
2018-06-25
Biopolymer aerogels were among the first aerogels produced, but only in the last decade has research on biopolymer and biopolymer-composite aerogels become popular, motivated by sustainability arguments, their unique and tunable properties, and ease of functionalization. Biopolymer aerogels and open-cell foams have great potential for classical aerogel applications such as thermal insulation, as well as emerging applications in filtration, oil-water separation, CO2 capture, catalysis, and medicine. The biopolymer aerogel field today is driven forward by empirical materials discovery at the laboratory scale, but requires a firmer theoretical basis and pilot studies to close the gap to market. This Review includes a database with over 3800 biopolymer aerogel properties, evaluates the state of the biopolymer aerogel field, and critically discusses the scientific, technological, and commercial barriers to the commercialization of these exciting materials. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
AQUA-USERS: AQUAculture USEr Driven Operational Remote Sensing Information Services
NASA Astrophysics Data System (ADS)
Laanen, Marnix; Poser, Kathrin; Peters, Steef; de Reus, Nils; Ghebrehiwot, Semhar; Eleveld, Marieke; Miller, Peter; Groom, Steve; Clements, Oliver; Kurekin, Andrey; Martinez Vicente, Victor; Brotas, Vanda; Sa, Carolina; Couto, Andre; Brito, Ana; Amorim, Ana; Dale, Trine; Sorensen, Kai; Boye Hansen, Lars; Huber, Silvia; Kaas, Hanne; Andersson, Henrik; Icely, John; Fragoso, Bruno
2015-12-01
The FP7 project AQUA-USERS provides the aquaculture industry with user-relevant and timely information based on the most up-to-date satellite data and innovative optical in-situ measurements. Its key purpose is to develop an application that brings together satellite information on water quality and temperature with in-situ observations as well as relevant weather prediction and met-ocean data. The application and its underlying database are linked to a decision support system that includes a set of (user-determined) management options. Specific focus is on the development of indicators for aquaculture management including indicators for harmful algae bloom (HAB) events. The methods and services developed within AQUA-USERS are tested by the members of the user board, who represent different geographic areas and aquaculture production systems.
Bigdata Driven Cloud Security: A Survey
NASA Astrophysics Data System (ADS)
Raja, K.; Hanifa, Sabibullah Mohamed
2017-08-01
Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth in the scale of data, or big data, generated through cloud computing has been observed. CC consists of a front-end, which includes the users' computers and the software required to access the cloud network, and a back-end, which consists of the various computers, servers and database systems that create the cloud. The leading service delivery models are SaaS (Software-as-a-Service, in which end users utilize outsourced software), PaaS (Platform-as-a-Service, in which a platform is provided), IaaS (Infrastructure-as-a-Service, in which the physical environment is outsourced), and DaaS (Database-as-a-Service, in which data can be housed within a cloud); together they form a powerful and popular architecture. Many challenges and issues concern security threats, the most vital barrier for a cloud computing environment. The main barrier to the adoption of CC in health care relates to data security: when placing and transmitting data using public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC, since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment, and a model for cloud providers.
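Since the survey points to MapReduce as the canonical example of big data processing in the cloud, here is the classic word-count pattern reduced to a few lines; this is a single-process sketch of the programming model, not a distributed implementation.

from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit (word, 1) for every word in a document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # "Shuffle" and reduce in one pass: sum counts per key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["cloud security big data", "big data on the cloud"]
print(reduce_phase(chain.from_iterable(map_phase(d) for d in docs)))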
Nims, Raymond W; Sykes, Greg; Cottrill, Karin; Ikonomi, Pranvera; Elmore, Eugene
2010-12-01
The role of cell authentication in biomedical science has received considerable attention, especially within the past decade. This quality control attribute is now beginning to be given the emphasis it deserves by granting agencies and by scientific journals. Short tandem repeat (STR) profiling, one of a few DNA profiling technologies now available, is being proposed for routine identification (authentication) of human cell lines, stem cells, and tissues. The advantage of this technique over methods such as isoenzyme analysis, karyotyping, human leukocyte antigen typing, etc., is that STR profiling can establish identity to the individual level, provided that the appropriate number and types of loci are evaluated. To best employ this technology, a standardized protocol and a data-driven, quality-controlled, and publicly searchable database will be necessary. This public STR database (currently under development) will enable investigators to rapidly authenticate human-based cultures to the individual from whom the cells were sourced. Use of similar approaches for non-human animal cells will require developing other suitable loci sets. While implementing STR analysis on a more routine basis should significantly reduce the frequency of cell misidentification, additional technologies may be needed as part of an overall authentication paradigm. For instance, isoenzyme analysis, PCR-based DNA amplification, and sequence-based barcoding methods enable rapid confirmation of a cell line's species of origin while screening against cross-contaminations, especially when the cells present are not recognized by the species-specific STR method. Karyotyping may also be needed as a supporting tool during establishment of an STR database. Finally, good cell culture practices must always remain a major component of any effort to reduce the frequency of cell misidentification.
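A simplified sketch of how an STR profile comparison might be scored, counting loci at which two profiles carry the same alleles; the loci, alleles, and locus-level matching rule are illustrative simplifications of the allele-sharing metrics used in practice.

# Hypothetical 4-locus STR profiles (allele sets per locus); real
# authentication panels use 8 or more loci. Homozygous loci collapse
# to a single-allele set, e.g. {8}.
reference = {"TH01": {6, 9}, "D5S818": {11, 12}, "TPOX": {8},
             "vWA":  {16, 18}}
sample    = {"TH01": {6, 9}, "D5S818": {11, 12}, "TPOX": {8, 11},
             "vWA":  {16, 18}}

shared = sum(profile == sample.get(locus) for locus, profile in reference.items())
percent_match = 100.0 * shared / len(reference)
# A high match percentage (commonly ~80% allele sharing) is typically
# required to call two profiles the same cell line.
print(f"{shared}/{len(reference)} loci match ({percent_match:.0f}%)")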
AnaBench: a Web/CORBA-based workbench for biomolecular sequence analysis
Badidi, Elarbi; De Sousa, Cristina; Lang, B Franz; Burger, Gertraud
2003-01-01
Background: Sequence data analyses such as gene identification, structure modeling or phylogenetic tree inference involve a variety of bioinformatics software tools. Due to the heterogeneity of bioinformatics tools in usage and data requirements, scientists spend much effort on technical issues including data format, storage and management of input and output, and memorization of numerous parameters and multi-step analysis procedures. Results: In this paper, we present the design and implementation of AnaBench, an interactive, Web-based bioinformatics Analysis workBench allowing streamlined data analysis. Our philosophy was to minimize the technical effort not only for the scientist who uses this environment to analyze data, but also for the administrator who manages and maintains the workbench. With new bioinformatics tools published daily, AnaBench permits easy incorporation of additional tools. This flexibility is achieved by employing a three-tier distributed architecture and recent technologies including CORBA middleware, Java, JDBC, and JSP. A CORBA server permits transparent access to a workbench management database, which stores information about the users, their data, as well as the description of all bioinformatics applications that can be launched from the workbench. Conclusion: AnaBench is an efficient and intuitive interactive bioinformatics environment, which offers scientists application-driven, data-driven and protocol-driven analysis approaches. The prototype of AnaBench, managed by a team at the Université de Montréal, is accessible on-line at: . Please contact the authors for details about setting up a local-network AnaBench site elsewhere. PMID:14678565
First-principles data-driven discovery of transition metal oxides for artificial photosynthesis
NASA Astrophysics Data System (ADS)
Yan, Qimin
We develop a first-principles data-driven approach for rapid identification of transition metal oxide (TMO) light absorbers and photocatalysts for artificial photosynthesis using the Materials Project. Initially focusing on Cr, V, and Mn-based ternary TMOs in the database, we design a broadly-applicable multiple-layer screening workflow automating density functional theory (DFT) and hybrid functional calculations of bulk and surface electronic and magnetic structures. We further assess the electrochemical stability of TMOs in aqueous environments from computed Pourbaix diagrams. Several promising earth-abundant low band-gap TMO compounds with desirable band edge energies and electrochemical stability are identified by our computational efforts and then synergistically evaluated using high-throughput synthesis and photoelectrochemical screening techniques by our experimental collaborators at Caltech. Our joint theory-experiment effort has successfully identified new earth-abundant copper and manganese vanadate complex oxides that meet highly demanding requirements for photoanodes, substantially expanding the known space of such materials. By integrating theory and experiment, we validate our approach and develop important new insights into structure-property relationships for TMOs for oxygen evolution photocatalysts, paving the way for use of first-principles data-driven techniques in future applications. This work is supported by the Materials Project Predictive Modeling Center and the Joint Center for Artificial Photosynthesis through the U.S. Department of Energy, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231. Computational resources also provided by the Department of Energy through the National Energy Supercomputing Center.
The Design and Product of National 1:1000000 Cartographic Data of Topographic Map
NASA Astrophysics Data System (ADS)
Wang, Guizhi
2016-06-01
The National Administration of Surveying, Mapping and Geoinformation launched the project of dynamically updating the national fundamental geographic information database in 2012. Within this project, the 1:50000 database was updated once a year, and the 1:250000 database was downsized and updated in linkage with it. In 2014, the 1:1000000 digital line graph database was comprehensively updated using the latest 1:250000 database achievements. At the same time, cartographic data of topographic maps and digital elevation model data were generated. This article mainly introduces the national 1:1000000 cartographic data of topographic maps, including feature content, database structure, database-driven mapping technology, workflow, and so on.
Integrating geo web services for a user driven exploratory analysis
NASA Astrophysics Data System (ADS)
Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate
2016-04-01
In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits exploration of the data. In this paper we propose a user-need-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, a processing technique and a visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment on a health-related use case that derives the Age Standardised Rate - a dynamic index that requires integration of existing interoperable web services of demographic data in conjunction with standalone non-spatial secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services, and thus, we believe, generic in its approach.
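Because the use case derives an Age Standardised Rate, here is the direct standardisation computation in miniature; all counts and the standard population structure are invented.

# Direct age standardisation: weight each age group's observed rate by a
# standard population. Numbers are illustrative, not real health data.
age_groups = ["0-14", "15-44", "45-64", "65+"]
local_cases = [12, 40, 95, 210]
local_population = [30_000, 80_000, 50_000, 20_000]
standard_population = [25_000, 90_000, 45_000, 15_000]   # reference structure

rates = [c / p for c, p in zip(local_cases, local_population)]
asr = (sum(r * w for r, w in zip(rates, standard_population))
       / sum(standard_population))
print(f"ASR: {asr * 100_000:.1f} per 100,000")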
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, and ASIST ground systems, the Portable Spacecraft Simulator (PSS), a front-end system, and the Integrated Trending and Plotting System (ITPS) successfully. This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
CrossCheck: an open-source web tool for high-throughput screen data analysis.
Najafov, Jamil; Najafov, Ayaz
2017-07-19
Modern high-throughput screening methods allow researchers to generate large datasets that potentially contain important biological information. However, picking relevant hits from such screens and generating testable hypotheses often requires training in bioinformatics and the skills to efficiently perform database mining. There are currently no tools available to the general public that allow users to cross-reference their screen datasets with published screen datasets. To this end, we developed CrossCheck, an online platform for high-throughput screen data analysis. CrossCheck is a centralized database that allows effortless comparison of a user-entered list of gene symbols with 16,231 published datasets. These datasets include published data from genome-wide RNAi and CRISPR screens, interactome proteomics and phosphoproteomics screens, cancer mutation databases, low-throughput studies of major cell signaling mediators, such as kinases, E3 ubiquitin ligases and phosphatases, and gene ontological information. Moreover, CrossCheck includes a novel database of predicted protein kinase substrates, which was developed using proteome-wide consensus motif searches. CrossCheck dramatically simplifies high-throughput screen data analysis and enables researchers to dig deep into the published literature and streamline data-driven hypothesis generation. CrossCheck is freely accessible as a web-based application at http://proteinguru.com/crosscheck.
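The abstract does not state which statistic CrossCheck reports for an overlap; a standard way to score a user hit list against one published dataset is the hypergeometric test, sketched here with invented gene lists and an assumed background size.

```python
# Score the overlap between a user hit list and one published dataset.
from scipy.stats import hypergeom

genome_size = 20_000                      # background universe (assumption)
user_hits = {"TP53", "ATM", "CHEK2", "BRCA1", "RIPK1"}
published = {"TP53", "ATM", "BRCA1", "MDM2", "RIPK3", "CASP8"}

overlap = len(user_hits & published)
# P(overlap >= k) when len(user_hits) genes are drawn at random
p = hypergeom.sf(overlap - 1, genome_size, len(published), len(user_hits))
print(f"overlap = {overlap}, p = {p:.2e}")
```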
Scrubchem: Building Bioactivity Datasets from Pubchem ...
The PubChem Bioassay database is a non-curated public repository with data from 64 sources, including ChEMBL, BindingDb, DrugBank, EPA Tox21, the NIH Molecular Libraries Screening Program, and various other academic, government, and industrial contributors. Extracting this public data into quality datasets usable for analytical research presents several big-data challenges, for which we have designed manageable solutions. According to our preliminary work, there are approximately 549 million bioactivity values and related metadata within PubChem that can be mapped to over 10,000 biological targets. However, this data is not ready for use in data-driven research, mainly due to a lack of structured annotations. We used a pragmatic approach that provides increasing access to bioactivity values in the PubChem Bioassay database. This included restructuring individual PubChem Bioassay files into a relational database (ScrubChem). ScrubChem contains all primary PubChem Bioassay data that was: reparsed; error-corrected (when applicable); enriched with additional data links from other NCBI databases; and improved by adding key biological and assay annotations derived from logic-based language processing rules. The utility of ScrubChem and the curation process were illustrated using an example bioactivity dataset for the androgen receptor protein. This initial work serves as a trial ground for establishing the technical framework for accessing, integrating, cu
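The restructuring step can be pictured with a small relational sketch: each parsed bioassay file populates assay- and result-level tables, after which a per-target dataset is a single query. The schema, identifiers, and values below are illustrative, not ScrubChem's actual design.

```python
# Flatten bioassay records into relational tables keyed for per-target recall.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE assay  (aid INTEGER PRIMARY KEY, source TEXT, target TEXT);
CREATE TABLE result (aid INTEGER REFERENCES assay(aid),
                     cid INTEGER, activity TEXT, value_uM REAL);
""")

# One parsed bioassay file becomes rows in both tables.
conn.execute("INSERT INTO assay VALUES (?, ?, ?)",
             (101, "Tox21", "androgen receptor"))
conn.executemany("INSERT INTO result VALUES (?, ?, ?, ?)",
                 [(101, 2244, "inactive", None),
                  (101, 5757, "active", 0.13)])

# Building the androgen receptor dataset is then a single join.
rows = conn.execute("""SELECT r.cid, r.activity, r.value_uM
                       FROM result r JOIN assay a USING (aid)
                       WHERE a.target = 'androgen receptor'""").fetchall()
print(rows)
```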
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNutt, T.
Advancements in informatics in radiotherapy are opening up opportunities to improve our ability to assess treatment plans. Models on individualizing patient dose constraints from prior patient data and shape relationships have been extensively researched and are now making their way into commercial products. New developments in knowledge-based treatment planning involve understanding the impact of the radiation dosimetry on the patient. Akin to the radiobiology models that have driven intensity modulated radiotherapy optimization, toxicity and outcome predictions based on treatment plans and prior patient experiences may be the next step in knowledge-based planning. In order to realize these predictions, it is necessary to understand how the clinical information can be captured, structured and organized with ontologies and databases designed for recall. Large databases containing radiation dosimetry and outcomes present the opportunity to evaluate treatment plans against predictions of toxicity and disease response. Such evaluations can be based on the dose volume histogram or even the full 3-dimensional dose distribution and its relation to the critical anatomy. This session will provide an understanding of ontologies and standard terminologies used to capture clinical knowledge into structured databases; how data can be organized and accessed to utilize the knowledge in planning; and examples of research and clinical efforts to incorporate that clinical knowledge into planning for improved care for our patients. Learning Objectives: (1) Understand the role of standard terminologies, ontologies and data organization in oncology; (2) understand methods to capture clinical toxicity and outcomes in a clinical setting; (3) understand opportunities to learn from clinical data and its application to treatment planning. Todd McNutt receives funding from Philips, Elekta and Toshiba for some of the work presented.
Steeger, Christine M; Gondoli, Dawn M
2013-04-01
This study examined mother-adolescent conflict as a mediator of longitudinal reciprocal relations between adolescent aggression and depressive symptoms and maternal psychological control. Motivated by family systems theory and the transactions that occur between individual and dyadic levels of the family system, we examined the connections among these variables during a developmental period when children and parents experience significant psychosocial changes. Three years of self-report data were collected from 168 mother-adolescent dyads, beginning when the adolescents (55.4% girls) were in 6th grade. Models were tested using longitudinal path analysis. Results indicated that the connection between adolescent aggression (and depressive symptoms) and maternal psychological control was best characterized as adolescent-driven, indirect, and mediated by mother-adolescent conflict; there were no indications of parent-driven indirect effects. That is, prior adolescent aggression and depressive symptoms were associated with increased conflict. In turn, conflict was associated with increased psychological control. Within our mediation models, reciprocal direct effects between both problem behaviors and conflict and between conflict and psychological control were also found. Additionally, exploratory analyses regarding the role of adolescent gender as a moderator of variable relations were conducted. These analyses revealed no gender-related patterns of moderation, whether moderated mediation or specific path tests for moderation were considered. This study corroborates prior research finding support for child effects on parenting behaviors during early adolescence. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Ring waves as a mass transport mechanism in air-driven core-annular flows.
Camassa, Roberto; Forest, M Gregory; Lee, Long; Ogrosky, H Reed; Olander, Jeffrey
2012-12-01
Air-driven core-annular fluid flows occur in many situations, from lung airways to engineering applications. Here we study, experimentally and theoretically, flows where a viscous liquid film lining the inside of a tube is forced upwards against gravity by turbulent airflow up the center of the tube. We present results on the thickness and mean speed of the film and properties of the interfacial waves that develop from an instability of the air-liquid interface. We derive a long-wave asymptotic model and compare properties of its solutions with those of the experiments. Traveling wave solutions of this long-wave model exhibit evidence of different mass transport regimes: Past a certain threshold, sufficiently large-amplitude waves begin to trap cores of fluid which propagate upward at wave speeds. This theoretical result is then confirmed by a second set of experiments that show evidence of ring waves of annular fluid propagating over the underlying creeping flow. By tuning the parameters of the experiments, the strength of this phenomenon can be adjusted in a way that is predicted qualitatively by the model.
ERIC Educational Resources Information Center
Cominole, Melissa; Wheeless, Sara; Dudley, Kristin; Franklin, Jeff; Wine, Jennifer
2007-01-01
The "2004/06 Beginning Postsecondary Students Longitudinal Study (BPS:04/06)" is sponsored by the U.S. Department of Education to respond to the need for a national, comprehensive database concerning issues students may face in enrollment, persistence, progress, and attainment in postsecondary education and in consequent early rates of…
Evaluation and selection of refrigeration systems for lunar surface and space applications
NASA Technical Reports Server (NTRS)
Copeland, R. J.; Blount, T. D.; Williams, J. L.
1971-01-01
The various refrigeration machines that could be used to provide heat rejection in environmental control systems for lunar surface and spacecraft applications are evaluated, in order to select the best refrigeration machine for satisfying each individual application and the best refrigeration machine for satisfying all of the applications. The refrigeration machines considered include: (1) vapor compression cycle (work-driven); (2) vapor adsorption cycle (heat-driven); (3) vapor absorption cycle (heat-driven); (4) thermoelectric (electrically-driven); (5) gas cycle (work-driven); (6) steam-jet (heat-driven).
Duda, Jeffrey J.; Wieferich, Daniel J.; Bristol, R. Sky; Bellmore, J. Ryan; Hutchison, Vivian B.; Vittum, Katherine M.; Craig, Laura; Warrick, Jonathan A.
2016-08-18
The removal of dams has recently increased over historical levels due to aging infrastructure, changing societal needs, and modern safety standards rendering some dams obsolete. Where the possibilities for river restoration, or improved safety, exceed the benefits of retaining a dam, removal is more often being considered as a viable option. Yet, as this is a relatively new development in the history of river management, science is just beginning to guide our understanding of the physical and ecological implications of dam removal. Ultimately, the "lessons learned" from previous scientific studies on the outcomes of dam removal could inform future scientific understanding of ecosystem outcomes, as well as aid in decision-making by stakeholders. We created a database visualization tool, the Dam Removal Information Portal (DRIP), to display map-based, interactive information about the scientific studies associated with dam removals. Serving both as a bibliographic source and as a link to other existing databases like the National Hydrography Dataset, the derived National Dam Removal Science Database serves as the foundation for a Web-based application that synthesizes the existing scientific studies associated with dam removals. Thus, using the DRIP application, users can explore information about completed dam removal projects (for example, their location, height, and date removed), as well as discover the sources and details of associated scientific studies. As such, DRIP is intended to be a dynamic collection of scientific information related to dams that have been removed in the United States and elsewhere. This report describes the architecture and concepts of this "metaknowledge" database and the DRIP visualization tool.
Spiritual leadership at the workplace: Perspectives and theories
Meng, Yishuang
2016-01-01
Leadership has always been an area of interest since time immemorial. Nevertheless, scientific theories regarding leadership started to appear only from the beginning of the 20th century. Modern theories of leadership such as strategic leadership theory emerged as early as the 1980s when outdated theories of behavioral contingency were questioned, resulting in the beginning of a shift in focus leading to the emergence of modern theories hypothesizing the importance of vision, motivation and value-based control of clan and culture. Value-driven clan control emphasizes the importance of the role played by employees in a rapidly changing work environment. Therefore, the 21st century marked the rise of the need to establish a culture driven by values, inspiring the workforce to struggle and strongly seek a shared vision. This can be accomplished by an effective and motivating leadership. PMID:27699006
NASA Astrophysics Data System (ADS)
Adrian, Brian; Zollman, Dean; Stevens, Scott
2006-02-01
To demonstrate how state-of-the-art video databases can address issues related to the lack of preparation of many physics teachers, we have created the prototype Physics Teaching Web Advisory (Pathway). Pathway's Synthetic Interviews and related video materials are beginning to provide pre-service and out-of-field in-service teachers with much-needed professional development and well-prepared teachers with new perspectives on teaching physics. The prototype was limited to a demonstration of the systems. Now, with an additional grant we will extend the system and conduct research and evaluation on its effectiveness. This project will provide virtual expert help on issues of pedagogy and content. In particular, the system will convey, by example and explanation, contemporary ideas about the teaching of physics and applications of physics education research. The research effort will focus on the value of contemporary technology to address the continuing education of teachers who are teaching in a field in which they have not been trained.
eHealth Networking Information Systems - The New Quality of Information Exchange.
Messer-Misak, Karin; Reiter, Christoph
2017-01-01
The development and introduction of platforms that enable interdisciplinary exchange on current developments and projects in the area of eHealth have been stimulated by different authorities. The aim of this project was to develop a repository of eHealth projects that will make the wealth of eHealth projects visible and enable mutual learning through the sharing of experiences and good practice. The content of the database and the search criteria, as well as their categories, were determined in close coordination and cooperation with stakeholders from the specialist areas. Technically, we used JavaServer Faces (JSF) to implement the frontend of the web application. Access to structured information on projects can support stakeholders in combining skills and knowledge residing in different places to create new solutions and approaches within a network of evolving competencies and opportunities. A regional database is the beginning of a structured collection and presentation of projects, which can then be incorporated into a broader context. The next step will be to unify this information transparently.
Unraveling the Web of Viroinformatics: Computational Tools and Databases in Virus Research
Priyadarshini, Pragya; Vrati, Sudhanshu
2014-01-01
The beginning of the second century of research in the field of virology (the first virus was discovered in 1898) was marked by its amalgamation with bioinformatics, resulting in the birth of a new domain—viroinformatics. The availability of more than 100 Web servers and databases embracing all or specific viruses (for example, dengue virus, influenza virus, hepatitis virus, human immunodeficiency virus [HIV], hemorrhagic fever virus [HFV], human papillomavirus [HPV], West Nile virus, etc.) as well as distinct applications (comparative/diversity analysis, viral recombination, small interfering RNA [siRNA]/short hairpin RNA [shRNA]/microRNA [miRNA] studies, RNA folding, protein-protein interaction, structural analysis, and phylotyping and genotyping) will definitely aid the development of effective drugs and vaccines. However, information about their access and utility is not available at any single source or on any single platform. Therefore, a compendium of various computational tools and resources dedicated specifically to virology is presented in this article. PMID:25428870
Unraveling the web of viroinformatics: computational tools and databases in virus research.
Sharma, Deepak; Priyadarshini, Pragya; Vrati, Sudhanshu
2015-02-01
The beginning of the second century of research in the field of virology (the first virus was discovered in 1898) was marked by its amalgamation with bioinformatics, resulting in the birth of a new domain--viroinformatics. The availability of more than 100 Web servers and databases embracing all or specific viruses (for example, dengue virus, influenza virus, hepatitis virus, human immunodeficiency virus [HIV], hemorrhagic fever virus [HFV], human papillomavirus [HPV], West Nile virus, etc.) as well as distinct applications (comparative/diversity analysis, viral recombination, small interfering RNA [siRNA]/short hairpin RNA [shRNA]/microRNA [miRNA] studies, RNA folding, protein-protein interaction, structural analysis, and phylotyping and genotyping) will definitely aid the development of effective drugs and vaccines. However, information about their access and utility is not available at any single source or on any single platform. Therefore, a compendium of various computational tools and resources dedicated specifically to virology is presented in this article. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
A data-driven modeling approach to stochastic computation for low-energy biomedical devices.
Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen
2011-01-01
Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10^-2 for logic) that cause computational bit error rates as high as 50%.
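A toy version of the paper's central idea, under heavy simplification: if the classifier is trained on features that have already passed through faulty storage, the model absorbs the error statistics. Synthetic Gaussian features stand in for the EEG/ECG features of the study, and the fault model and rates below are illustrative.

```python
# Train and evaluate a classifier on features corrupted by SRAM-style bit flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sram_bit_flips(x_uint8, p):
    """Flip each stored bit independently with probability p."""
    flips = np.zeros_like(x_uint8)
    for b in range(8):
        flips |= (rng.random(x_uint8.shape) < p).astype(np.uint8) << b
    return x_uint8 ^ flips

# Synthetic two-class features, quantized to 8-bit storage.
y = np.repeat([0, 1], 500)
X = np.clip(rng.normal(118 + 24 * y[:, None], 18, (1000, 6)), 0, 255).astype(np.uint8)

p_fault = 0.05                              # per-bit error probability
clf = LogisticRegression(max_iter=1000).fit(
    sram_bit_flips(X[::2], p_fault).astype(float), y[::2])
acc = clf.score(sram_bit_flips(X[1::2], p_fault).astype(float), y[1::2])
print(f"accuracy under {p_fault:.0%} bit faults: {acc:.2f}")
```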
Data-driven exploration of copper mineralogy and its application to Earth's near-surface oxidation
NASA Astrophysics Data System (ADS)
Morrison, S. M.; Eleish, A.; Runyon, S.; Prabhu, A.; Fox, P. A.; Ralph, J.; Golden, J. J.; Downs, R. T.; Liu, C.; Meyer, M.; Hazen, R. M.
2017-12-01
Earth's atmospheric composition has changed radically throughout geologic history. The oxidation of our atmosphere, driven by biology, began with the Great Oxidation Event (GOE) 2.5 Ga and has heavily influenced Earth's near-surface mineralogy. Therefore, temporal trends in mineral occurrence elucidate large- and small-scale geologic and biologic processes. Cu, and other first-row transition elements, are of particular interest due to their variation in valence state and sensitivity to ƒO2. Widespread formation of oxidized Cu mineral species (Cu2+) would not have been possible prior to the GOE, and we have found that the proportion of oxidized Cu minerals increased steadily with the increase in atmospheric O2 at Earth's surface (see Fig. 1). To better characterize the changes in Cu mineralogy through time, we have employed advanced analytical and visualization methods. These techniques rely on large and growing mineral databases (e.g., rruff.info, mindat.org, earthchem.org, usgs.gov) and allow us to quantify and visualize multi-dimensional trends.
USER'S GUIDE FOR GLOED VERSION 1.0 - THE GLOBAL EMISSIONS DATABASE
The document is a user's guide for the EPA-developed, powerful software package, Global Emissions Database (GloED). GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on a country-specific basis. Data can be selected from dat...
The BioMart community portal: an innovative alternative to large, centralized data repositories
USDA-ARS?s Scientific Manuscript database
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...
ERIC Educational Resources Information Center
Williamson, Ben
2015-01-01
This article examines the emergence of "digital governance" in public education in England. Drawing on and combining concepts from software studies, policy and political studies, it identifies some specific approaches to digital governance facilitated by network-based communications and database-driven information processing software…
Communication Lower Bounds and Optimal Algorithms for Programs that Reference Arrays - Part 1
2013-05-14
include tensor contractions, the direct N-body algorithm, and database join. [Footnote 1: This indicates that this is the first of 5 times that matrix multiplication ... and database join.] Section 8 summarizes our results, and outlines the contents of Part 2 of this paper. Part 2 will discuss how to compute lower ... contractions, the direct N-body algorithm, database join, and computing matrix powers A^k. [Section 2, Geometric Model:] We begin by reviewing the geometric
Machine learning in materials informatics: recent applications and prospects
NASA Astrophysics Data System (ADS)
Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam; Mannodi-Kanakkithodi, Arun; Kim, Chiho
2017-12-01
Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as "descriptors", may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven "materials informatics" strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.
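The fingerprint-to-property mapping described above can be sketched in a few lines: fingerprint vectors go in, and a kernel learner interpolates the property. Fingerprints and "measured" values here are synthetic stand-ins for real descriptors and data.

```python
# Learn a mapping from material fingerprints (descriptors) to a property.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fingerprints = rng.random((120, 8))       # one descriptor vector per material
prop = fingerprints @ rng.random(8) + 0.1 * rng.standard_normal(120)

X_tr, X_te, y_tr, y_te = train_test_split(fingerprints, prop, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```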
Estimating a Service-Life Distribution Based on Production Counts and a Failure Database
Ryan, Kenneth J.; Hamada, Michael Scott; Vardeman, Stephen B.
2017-04-01
A manufacturer wanted to compare the service-life distributions of two similar products. These concern product lifetimes after installation (not manufacture). For each product, there were available production counts and an imperfect database providing information on failing units. In the real case, these units were expensive repairable units warrantied against repairs. Failure (of interest here) was relatively rare and driven by a different mode/mechanism than ordinary repair events (not of interest here). Approach: Data models for the service life based on a standard parametric lifetime distribution and a related limited failure population were developed. These models were used to develop expressions for the likelihood of the available data that properly accounts for information missing in the failure database. Results: A Bayesian approach was employed to obtain estimates of model parameters (with associated uncertainty) in order to investigate characteristics of the service-life distribution. Custom software was developed and is included as Supplemental Material to this case study. One part of a responsible approach to the original case was a simulation experiment used to validate the correctness of the software and the behavior of the statistical methodology before using its results in the application, and an example of such an experiment is included here. Because of confidentiality issues that prevent use of the original data, simulated data with characteristics like the manufacturer's proprietary data are used to illustrate some aspects of our real analyses. Lastly, we also note that, although this case focuses on rare and complete product failure, the statistical methodology provided is directly applicable to more standard warranty data problems involving typically much larger warranty databases where entries are warranty claims (often for repairs) rather than reports of complete failures.
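As a rough illustration of the likelihood structure, the sketch below fits a Weibull limited-failure-population model by maximum likelihood rather than the case study's Bayesian machinery: only a fraction pi of the N produced units is susceptible, and units without a database entry are treated as censored at a common observation age. All numbers are simulated, not the manufacturer's data.

```python
# MLE for a Weibull limited-failure-population model from production counts
# plus a failure database, under the simplifying assumptions noted above.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

N, tau = 10_000, 5.0                     # production count, years observed
rng = np.random.default_rng(2)
n_sus = rng.binomial(N, 0.02)            # truly susceptible units
t = weibull_min.rvs(1.8, scale=3.0, size=n_sus, random_state=rng)
t = t[t < tau]                           # only failures in the window are recorded

def nll(theta):
    pi, c, scale = theta
    if not (0 < pi < 1 and c > 0 and scale > 0):
        return np.inf
    F_tau = weibull_min.cdf(tau, c, scale=scale)
    return -(t.size * np.log(pi)
             + weibull_min.logpdf(t, c, scale=scale).sum()
             + (N - t.size) * np.log(1 - pi * F_tau))

fit = minimize(nll, x0=[0.01, 1.0, 2.0], method="Nelder-Mead")
print("MLE (pi, shape, scale):", np.round(fit.x, 3))
```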
Development of a functional, internet-accessible department of surgery outcomes database.
Newcomb, William L; Lincourt, Amy E; Gersin, Keith; Kercher, Kent; Iannitti, David; Kuwada, Tim; Lyons, Cynthia; Sing, Ronald F; Hadzikadic, Mirsad; Heniford, B Todd; Rucho, Susan
2008-06-01
The need for surgical outcomes data is increasing due to pressure from insurance companies and patients, and the need for surgeons to keep their own "report card". Current data management systems are limited by an inability to stratify outcomes based on patients, surgeons, and differences in surgical technique. Surgeons, along with research and informatics personnel from an academic, hospital-based Department of Surgery and a state university's Department of Information Technology, formed a partnership to develop a dynamic, internet-based clinical data warehouse. A five-component model was used: data dictionary development, web application creation, participating-center education and management, statistics applications, and data interpretation. A data dictionary was developed from a list of data elements to address the needs of research, quality assurance, industry, and centers of excellence. A user-friendly web interface was developed with menu-driven check boxes, multiple electronic data entry points, direct downloads from hospital billing information, and web-based patient portals. Data were collected on a Health Insurance Portability and Accountability Act-compliant server with a secure firewall. Protected health information was de-identified. Data management strategies included automated auditing, on-site training, a trouble-shooting hotline, and Institutional Review Board oversight. Real-time, daily, monthly, and quarterly data reports were generated. Fifty-eight publications and 109 abstracts have been generated from the database during its development and implementation. Seven national academic departments now use the database to track patient outcomes. The development of a robust surgical outcomes database requires a combination of clinical, informatics, and research expertise. The benefits of surgeon involvement in outcomes research include: tracking individual performance, patient safety, surgical research, legal defense, and the ability to provide accurate information to patients and payers.
Odronitz, Florian; Kollmar, Martin
2006-11-29
Annotation of protein sequences of eukaryotic organisms is crucial for the understanding of their function in the cell. Manual annotation is still by far the most accurate way to correctly predict genes. The classification of protein sequences, their phylogenetic relation and the assignment of function involves information from various sources. This often leads to a collection of heterogeneous data, which is hard to track. Cytoskeletal and motor proteins consist of large and diverse superfamilies comprising up to several dozen members per organism. To date, there is no integrated tool available to assist in the manual large-scale comparative genomic analysis of protein families. Pfarao (Protein Family Application for Retrieval, Analysis and Organisation) is a database-driven online working environment for the analysis of manually annotated protein sequences and their relationships. Currently, the system can store and interrelate a wide range of information about protein sequences, species, phylogenetic relations and sequencing projects, as well as links to literature and domain predictions. Sequences can be imported from multiple sequence alignments that are generated during the annotation process. A web interface allows users to conveniently browse the database and to compile tabular and graphical summaries of its content. We implemented a protein sequence-centric web application to store, organize, interrelate, and present heterogeneous data that is generated in manual genome annotation and comparative genomics. The application has been developed for the analysis of cytoskeletal and motor proteins (CyMoBase) but can easily be adapted for any protein.
Use of Knowledge Bases in Education of Database Management
ERIC Educational Resources Information Center
Radványi, Tibor; Kovács, Emod
2008-01-01
In this article we present a segment of the Sulinet Digital Knowledgebase curriculum system in which you can find the sections of subject-matter that aid in teaching database management. You can follow the order of the course from the beginning, when some topics appear and are raised in elementary school, through the topics accomplished in secondary…
ERIC Educational Resources Information Center
Ripoll, C. Lopez Cerdan; And Others
This paper describes the development by the Mexican Electric Power Research Institute (Instituto de Investigaciones Electricas or IIE) over a 10-year period of a publications and conferences database (PCDB) of research and development output of the institute. The paper begins by listing the objectives of the database and describing data coverage…
A Logistic Approach to Predicting Student Success in Online Database Courses
ERIC Educational Resources Information Center
Garman, George
2010-01-01
This paper examines the effects of reading comprehension on the performance of online students in a beginning database management class. Reading comprehension is measured by the results of a Cloze Test administered online to the students during the first week of classes. Using data collected from 2002 through 2008, the significance of the Cloze…
77 FR 38292 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-27
... purchase from the HCUP Central Distributor for data years beginning in 1988. (2) The Kids' Inpatient Database (KID) is the only all-payer inpatient care database for children in the United States. The KID was... child health issues. The KID contains a sample of over 3 million discharges for children age 20 and...
Database Design Learning: A Project-Based Approach Organized through a Course Management System
ERIC Educational Resources Information Center
Dominguez, Cesar; Jaime, Arturo
2010-01-01
This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…
NASA Astrophysics Data System (ADS)
Liu, G.; Wu, C.; Li, X.; Song, P.
2013-12-01
The 3D urban geological information system has been a major part of the national urban geological survey project of the China Geological Survey in recent years. Large amounts of multi-source and multi-subject data are to be stored in urban geological databases. There are various models and vocabularies drafted and applied by industrial companies in urban geological data. Issues such as duplicate and ambiguous definitions of terms and different coding structures increase the difficulty of information sharing and data integration. To solve this problem, we proposed a national-standard-driven information classification and coding method to effectively store and integrate urban geological data, and we applied data dictionary technology to achieve structured and standard data storage. The overall purpose of this work is to set up a common data platform to provide an information sharing service. Research progress is as follows: (1) A unified classification and coding method for multi-source data based on national standards. The underlying national standards include GB 9649-88 for geology and GB/T 13923-2006 for geography. Current industrial models are compared with the national standards to build a mapping table. The attributes of the various urban geological data entity models are reduced to several categories according to their application phases and domains. A logical data model is then set up as a standard format to design data file structures for a relational database. (2) A multi-level data dictionary for data standardization constraints. Three levels of data dictionary are designed: the model data dictionary is used to manage system database files and ease maintenance of the whole database system; the attribute dictionary organizes the fields used in database tables; and the term and code dictionary provides a standard for the urban information system by adopting appropriate classification and coding methods. A comprehensive data dictionary manages system operation and security. (3) An extension to the system's data management functions based on the data dictionary. The data item constraint input function makes use of the standard term and code dictionary to obtain standardized input. The attribute dictionary organizes all the fields of an urban geological information database to ensure consistent use of terms for fields. The model dictionary is used to automatically generate a database operation interface with standard semantic content via the term and code dictionary. The above method and technology have been applied to the construction of the Fuzhou Urban Geological Information System, South-East China, with satisfactory results.
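The term-and-code dictionary constraint can be pictured as a lookup that stands between data entry and the database: only codes present in the dictionary are accepted, and each maps to one standard term. The fields and codes below are invented for illustration.

```python
# Dictionary-driven input constraint: non-standard codes never reach storage.
TERM_CODE_DICT = {
    "lithology":    {"J1": "sandstone", "J2": "mudstone", "J3": "granite"},
    "aquifer_type": {"Q1": "confined", "Q2": "unconfined"},
}

def validate(field, code):
    """Return the standard term for a code, or raise on a non-standard value."""
    try:
        return TERM_CODE_DICT[field][code]
    except KeyError:
        raise ValueError(f"non-standard value {code!r} for field {field!r}")

record = {"lithology": "J1", "aquifer_type": "Q2"}
print({field: validate(field, code) for field, code in record.items()})
```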
"First generation" automated DNA sequencing technology.
Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M
2011-10-01
Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.
Momo, Kenji
2018-01-01
Hospital-prepared drugs (HP), known as In'Naiseizai in Japan, are custom-prepared formulations which offer medical professionals an alternative administration pathway by changing the formulation of existing drugs according to a patient's needs. Preparing HP is one of several roles of pharmacists in providing personalized medicine at hospitals in Japan. In 2012, the Japanese Society of Hospital Pharmacists provided guidelines for the appropriate use of "hospital-prepared drugs". The following information was included in this guide: 1) documentation of the proper procedures, materials, prescription practices, etc.; 2) required approval from the institutional review board for each HP based on the risk-based classifications; and 3) assessment of the stability, efficacy, and safety of each HP. However, several problems persist for pharmacists trying to prepare or use HP appropriately; the most common is insufficient manpower to both assess and prepare these drugs during routine hospital work. To resolve this problem, we are developing an evidence database for HP based on surveys of the current literature. This database has been developed for 109 drugs to date. Data-driven assessment of the stability of HP showed that 52 of the 109 drugs examined (47.7%) had supporting stability data. Notably, only 6 of the 109 HP (5.5%) in the database had all three characteristics of "stability", "safety", and "efficacy". In conclusion, the application of this database will save manpower hours for hospital pharmacists in the preparation of HP. In the near future, we will make this database available to the wider medical community via the web or through the literature.
Zirconia in biomedical applications.
Chen, Yen-Wei; Moussi, Joelle; Drury, Jeanie L; Wataha, John C
2016-10-01
The use of zirconia in medicine and dentistry has rapidly expanded over the past decade, driven by its advantageous physical, biological, esthetic, and corrosion properties. Zirconia orthopedic hip replacements have shown superior wear-resistance over other systems; however, risk of catastrophic fracture remains a concern. In dentistry, zirconia has been widely adopted for endosseous implants, implant abutments, and all-ceramic crowns. Because of an increasing demand for esthetically pleasing dental restorations, zirconia-based ceramic restorations have become one of the dominant restorative choices. Areas covered: This review provides an updated overview of the applications of zirconia in medicine and dentistry with a focus on dental applications. The MEDLINE electronic database (via PubMed) was searched, and relevant original and review articles from 2010 to 2016 were included. Expert commentary: Recent data suggest that zirconia performs favorably in both orthopedic and dental applications, but quality long-term clinical data remain scarce. Concerns about the effects of wear, crystalline degradation, crack propagation, and catastrophic fracture are still debated. The future of zirconia in biomedical applications will depend on the generation of these data to resolve concerns.
LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.
Nagraj, V P; Magee, Neal E; Sheffield, Nathan C
2018-06-06
The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
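Per the LOLA method, the enrichment reported for each database region set is a Fisher's exact test on overlap counts against a shared universe of regions; the counts below are invented for illustration.

```python
# Fisher's exact test for a query region set against one database region set.
from scipy.stats import fisher_exact

universe = 50_000        # candidate regions in the universe (assumption)
query_n  = 400           # user query regions
db_n     = 2_000         # regions in one database set
overlap  = 90            # query regions overlapping the database set

table = [[overlap,        query_n - overlap],
         [db_n - overlap, universe - query_n - db_n + overlap]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio {odds:.2f}, p = {p:.2e}")
```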
Intelligent Data Granulation on Load: Improving Infobright's Knowledge Grid
NASA Astrophysics Data System (ADS)
Ślęzak, Dominik; Kowalski, Marcin
One of the major aspects of Infobright's relational database technology is the automatic decomposition of each data table into Rough Rows, each consisting of 64K of the original rows. Rough Rows are automatically annotated by Knowledge Nodes that represent compact information about the rows' values. Query performance depends on the quality of the Knowledge Nodes, i.e., their efficiency in minimizing access to the compressed portions of data stored on disk, according to the specific query optimization procedures. We show how to implement a mechanism for organizing the incoming data into Rough Rows that maximize the quality of the corresponding Knowledge Nodes. Given clear business-driven requirements, the implemented mechanism needs to be fully integrated with the data load process, causing no decrease in data load speed. The performance gain resulting from better data organization is illustrated by tests over our benchmark data. The differences between the proposed mechanism and some well-known procedures of database clustering or partitioning are discussed. The paper is a continuation of our patent application [22].
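A self-contained sketch of the pack-skipping idea follows; real Knowledge Nodes are richer than the min/max pairs used here, and sorting the column on load is only a crude stand-in for the granulation mechanism the paper develops.

```python
# Min/max "knowledge nodes" per 64K-row pack let a filter classify packs as
# irrelevant, fully relevant, or suspect before touching compressed data.
import random

ROUGH_ROW = 65_536

def build_knowledge_nodes(column):
    packs = [column[i:i + ROUGH_ROW] for i in range(0, len(column), ROUGH_ROW)]
    return [(min(p), max(p)) for p in packs], packs

def scan_gt(threshold, nodes, packs):
    """Count values > threshold, accessing only packs that may match."""
    hits, touched = 0, 0
    for (lo, hi), pack in zip(nodes, packs):
        if hi <= threshold:            # irrelevant pack: skip entirely
            continue
        if lo > threshold:             # fully relevant pack: count, don't scan
            hits += len(pack)
            continue
        touched += 1                   # suspect pack: must access the data
        hits += sum(1 for v in pack if v > threshold)
    return hits, touched

random.seed(0)
col = sorted(random.randrange(1_000_000) for _ in range(4 * ROUGH_ROW))
nodes, packs = build_knowledge_nodes(col)
print(scan_gt(750_000, nodes, packs))    # most packs are never scanned
```

With the column well organized on load, only the single pack straddling the threshold is "suspect"; on randomly organized data, every pack would have to be scanned.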
A Brief Assessment of LC2IEDM, MIST and Web Services for use in Naval Tactical Data Management
2004-07-01
server software, messaging between the client and server, and a database. The MIST database is implemented in an open source DBMS named PostgreSQL ... PostgreSQL had its beginnings at the University of California, Berkeley, in 1986 [11]. The development of PostgreSQL has since evolved into a ... contact history from the database.
Personalizing Sample Databases with Facebook Information to Increase Intrinsic Motivation
ERIC Educational Resources Information Center
Marzo, Asier; Ardaiz, Oscar; Sanz de Acedo, María Teresa; Sanz de Acedo, María Luisa
2017-01-01
Motivation is fundamental for students to achieve successful and complete learning. Motivation can be extrinsic, i.e., driven by external rewards, or intrinsic, i.e., driven by internal factors. Intrinsic motivation is the most effective and must be inspired by the task at hand. Here, a novel strategy is presented to increase intrinsic motivation…
Calyx{trademark} EA implementation at AECB
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report describes a project to examine the applicability of a knowledge-based decision support software for environmental assessment (Calyx) to assist the Atomic Energy Control Board in environmental screenings, assessment, management, and database searches. The report begins with background on the Calyx software and then reviews activities with regard to modification of the Calyx knowledge base for application to the nuclear sector. This is followed by lists of standard activities handled by the software and activities specific to the Board; the hierarchy of environmental components developed for the Board; details of impact rules that describe the conditions under which environmental impacts will occur (the bulk of the report); information on mitigation and monitoring rules and on instance data; and considerations for future work on implementing Calyx at the Board. Appendices include an introduction to expert systems and an overview of the Calyx knowledge base structure.
NASA Astrophysics Data System (ADS)
Raschka, Sebastian; Scott, Anne M.; Liu, Nan; Gunturu, Santosh; Huertas, Mar; Li, Weiming; Kuhn, Leslie A.
2018-03-01
While the advantage of screening vast databases of molecules to cover greater molecular diversity is often mentioned, in reality, only a few studies have been published demonstrating inhibitor discovery by screening more than a million compounds for features that mimic a known three-dimensional (3D) ligand. Two factors contribute: the general difficulty of discovering potent inhibitors, and the lack of free, user-friendly software to incorporate project-specific knowledge and user hypotheses into 3D ligand-based screening. The Screenlamp modular toolkit presented here was developed with these needs in mind. We show Screenlamp's ability to screen more than 12 million commercially available molecules and identify potent in vivo inhibitors of a G protein-coupled bile acid receptor within the first year of a discovery project. This pheromone receptor governs sea lamprey reproductive behavior, and to our knowledge, this project is the first to establish the efficacy of computational screening in discovering lead compounds for aquatic invasive species control. Significant enhancement in activity came from selecting compounds based on one of the hypotheses: that matching two distal oxygen groups in the 3D structure of the pheromone is crucial for activity. Six of the 15 most active compounds met these criteria. A second hypothesis—that presence of an alkyl sulfate side chain results in high activity—identified another 6 compounds in the top 10, demonstrating the significant benefits of hypothesis-driven screening.
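The distal-oxygen hypothesis lends itself to a small geometric filter, sketched below. This is not Screenlamp's implementation; the coordinates, reference distance, and tolerance are all invented for illustration.

```python
# Keep candidates whose two most distal oxygens are spaced like the pheromone's.
import itertools, math

def max_oxygen_distance(atoms):
    """atoms: list of (element, (x, y, z)) tuples for one 3D conformer."""
    oxy = [xyz for element, xyz in atoms if element == "O"]
    if len(oxy) < 2:
        return None
    return max(math.dist(a, b) for a, b in itertools.combinations(oxy, 2))

REFERENCE_OO = 12.4   # distal O...O distance in the pheromone (hypothetical)
TOLERANCE = 1.5

candidate = [("C", (0.0, 0.0, 0.0)), ("O", (1.2, 0.3, 0.0)),
             ("S", (8.8, 1.0, 0.2)), ("O", (13.1, 1.9, 0.6))]

d = max_oxygen_distance(candidate)
keep = d is not None and abs(d - REFERENCE_OO) <= TOLERANCE
print(f"O...O = {d:.2f} -> {'keep' if keep else 'reject'}")
```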
Protein Bioinformatics Databases and Resources
Chen, Chuming; Huang, Hongzhan; Wu, Cathy H.
2017-01-01
Many publicly available data repositories and resources have been developed to support protein related information management, data-driven hypothesis generation and biological knowledge discovery. To help researchers quickly find the appropriate protein related informatics resources, we present a comprehensive review (with categorization and description) of major protein bioinformatics databases in this chapter. We also discuss the challenges and opportunities for developing next-generation protein bioinformatics databases and resources to support data integration and data analytics in the Big Data era. PMID:28150231
The implementation and use of Ada on distributed systems with high reliability requirements
NASA Technical Reports Server (NTRS)
Knight, J. C.
1987-01-01
Performance analysis was begun on the Ada implementations. The goal is to supply the system designer with tools that allow a rational decision to be made, early in the design cycle, about whether a particular implementation can support a given application. Primary activities were: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; preparation and presentation of a paper at the 1987 Washington DC Ada Symposium; development of a refined approach to recovery that is presently being applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.
NASA Technical Reports Server (NTRS)
Stockwell, Alan E.; Cooper, Paul A.
1991-01-01
The Integrated Multidisciplinary Analysis Tool (IMAT) consists of a menu-driven executive system coupled with a relational database which links commercial structures, structural dynamics and control codes. The IMAT graphics system, a key element of the software, provides a common interface for storing, retrieving, and displaying graphical information. The IMAT Graphics Manual shows users of commercial analysis codes (MATRIXx, MSC/NASTRAN and I-DEAS) how to use the IMAT graphics system to obtain high quality graphical output using familiar plotting procedures. The manual explains the key features of the IMAT graphics system, illustrates their use with simple step-by-step examples, and provides a reference for users who wish to take advantage of the flexibility of the software to customize their own applications.
Scaria, Joy; Sreedharan, Aswathy; Chang, Yung-Fu
2008-01-01
Background: Microarrays are becoming a very popular tool for microbial detection and diagnostics. Although these diagnostic arrays are much simpler when compared to traditional transcriptome arrays, due to the high-throughput nature of the arrays the data analysis requirements still form a bottleneck for their widespread use. Hence we developed a new online data sharing and analysis environment customised for diagnostic arrays. Methods: The Microbial Diagnostic Array Workstation (MDAW) is a database-driven application with its database designed in MS Access and its front end in ASP.NET. Conclusion: MDAW is a new resource that is customised for the data analysis requirements of microbial diagnostic arrays. PMID:18811969
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blom, Philip Stephen; Marcillo, Omar Eduardo; Euler, Garrett Gene
InfraPy is a Python-based analysis toolkit being developed at LANL. The algorithms are intended for ground-based nuclear detonation detection applications, to detect, locate, and characterize explosive sources using infrasonic observations. The implementation is usable as a stand-alone Python library or as a command-line-driven tool operating directly on a database. With multiple scientists working on the project, we have begun using a LANL git repository for collaborative development and version control. Current and planned work on InfraPy focuses on the development of new algorithms and propagation models. Collaboration with Southern Methodist University (SMU) has helped identify bugs and limitations of the algorithms. Current usage development focuses on library imports and the CLI.
Interdisciplinary analysis procedures in the modeling and control of large space-based structures
NASA Technical Reports Server (NTRS)
Cooper, Paul A.; Stockwell, Alan E.; Kim, Zeen C.
1987-01-01
The paper describes a computer software system called the Integrated Multidisciplinary Analysis Tool, IMAT, that has been developed at NASA Langley Research Center. IMAT provides researchers and analysts with an efficient capability to analyze satellite control systems influenced by structural dynamics. Using a menu-driven interactive executive program, IMAT links a relational database to commercial structural and controls analysis codes. The paper describes the procedures followed to analyze a complex satellite structure and control system. The codes used to accomplish the analysis are described, and an example is provided of an application of IMAT to the analysis of a reference space station subject to a rectangular pulse loading at its docking port.
ERIC Educational Resources Information Center
Fagan, Judy Condit
2001-01-01
Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)
Case and Model Driven Dynamic Template Linking
2005-06-01
store the trips in a PostgreSQL database (www.postgresql.org) and the values stored in this database could be re-used to provide values for similar trips... [Residue of a flattened feature-comparison table: Preferences: YES / Yes, but limited; Print Form: YES / NO; Close Form: YES / NO (just "X"); Quit: YES / NO (just "X"); Show User Action History: YES / NO.] 6.5 DAML Ontologies
Conceptualizing a Genomics Software Institute (GSI)
Gilbert, Jack A.; Catlett, Charlie; Desai, Narayan; Knight, Rob; White, Owen; Robbins, Robert; Sankaran, Rajesh; Sansone, Susanna-Assunta; Field, Dawn; Meyer, Folker
2012-01-01
Microbial ecology has been enhanced greatly by the ongoing ‘omics revolution, bringing half the world's biomass and most of its biodiversity into analytical view for the first time; indeed, it feels almost like the invention of the microscope and the discovery of the new world at the same time. With major microbial ecology research efforts accumulating prodigious quantities of sequence, protein, and metabolite data, we are now poised to address environmental microbial research at macro scales, and to begin to characterize and understand the dimensions of microbial biodiversity on the planet. What is currently impeding progress is the need for a framework within which the research community can develop, exchange and discuss predictive ecosystem models that describe the biodiversity and functional interactions. Such a framework must encompass data and metadata transparency and interoperation; data and results validation, curation, and search; application programming interfaces for modeling and analysis tools; and human and technical processes and services necessary to ensure broad adoption. Here we discuss the need for focused community interaction to augment and deepen established community efforts, beginning with the Genomic Standards Consortium (GSC), to create a science-driven strategic plan for a Genomic Software Institute (GSI). PMID:22675605
Han, Chang-Hee; Lim, Jeong-Hwan; Lee, Jun-Hak; Kim, Kangsan; Im, Chang-Hwan
2016-01-01
It has frequently been reported that some users of conventional neurofeedback systems can experience only a small portion of the total feedback range due to the large interindividual variability of EEG features. In this study, we proposed a data-driven neurofeedback strategy considering the individual variability of electroencephalography (EEG) features to permit users of the neurofeedback system to experience a wider range of auditory or visual feedback without a customization process. The main idea of the proposed strategy is to adjust the ranges of each feedback level using the density in the offline EEG database acquired from a group of individuals. Twenty-two healthy subjects participated in offline experiments to construct an EEG database, and five subjects participated in online experiments to validate the performance of the proposed data-driven user feedback strategy. Using the optimized bin sizes, the number of feedback levels that each individual experienced was significantly increased to 139% and 144% of the original results with uniform bin sizes in the offline and online experiments, respectively. Our results demonstrated that the use of our data-driven neurofeedback strategy could effectively increase the overall range of feedback levels that each individual experienced during neurofeedback training.
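One plausible reading of this density-based strategy is equal-frequency binning: feedback-level boundaries are placed at percentiles of the pooled offline EEG feature distribution, so each level is occupied equally often across the group. A minimal sketch under that assumption (function names are ours, not the authors'):

```python
import numpy as np

def feedback_bin_edges(offline_values, n_levels=10):
    # Bin edges at equally spaced percentiles of the offline database,
    # so each feedback level is equally populated across individuals.
    return np.percentile(offline_values, np.linspace(0, 100, n_levels + 1))

def feedback_level(value, edges):
    # Map an online EEG feature value to a level in [0, n_levels - 1].
    idx = np.searchsorted(edges, value, side="right") - 1
    return int(np.clip(idx, 0, len(edges) - 2))

offline = np.random.standard_normal(22 * 100)  # stand-in for 22 subjects' data
edges = feedback_bin_edges(offline, n_levels=10)
print(feedback_level(0.3, edges))
```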
Structure and software tools of AIDA.
Duisterhout, J S; Franken, B; Witte, F
1987-01-01
AIDA consists of a set of software tools to allow for fast development and easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system both during development and operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and output formatting. Unlike an application generator, the user of AIDA may select parts of the tools to fulfill his needs and program other subsystems not developed with AIDA. The AIDA software uses as host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure. This relational model overcomes the restrictions of the global structure regarding string length. The global structure is especially powerful for sorting purposes. Using MUMPS as a host language gives the user an easy interface between user-defined data validation checks or other user-defined code and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment which requires a flexible approach. The prototyping facility of AIDA operates terminal-independently and is, to a great extent, multilingual. Most of these features are table-driven; this allows on-line changes in the use of terminal type and language, but also causes overhead. AIDA has a set of optimizing tools by which it is possible to build faster, but (of course) less flexible, code from these table definitions. By separating the AIDA software into a source and a run-time version, one is able to write implementation-specific code which can be selected and loaded by a special source loader that is part of the AIDA software. This feature is also accessible for maintaining software on different sites and on different installations.
The EPA CompTox Chemistry Dashboard: A Web-Based Data ...
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data-driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences, including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data, but recent developments have focused on a new software architecture that assembles the resources into a single platform. A new web application, the CompTox Chemistry Dashboard, provides access to data associated with ~720,000 chemical substances. These data include experimental and predicted physicochemical property data, bioassay screening data associated with the ToxCast program, product and functional use information and a myriad of related data of value to environmental scientists. The dashboard provides chemical-based searching based on chemical names, synonyms and CAS Registry Numbers. Flexible search capabilities allow for chemical identification
The EPA CompTox Chemistry Dashboard - an online resource ...
The U.S. Environmental Protection Agency (EPA) Computational Toxicology Program integrates advances in biology, chemistry, and computer science to help prioritize chemicals for further research based on potential human health risks. This work involves computational and data-driven approaches that integrate chemistry, exposure and biological data. As an outcome of these efforts the National Center for Computational Toxicology (NCCT) has measured, assembled and delivered an enormous quantity and diversity of data for the environmental sciences, including high-throughput in vitro screening data, in vivo and functional use data, exposure models and chemical databases with associated properties. A series of software applications and databases have been produced over the past decade to deliver these data. Recent work has focused on the development of a new architecture that assembles the resources into a single platform. With a focus on delivering access to Open Data streams, web service integration, accessibility, and a user-friendly web application, the CompTox Dashboard provides access to data associated with ~720,000 chemical substances. These data include research data in the form of bioassay screening data associated with the ToxCast program, experimental and predicted physicochemical properties, product and functional use information and related data of value to environmental scientists. This presentation will provide an overview of the CompTox Dashboard and its va
NASA Astrophysics Data System (ADS)
Schau, Kyle A.
This thesis presents a complete method of modeling the autospectra of turbulence in closed form via an expansion series using the von Karman model as a basis function. It is capable of modeling turbulence in all three directions of fluid flow: longitudinal, lateral, and vertical, separately, thus eliminating the assumption of homogeneous, isotropic flow. A thorough investigation into the expansion series is presented, with the strengths and weaknesses highlighted. Furthermore, numerical aspects and theoretical derivations are provided. This method is then tested against three highly complex flow fields: wake turbulence inside wind farms, helicopter downwash, and helicopter downwash coupled with turbulence shed from a ship superstructure. These applications demonstrate that this method is remarkably robust, that the developed autospectral models are virtually tailored to the design of white noise driven shaping filters, and that these models in closed form facilitate a greater understanding of complex flow fields in wind engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, Jennifer; Sandberg, Tami
The Wind-Wildlife Impacts Literature Database (WILD), formerly known as the Avian Literature Database, was created in 1997. The goal of the database was to begin tracking the research that detailed the potential impact of wind energy development on birds. The Avian Literature Database was originally housed on a proprietary platform called Livelink ECM from OpenText and maintained by in-house technical staff. The initial set of records was added by library staff. A vital part of the newly launched Drupal-based WILD database is the Bibliography module. Many of the resources included in the database have digital object identifiers (DOI). The bibliographic information for any item that has a DOI can be imported into the database using this module. This greatly reduces the amount of manual data entry required to add records to the database. The content available in WILD is international in scope, which can be easily discerned by looking at the tags available in the browse menu.
Perspective: Interactive material property databases through aggregation of literature data
NASA Astrophysics Data System (ADS)
Seshadri, Ram; Sparks, Taylor D.
2016-05-01
Searchable, interactive databases of material properties, particularly those relating to functional materials (magnetics, thermoelectrics, photovoltaics, etc.), are curiously missing from discussions of machine-learning and other data-driven methods for advancing new materials discovery. Here we discuss the manual aggregation of experimental data from the published literature for the creation of interactive databases that allow the original experimental data as well as additional metadata to be visualized in an interactive manner. The databases described involve materials for thermoelectric energy conversion, and for the electrodes of Li-ion batteries. The data can be subject to machine-learning, accelerating the discovery of new materials.
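As a sketch of how such a manually aggregated literature table might be queried and visualized once assembled (the file and column names here are assumptions, not the schema of the databases described):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical aggregated table: one row per reported measurement, with a DOI
# column linking each point back to the original experimental paper.
df = pd.read_csv("thermoelectrics.csv")  # assumed columns: composition, T_K, zT, doi

near_ambient = df[df["T_K"].between(300, 400)]
best = near_ambient.sort_values("zT", ascending=False).head(20)
print(best[["composition", "zT", "doi"]])

best.plot.scatter(x="T_K", y="zT")  # interactive tools replace this static plot
plt.xlabel("Temperature (K)")
plt.ylabel("zT")
plt.show()
```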
International Soil Carbon Network (ISCN) Database v3-1
Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service]; van Ingen, Catharine [Microsoft Research]; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia]; Beekwilder, Norman [University of Virginia]
2016-01-01
The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.
33 CFR 334.1190 - Hood Canal and Dabob Bay, Wash.; naval non-explosive torpedo testing area.
Code of Federal Regulations, 2013 CFR
2013-07-01
... southwesterly to the point of beginning. (2) The regulations. (i) Propeller-driven or other noise-generating craft shall not work their screws or otherwise generate other than incidental noise in the area during...
33 CFR 334.1190 - Hood Canal and Dabob Bay, Wash.; naval non-explosive torpedo testing area.
Code of Federal Regulations, 2012 CFR
2012-07-01
... southwesterly to the point of beginning. (2) The regulations. (i) Propeller-driven or other noise-generating craft shall not work their screws or otherwise generate other than incidental noise in the area during...
33 CFR 334.1190 - Hood Canal and Dabob Bay, Wash.; naval non-explosive torpedo testing area.
Code of Federal Regulations, 2014 CFR
2014-07-01
... southwesterly to the point of beginning. (2) The regulations. (i) Propeller-driven or other noise-generating craft shall not work their screws or otherwise generate other than incidental noise in the area during...
Getting Started with AppleWorks Data Base. First Edition.
ERIC Educational Resources Information Center
Schlenker, Richard M.
This manual is a hands-on teaching tool for beginning users of the AppleWorks database software. It was developed to allow Apple IIGS users who are generally familiar with their machine and its peripherals to build a simple AppleWorks database file using version 2.0 or 2.1 of the program, and to store, print, and manipulate the file. The materials…
Schematic driven silicon photonics design
NASA Astrophysics Data System (ADS)
Chrostowski, Lukas; Lu, Zeqin; Flückiger, Jonas; Pond, James; Klein, Jackson; Wang, Xu; Li, Sarah; Tai, Wei; Hsu, En Yao; Kim, Chan; Ferguson, John; Cone, Chris
2016-03-01
Electronic circuit designers commonly start their design process with a schematic, namely an abstract representation of the physical circuit. In integrated photonics, on the other hand, it is very common for the design to begin at the physical component level. In order to build large integrated photonic systems, it is crucial to design using a schematic-driven approach. This includes simulations based on schematics, schematic-driven layout, layout versus schematic verification, and post-layout simulations. This paper describes such a design framework implemented using Mentor Graphics and Lumerical Solutions design tools. In addition, we describe challenges in silicon photonics related to manufacturing, how these can be taken into account in simulations, and how they impact circuit performance.
Assessment of IT solutions used in the Hungarian income tax microsimulation system
NASA Astrophysics Data System (ADS)
Molnar, I.; Hardhienata, S.
2017-01-01
This paper focuses on the use of information technology (IT) in diverse microsimulation studies and presents state-of-the-art solutions in the traditional application field of personal income tax simulation. The aim of the paper is to promote solutions that can improve the efficiency and quality of microsimulation model implementation, assess their applicability, and help shift attention from microsimulation model implementation and data analysis towards experiment design and model use. First, the authors briefly discuss the relevant characteristics of the microsimulation application field and the managerial decision-making problem. After examination of the salient problems, advanced IT solutions, such as meta-databases and service-oriented architecture, are presented. The authors show how selected technologies can be applied to support data-driven, behavior-driven, and even agent-based personal income tax microsimulation model development. Finally, examples are presented and references made to the Hungarian Income Tax Simulator (HITS) models and their results. The paper concludes with a summary of the IT assessment and application-related author remarks dedicated to an Indonesian Income Tax Microsimulation Model.
ERIC Educational Resources Information Center
Weintraub, Robert S.; Martineau, Jennifer W.
2002-01-01
Increasingly in demand, just-in-time learning is associated with informal, learner-driven knowledge acquisition. Technologies being used include databases, intranets, portals, and content management systems. (JOW)
Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App
NASA Astrophysics Data System (ADS)
Nurnawati, E. K.; Ermawati, E.
2018-02-01
An integration database is a database that acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile platform, based on the smart city concept. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build the relational database model. The resulting design is tested with several prototype apps, and system performance is analyzed with test data. The integrated database can be utilized by both admins and users in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so if data change in the database, the data in the Android applications also change. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
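A minimal sketch of the integration-database idea: several client applications share one schema, and a change committed by one client is immediately visible to the others (table and column names are illustrative, not the paper's actual schema):

```python
import sqlite3

conn = sqlite3.connect("smartcity.db")
conn.execute("""CREATE TABLE IF NOT EXISTS place (
    id INTEGER PRIMARY KEY,
    category TEXT,        -- 'culture', 'government', 'hotel', 'transport'
    name TEXT NOT NULL,
    lat REAL, lon REAL)""")

# Application A (e.g. an admin tool) inserts a record...
conn.execute("INSERT INTO place (category, name, lat, lon) VALUES (?, ?, ?, ?)",
             ("hotel", "Hotel Example", -7.797, 110.370))
conn.commit()  # ...and at commit time the change is visible to all clients.

# Application B (e.g. the mobile app's backend) queries the same store.
for (name,) in conn.execute("SELECT name FROM place WHERE category = ?", ("hotel",)):
    print(name)
```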
Search Fermilab Insect Database
The database reflects insect observations at Fermilab. Searches can be performed by common name and insect order, with name-matching options including equals, contains, begins with, and ends with.
Odronitz, Florian; Kollmar, Martin
2006-01-01
Background Annotation of protein sequences of eukaryotic organisms is crucial for the understanding of their function in the cell. Manual annotation is still by far the most accurate way to correctly predict genes. The classification of protein sequences, their phylogenetic relation and the assignment of function involves information from various sources. This often leads to a collection of heterogeneous data, which is hard to track. Cytoskeletal and motor proteins consist of large and diverse superfamilies comprising up to several dozen members per organism. To date, there is no integrated tool available to assist in the manual large-scale comparative genomic analysis of protein families. Description Pfarao (Protein Family Application for Retrieval, Analysis and Organisation) is a database-driven online working environment for the analysis of manually annotated protein sequences and their relationships. Currently, the system can store and interrelate a wide range of information about protein sequences, species, phylogenetic relations and sequencing projects, as well as links to literature and domain predictions. Sequences can be imported from multiple sequence alignments that are generated during the annotation process. A web interface allows users to conveniently browse the database and to compile tabular and graphical summaries of its content. Conclusion We implemented a protein sequence-centric web application to store, organize, interrelate, and present heterogeneous data that is generated in manual genome annotation and comparative genomics. The application has been developed for the analysis of cytoskeletal and motor proteins (CyMoBase) but can easily be adapted for any protein. PMID:17134497
Databases applicable to quantitative hazard/risk assessment - Towards a predictive systems toxicology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waters, Michael; Jackson, Marcus
2008-11-15
The Workshop on The Power of Aggregated Toxicity Data addressed the requirement for distributed databases to support quantitative hazard and risk assessment. The authors have conceived and constructed, with federal support, several databases that have been used in hazard identification and risk assessment. The first of these databases, the EPA Gene-Tox Database, was developed for the EPA Office of Toxic Substances by the Oak Ridge National Laboratory, and is currently hosted by the National Library of Medicine. This public resource is based on the collaborative evaluation, by government, academia, and industry, of short-term tests for the detection of mutagens and presumptive carcinogens. The two-phased evaluation process resulted in more than 50 peer-reviewed publications on test system performance and a qualitative database on thousands of chemicals. Subsequently, the graphic and quantitative EPA/IARC Genetic Activity Profile (GAP) Database was developed in collaboration with the International Agency for Research on Cancer (IARC). A chemical database driven by consideration of the lowest effective dose, GAP has served IARC for many years in support of hazard classification of potential human carcinogens. The Toxicological Activity Profile (TAP) prototype database was patterned after GAP and utilized acute, subchronic, and chronic data from the Office of Air Quality Planning and Standards. TAP demonstrated the flexibility of the GAP format for air toxics, water pollutants and other environmental agents. The GAP format was also applied to developmental toxicants and was modified to represent quantitative results from the rodent carcinogen bioassay. More recently, the authors have constructed: 1) the NIEHS Genetic Alterations in Cancer (GAC) Database, which quantifies specific mutations found in cancers induced by environmental agents, and 2) the NIEHS Chemical Effects in Biological Systems (CEBS) Knowledgebase, which integrates genomic and other biological data including dose-response studies in toxicology and pathology. Each of the public databases has been discussed in prior publications. They will be briefly described in the present report from the perspective of aggregating datasets to augment the data and information contained within them.
ERIC Educational Resources Information Center
Blau, Ina; Hameiri, Mira
2017-01-01
Digital educational data management has become an integral part of school practices. Accessing school databases by teachers, students, and parents from mobile devices promotes data-driven educational interactions based on real-time information. This paper analyses mobile access to an educational database in a large sample of 429 schools during an…
Machine learning in materials informatics: recent applications and prospects
Ramprasad, Rampi; Batra, Rohit; Pilania, Ghanshyam; ...
2017-12-13
Propelled partly by the Materials Genome Initiative, and partly by the algorithmic developments and the resounding successes of data-driven efforts in other domains, informatics strategies are beginning to take shape within materials science. These approaches lead to surrogate machine learning models that enable rapid predictions based purely on past data rather than by direct experimentation or by computations/simulations in which fundamental equations are explicitly solved. Data-centric informatics methods are becoming useful to determine material properties that are hard to measure or compute using traditional methods—due to the cost, time or effort involved—but for which reliable data either already exists or can be generated for at least a subset of the critical cases. Predictions are typically interpolative, involving fingerprinting a material numerically first, and then following a mapping (established via a learning algorithm) between the fingerprint and the property of interest. Fingerprints, also referred to as “descriptors”, may be of many types and scales, as dictated by the application domain and needs. Predictions may also be extrapolative—extending into new materials spaces—provided prediction uncertainties are properly taken into account. This article attempts to provide an overview of some of the recent successful data-driven “materials informatics” strategies undertaken in the last decade, with particular emphasis on the fingerprint or descriptor choices. The review also identifies some challenges the community is facing and those that should be overcome in the near future.
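The fingerprint-to-property mapping described here can be sketched in a few lines of scikit-learn; the descriptors and property below are synthetic stand-ins, not a real materials dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 8))        # numerical fingerprints ("descriptors")
y = X @ rng.random(8) + 0.1 * rng.standard_normal(200)  # stand-in property

# Learn the fingerprint -> property mapping from past data.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

x_candidate = rng.random((1, 8))   # fingerprint of an unseen material
print(model.predict(x_candidate))  # interpolative surrogate prediction
```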
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
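The database-driven surrogate loop reads, in outline: sample the design space, fit a cheap approximation to the expensive simulations, optimize the approximation, then re-evaluate the best design at high fidelity. A minimal sketch with a stand-in objective (the solver, bounds, and interpolant choice are illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_sim(x):
    # Stand-in for a high-fidelity aerodynamic analysis.
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(1)
X = rng.random((40, 2))                      # sampled design variables (the "database")
y = np.array([expensive_sim(x) for x in X])  # high-fidelity evaluations
surrogate = RBFInterpolator(X, y)            # cheap approximation of the database

res = minimize(lambda x: float(surrogate(x.reshape(1, -1))[0]),
               x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print(res.x, expensive_sim(res.x))           # verify the optimum at high fidelity
```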
NASA Technical Reports Server (NTRS)
Stoica, A.; Keymeulen, D.; Zebulum, R. S.; Ferguson, M. I.
2003-01-01
This paper describes scalability issues of evolutionary-driven automatic synthesis of electronic circuits. The article begins by reviewing the concepts of circuit evolution and discussing the limitations of this technique when trying to achieve more complex systems.
NASA Technical Reports Server (NTRS)
Anderson, David J.; Mizukami, Masashi
1993-01-01
NASA has initiated the High Speed Research (HSR) program with the goal to develop technologies for a new generation, economically viable, environmentally acceptable, supersonic transport (SST) called the High Speed Civil Transport (HSCT). A significant part of this effort is expected to be in multidisciplinary systems integration, such as in propulsion airframe integration (PAI). In order to assimilate the knowledge database on PAI for SST-type aircraft, a bibliography on this subject was compiled. The bibliography contains over 1200 entries, full abstracts, and indexes. Related topics are also covered, such as the following: engine inlets, engine cycles, nozzles, existing supersonic cruise aircraft, noise issues, computational fluid dynamics, aerodynamics, and external interference. All identified documents from 1980 through early 1991 are included; this covers the latter part of the NASA Supersonic Cruise Research (SCR) program and the beginnings of the HSR program. In addition, some pre-1980 documents of significant merit or reference value are also included. The references were retrieved via a computerized literature search using the NASA RECON database system.
Nuclear data activities at the n_TOF facility at CERN
NASA Astrophysics Data System (ADS)
Gunsing, F.; Aberle, O.; Andrzejewski, J.; Audouin, L.; Bécares, V.; Bacak, M.; Balibrea-Correa, J.; Barbagallo, M.; Barros, S.; Bečvář, F.; Beinrucker, C.; Belloni, F.; Berthoumieux, E.; Billowes, J.; Bosnar, D.; Brugger, M.; Caamaño, M.; Calviño, F.; Calviani, M.; Cano-Ott, D.; Cardella, R.; Casanovas, A.; Castelluccio, D. M.; Cerutti, F.; Chen, Y. H.; Chiaveri, E.; Colonna, N.; Cortés-Giraldo, M. A.; Cortés, G.; Cosentino, L.; Damone, L. A.; Deo, K.; Diakaki, M.; Domingo-Pardo, C.; Dressler, R.; Dupont, E.; Durán, I.; Fernández-Domínguez, B.; Ferrari, A.; Ferreira, P.; Finocchiaro, P.; Frost, R. J. W.; Furman, V.; Ganesan, S.; García, A. R.; Gawlik, A.; Gheorghe, I.; Glodariu, T.; Gonçalves, I. F.; González, E.; Goverdovski, A.; Griesmayer, E.; Guerrero, C.; Göbel, K.; Harada, H.; Heftrich, T.; Heinitz, S.; Hernández-Prieto, A.; Heyse, J.; Jenkins, D. G.; Jericha, E.; Käppeler, F.; Kadi, Y.; Katabuchi, T.; Kavrigin, P.; Ketlerov, V.; Khryachkov, V.; Kimura, A.; Kivel, N.; Kokkoris, M.; Krtička, M.; Leal-Cidoncha, E.; Lederer, C.; Leeb, H.; Lerendegui, J.; Licata, M.; Lo Meo, S.; Lonsdale, S. J.; Losito, R.; Macina, D.; Marganiec, J.; Martínez, T.; Masi, A.; Massimi, C.; Mastinu, P.; Mastromarco, M.; Matteucci, F.; Maugeri, E. A.; Mazzone, A.; Mendoza, E.; Mengoni, A.; Milazzo, P. M.; Mingrone, F.; Mirea, M.; Montesano, S.; Musumarra, A.; Nolte, R.; Oprea, A.; Palomo-Pinto, F. R.; Paradela, C.; Patronis, N.; Pavlik, A.; Perkowski, J.; Porras, I.; Praena, J.; Quesada, J. M.; Rajeev, K.; Rauscher, T.; Reifarth, R.; Riego-Perez, A.; Robles, M.; Rout, P.; Radeck, D.; Rubbia, C.; Ryan, J. A.; Sabaté-Gilarte, M.; Saxena, A.; Schillebeeckx, P.; Schmidt, S.; Schumann, D.; Sedyshev, P.; Smith, A. G.; Stamatopoulos, A.; Suryanarayana, S. V.; Tagliente, G.; Tain, J. L.; Tarifeño-Saldivia, A.; Tarrío, D.; Tassan-Got, L.; Tsinganis, A.; Valenta, S.; Vannini, G.; Variale, V.; Vaz, P.; Ventura, A.; Vlachoudis, V.; Vlastou, R.; Wallner, A.; Warren, S.; Weigand, M.; Weiss, C.; Wolf, C.; Woods, P. J.; Wright, T.; Žugec, P.
2016-10-01
Nuclear data in general, and neutron-induced reaction cross sections in particular, are important for a wide variety of research fields. They play a key role in the safety and criticality assessment of nuclear technology, not only for existing power reactors but also for radiation dosimetry, medical applications, the transmutation of nuclear waste, accelerator-driven systems, fuel cycle investigations and future reactor systems as in Generation IV. Applications of nuclear data are also related to research fields such as the study of nuclear level densities and stellar nucleosynthesis. Simulations and calculations of nuclear technology applications largely rely on evaluated nuclear data libraries. The evaluations in these libraries are based both on experimental data and theoretical models. Experimental nuclear reaction data are compiled on a worldwide basis by the international network of Nuclear Reaction Data Centres (NRDC) in the EXFOR database. The EXFOR database forms an important link between nuclear data measurements and the evaluated data libraries. CERN's neutron time-of-flight facility n_TOF has produced a considerable amount of experimental data since it became fully operational with the start of the scientific measurement programme in 2001. While for a long period a single measurement station (EAR1), located at 185 m from the neutron production target, was available, the construction of a second beam line at 20 m (EAR2) in 2014 has substantially increased the measurement capabilities of the facility. An outline of the experimental nuclear data activities at CERN's neutron time-of-flight facility n_TOF will be presented.
The ``Missing Compounds'' affair in functionality-driven material discovery
NASA Astrophysics Data System (ADS)
Zunger, Alex
2014-03-01
In the paradigm of "data-driven discovery," underlying one of the leading streams of the Materials Genome Initiative (MGI), one attempts to compute, high-throughput style, as many of the properties as possible of the N (about 10^5-10^6) compounds listed in databases of previously known compounds. One then inspects the ensuing Big Data, searching for useful trends. The alternative and complementary paradigm of "functionality-directed search and optimization" used here searches instead for the n much smaller than N configurations and compositions that have the desired value of the target functionality. Examples include the use of genetic and other search methods that optimize the structure or identity of atoms on lattice sites, using atomistic electronic structure (such as first-principles) approaches in search of a given electronic property. This addresses a few of the bottlenecks that have faced the alternative data-driven/high-throughput/Big Data philosophy: (i) When the configuration space is theoretically of infinite size, building a complete database as in data-driven discovery is impossible, yet searching for the optimum functionality is still a well-posed problem. (ii) The configuration space that we explore might include artificially grown, kinetically stabilized systems (such as 2D layer stacks, superlattices, colloidal nanostructures, and fullerenes) that are not listed in the compound databases used by data-driven approaches. (iii) A large fraction of chemically plausible compounds have not been experimentally synthesized, so in the data-driven approach these are often skipped. In our approach we search explicitly for such "Missing Compounds". It is likely that many interesting material properties will be found in cases (i)-(iii) that elude high-throughput searches based on databases encapsulating existing knowledge. I will illustrate (a) functionality-driven discovery of topological insulators and valley-split quantum-computer semiconductors, as well as (b) the use of "first-principles thermodynamics" to discern which of the previously "missing compounds" should, in fact, exist and in which structure. Synthesis efforts by the Poeppelmeier group at NU realized 20 never-before-made half-Heusler compounds out of the 20 predicted ones, in our predicted space groups. This type of theory-led experimental search of designed materials with target functionalities may shorten the current process of discovery of interesting functional materials. Supported by DOE, Office of Science, Energy Frontier Research Center for Inverse Design.
Seasonal Forecasting of Fire Weather Based on a New Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Dowdy, Andrew J.; Field, Robert D.; Spessa, Allan C.
2016-01-01
Seasonal forecasting of fire weather is examined based on a recently produced global database of the Fire Weather Index (FWI) system beginning in 1980. Seasonal average values of the FWI are examined in relation to measures of the El Nino-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). The results are used to examine seasonal forecasts of fire weather conditions throughout the world.
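A sketch of the kind of seasonal relationship examined: correlating seasonal-mean FWI against an ENSO index, season by season (the file and column names are assumptions, not the study's actual data layout):

```python
import pandas as pd

# Assumed columns: year, season, fwi_mean (seasonal FWI), nino34 (ENSO index).
df = pd.read_csv("fwi_seasonal.csv")
corr_by_season = df.groupby("season").apply(
    lambda g: g["fwi_mean"].corr(g["nino34"]))  # Pearson r per season
print(corr_by_season)
```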
Modeling Constellation Virtual Missions Using the Vdot(Trademark) Process Management Tool
NASA Technical Reports Server (NTRS)
Hardy, Roger; ONeil, Daniel; Sturken, Ian; Nix, Michael; Yanez, Damian
2011-01-01
The authors have identified a software tool suite that will support NASA's Virtual Mission (VM) effort. This is accomplished by transforming a spreadsheet database of mission events, task inputs and outputs, timelines, and organizations into process visualization tools and a Vdot process management model that includes embedded analysis software as well as requirements and information related to data manipulation and transfer. This paper describes the progress to date, the application of the Virtual Mission not only to Constellation but to other architectures, and the pertinence to other aerospace applications. Vdot's intuitive visual interface brings VMs to life by turning static, paper-based processes into active, electronic processes that can be deployed, executed, managed, verified, and continuously improved. A VM can be executed using a computer-based, human-in-the-loop, real-time format, under the direction and control of the NASA VM Manager. Engineers in the various disciplines will not have to be Vdot-proficient but rather can fill out on-line, Excel-type databases with the mission information discussed above. The authors' tool suite converts this database into several process visualization tools for review and into Microsoft Project, which can be imported directly into Vdot. Many tools can be embedded directly into Vdot, and when the necessary data/information is received from a preceding task, the analysis can be initiated automatically. Other NASA analysis tools are too complex for this process, but Vdot automatically notifies the tool user that the data has been received and analysis can begin. The VM can be simulated from end to end using the authors' tool suite. The planned approach for the Vdot-based process simulation is to generate the process model from a database; other advantages of this semi-automated approach are that the participants can be geographically remote and that, after refining the process models via the human-in-the-loop simulation, the system can evolve into a process management server for the actual process.
Defining the demands and meeting the challenges of integrated bird conservation
Charles K. Baxter
2005-01-01
Understanding the demands of integrated bird conservation begins with a critical assessment of the North American Bird Conservation Initiative's (NABCI) goal: "Regionally-based, biologically-driven, landscape-oriented partnerships delivering the full spectrum of bird conservation across the entirety of North America."
21SSD: a new public 21-cm EoR database
NASA Astrophysics Data System (ADS)
Eames, Evan; Semelin, Benoît
2018-05-01
With current efforts inching closer to detecting the 21-cm signal from the Epoch of Reionization (EoR), proper preparation will require publicly available simulated models of the various forms the signal could take. In this work we present a database of such models, available at 21ssd.obspm.fr. The models are created with a fully-coupled radiative hydrodynamic simulation (LICORICE), and are created at high resolution (1024³). We also begin to analyse and explore the possible 21-cm EoR signals (with Power Spectra and Pixel Distribution Functions), and study the effects of thermal noise on our ability to recover the signal out to high redshifts. Finally, we begin to explore the concepts of 'distance' between different models, which represents a crucial step towards optimising parameter space sampling, training neural networks, and finally extracting parameter values from observations.
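For context, the power spectra mentioned above are typically spherically averaged over k-shells of the simulated brightness-temperature cube. A minimal sketch follows (normalization conventions vary between codes; this is not 21SSD's actual pipeline):

```python
import numpy as np

def spherical_pk(cube, box_mpc):
    # Spherically averaged power spectrum of a periodic cube.
    n = cube.shape[0]
    pk3 = np.abs(np.fft.fftn(cube)) ** 2 / cube.size
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box_mpc / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    dk = 2 * np.pi / box_mpc
    shell = (kmag / dk).astype(int).ravel()        # k-shell index per mode
    counts = np.bincount(shell)
    power = np.bincount(shell, weights=pk3.ravel())
    with np.errstate(invalid="ignore", divide="ignore"):
        pk = power / counts                        # mean power per shell
    return dk * np.arange(1, len(counts)), pk[1:]  # drop the k = 0 mode

cube = np.random.standard_normal((64, 64, 64))     # stand-in for a signal cube
k, pk = spherical_pk(cube, box_mpc=200.0)
```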
Evaluation of software maintainability with openEHR - a comparison of architectures.
Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, James R
2014-11-01
To assess whether it is easier to maintain a clinical information system developed using openEHR model-driven development versus mainstream methods. A new open source application (GastrOS) has been developed following openEHR's multi-level modelling approach using .Net/C#, based on the same requirements as an existing clinically used application developed using Microsoft Visual Basic and an Access database. In the latter, almost all the domain knowledge was embedded into the software code and data model. The same domain knowledge has been expressed as a set of openEHR archetypes in GastrOS. We then introduced eight real-world change requests that had accumulated during live clinical usage, and implemented these in both systems while measuring time for various development tasks and change in software size for each change request. Overall it took half the time to implement changes in GastrOS. However, it was the more difficult application to modify for one change request, suggesting the nature of change is also important. It was not possible to implement changes by modelling only. Comparison of relative measures of time and software size change within each application highlights how architectural differences affected maintainability across change requests. The use of openEHR model-driven development can result in better software maintainability. The degree to which openEHR affects software maintainability depends on the extent and nature of domain knowledge involved in changes. Although we used relative measures for time and software size, confounding factors could not be totally excluded, as a controlled study design was not feasible. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Decision Manifold Approximation for Physics-Based Simulations
NASA Technical Reports Server (NTRS)
Wong, Jay Ming; Samareh, Jamshid A.
2016-01-01
With the recent surge of success in big-data driven deep learning problems, many of these frameworks focus on the notion of architecture design and utilizing massive databases. However, in some scenarios massive sets of data may be difficult, and in some cases infeasible, to acquire. In this paper we discuss a trajectory-based framework that quickly learns the underlying decision manifold of binary simulation classifications while judiciously selecting exploratory target states to minimize the number of required simulations. Furthermore, we draw particular attention to the simulation prediction application idealized to the case where failures in simulations can be predicted and avoided, providing machine intelligence to novice analysts. We demonstrate this framework in various forms of simulations and discuss its efficacy.
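The judicious-selection idea resembles uncertainty sampling from active learning: run the expensive binary simulation only at the candidate state where the current classifier is least certain. A minimal sketch under that interpretation (the simulator and state space are stand-ins, not the paper's framework):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def run_simulation(x):
    # Stand-in binary simulator: 1 = completes, 0 = fails.
    return int(x[0] + x[1] > 1.0)

rng = np.random.default_rng(0)
pool = rng.random((500, 2))              # candidate target states
X = np.array([[0.1, 0.1], [0.9, 0.9]])   # two seed runs, one per class
y = np.array([run_simulation(x) for x in X])

for _ in range(20):                      # limited simulation budget
    clf = LogisticRegression().fit(X, y)
    proba = clf.predict_proba(pool)[:, 1]
    i = int(np.argmin(np.abs(proba - 0.5)))  # most ambiguous candidate
    X = np.vstack([X, pool[i]])
    y = np.append(y, run_simulation(pool[i]))

print(clf.coef_, clf.intercept_)         # learned (linear) decision boundary
```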
Yates, John R
2015-11-01
Advances in computer technology and software have driven developments in mass spectrometry over the last 50 years. Computers and software have been impactful in three areas: the automation of difficult calculations to aid interpretation, the collection of data and control of instruments, and data interpretation. As the power of computers has grown, so too has the utility and impact on mass spectrometers and their capabilities. This has been particularly evident in the use of tandem mass spectrometry data to search protein and nucleotide sequence databases to identify peptide and protein sequences. This capability has driven the development of many new approaches to study biological systems, including the use of "bottom-up shotgun proteomics" to directly analyze protein mixtures.
Sharrow, David J; Anderson, James J
2016-12-01
The rise in human life expectancy has involved declines in intrinsic and extrinsic mortality processes associated, respectively, with senescence and environmental challenges. To better understand the factors driving this rise, we apply a two-process vitality model to data from the Human Mortality Database. Model parameters yield intrinsic and extrinsic cumulative survival curves from which we derive intrinsic and extrinsic expected life spans (ELS). Intrinsic ELS, a measure of longevity acted on by intrinsic, physiological factors, changed slowly over two centuries and then entered a second phase of increasing longevity ostensibly brought on by improvements in old-age death reduction technologies and cumulative health behaviors throughout life. The model partitions the majority of the increase in life expectancy before 1950 to increasing extrinsic ELS driven by reductions in environmental, event-based health challenges in both childhood and adulthood. In the post-1950 era, the extrinsic ELS of females appears to be converging to the intrinsic ELS, whereas the extrinsic ELS of males is approximately 20 years lower than the intrinsic ELS.
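The two-process decomposition can be illustrated numerically: total survival is the product of an intrinsic (senescence-like) curve and an extrinsic (environmental-hazard) curve, and each expected life span is the area under the corresponding curve. The functional forms below are illustrative, not the fitted vitality model:

```python
import numpy as np

t = np.linspace(0.0, 120.0, 1201)          # age in years
dt = t[1] - t[0]
s_intrinsic = np.exp(-(t / 85.0) ** 8)     # senescence-dominated survival
s_extrinsic = np.exp(-0.002 * t)           # constant extrinsic hazard
s_total = s_intrinsic * s_extrinsic        # two-process total survival

# Expected life span (ELS) = area under each survival curve.
for label, s in [("intrinsic", s_intrinsic),
                 ("extrinsic", s_extrinsic),
                 ("total", s_total)]:
    print(f"{label} ELS = {s.sum() * dt:.1f} yr")
```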
NASA Astrophysics Data System (ADS)
Butell, Bart
1996-02-01
Microsoft's Visual Basic (VB) and Borland's Delphi provide an extremely robust programming environment for delivering multimedia solutions for interactive kiosks, games and titles. Their object-oriented use of standard and custom controls enables a user to build extremely powerful applications. A multipurpose, database-enabled programming environment that can provide an event-driven interface functions as a multimedia kernel. This kernel can provide a variety of authoring solutions (e.g. a timeline-based model similar to Macromedia Director or a node authoring model similar to IconAuthor). At the heart of the kernel is a set of low-level multimedia components providing object-oriented interfaces for graphics, audio, video and imaging. Data preparation tools (e.g., layout, palette and sprite editors) could be built to manage the media database. The flexible interface of VB allows the construction of an infinite number of user models. The proliferation of these models within a popular, easy-to-use environment will allow the vast developer segment of 'producer' types to bring their ideas to the market. This is the key to building exciting, content-rich multimedia solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Tschanz, J.; Monarch, M.
1996-05-01
The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
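A material-balance emission estimate of the kind AQUIS automates can be written in one line: pollutant mass emitted equals material usage times pollutant content, less any fraction retained or chemically bound during application (the correction needed for polyurethane paints mentioned above). A hedged sketch with illustrative numbers:

```python
def voc_emissions_lb(gallons_used, voc_lb_per_gal, reacted_fraction=0.0):
    # Simple material balance: usage x VOC content x fraction actually emitted.
    # reacted_fraction approximates solvent mass bound during cure (e.g. for
    # polyurethane coatings); 0.0 recovers the plain material balance.
    return gallons_used * voc_lb_per_gal * (1.0 - reacted_fraction)

print(voc_emissions_lb(120, 3.5))                        # plain balance: 420 lb
print(voc_emissions_lb(120, 3.5, reacted_fraction=0.2))  # cure-corrected: 336 lb
```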
ERIC Educational Resources Information Center
Shum, Brenda
2016-01-01
Data plays a starring role in promoting educational equity, and data-driven decision making begins with good state policies. With the recent passage of the Every Student Succeeds Act (ESSA) and a proposed federal rule to address racial disproportionality in special education, states will shoulder increased responsibility for eliminating…
Assessment of the SFC database for analysis and modeling
NASA Technical Reports Server (NTRS)
Centeno, Martha A.
1994-01-01
SFC is one of the four clusters that make up the Integrated Work Control System (IWCS), which will integrate the shuttle processing databases at Kennedy Space Center (KSC). The IWCS framework will enable communication among the four clusters and add new data collection protocols. The Shop Floor Control (SFC) module has been operational for two and a half years; however, at this stage, automatic links to the other three modules have not been implemented yet, except for a partial link to IOS (CASPR). SFC revolves around a DB/2 database with PFORMS acting as the database management system (DBMS). PFORMS is an off-the-shelf DB/2 application that provides a set of data entry screens and query forms. The main dynamic entity in the SFC and IOS database is a task; thus, the physical storage location and update privileges are driven by the status of the WAD. As we explored the SFC values, we realized that there was much to do before actually engaging in continuous analysis of the SFC data. Halfway into this effort, it was realized that full-scale analysis would have to be a future third phase of this effort. So, we concentrated on getting to know the contents of the database, and on establishing an initial set of tools to start the continuous analysis process. Specifically, we set out to: (1) provide specific procedures for statistical models, so as to enhance the TP-OAO office analysis and modeling capabilities; (2) design a data exchange interface; (3) prototype the interface to provide inputs to SCRAM; and (4) design a modeling database. These objectives were set with the expectation that, if met, they would provide former TP-OAO engineers with tools that would help them demonstrate the importance of process-based analyses. The latter, in turn, will help them obtain the cooperation of various organizations in charting out their individual processes.
Ge, Cheng-Hao; Sun, Na; Kang, Qi; Ren, Long-Fei; Ahmad, Hafiz Adeel; Ni, Shou-Qing; Wang, Zhibin
2018-03-01
A distinct shift of the bacterial community driven by organic matter (OM) and powdered activated carbon (PAC) was discovered in the simultaneous anammox and denitrification (SAD) process, which was operated in an anti-fouling submerged anaerobic membrane bioreactor. Based on anammox performance, an optimal OM dose (50 mg/L) was advised to start up the SAD process successfully. The results of qPCR and high-throughput sequencing analysis indicated that OM played a key role in microbial community evolution, impelling denitrifiers to challenge the dominance of anammox. The addition of PAC not only mitigated membrane fouling, but also stimulated the enrichment of denitrifiers, accounting for the predominant phylum changing from Planctomycetes to Proteobacteria in the SAD process. Functional gene predictions based on the KEGG and COG databases showed that the expression of full denitrification functional genes was highly promoted in R_C, demonstrating an enhanced full denitrification pathway driven by OM and PAC under a low COD/N value (0.11). Copyright © 2017 Elsevier Ltd. All rights reserved.
Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M
2018-03-05
The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterization of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize responses to treatments and (2) in-depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and access to information related to interactions, mechanisms of action, functions, etc., which ultimately provides a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP, based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. Our integrative analytical approach provides novel means for a systematic integrative analysis of heterogeneous data types in the development of complex botanicals such as polyphenols for eventual clinical and translational applications.
Overview of Historical Earthquake Document Database in Japan and Future Development
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Satake, K.
2014-12-01
In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not effectively utilized by researchers because they are contaminated with low-reliability historical records and are difficult to search by keyword or date. To overcome these problems and to promote historical earthquake studies in Japan, construction of text databases started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its compilers investigated the source books or original texts of historical literature, emended the descriptions, and assigned the reliability of each historical document on the basis of written age. Another effort compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast in Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed text and seismic-intensity databases, which are now published on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. Therefore, we plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to that of the previous ones. We also plan to combine this database with a liquefaction-traces database, to be constructed by another research program, by adding the location information described in the historical documents. The constructed database would be utilized to estimate the distributions of seismic intensities and tsunami heights.
Qualitative and Quantitative Pedigree Analysis: Graph Theory, Computer Software, and Case Studies.
ERIC Educational Resources Information Center
Jungck, John R.; Soderberg, Patti
1995-01-01
Presents a series of elementary mathematical tools for re-representing pedigrees, pedigree generators, pedigree-driven database management systems, and case studies for exploring genetic relationships. (MKR)
NASA Astrophysics Data System (ADS)
Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.
2016-12-01
There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increase. Risk models from insurance companies and flood models from hydrologists must be calibrated against flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, to improve these models, both traditional methods of flood prediction from physically based models and data-driven techniques, such as machine learning, require spatial flood observations to validate model outputs and quantify uncertainty. A key dataset that is missing for flood model validation is a global historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO), which has catalogued 4320 floods (1985-2015) but has only mapped 5% of them. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of-its-kind, comprehensive, global, and historical geospatial database of flood events. To do so, we combine water detection algorithms on MODIS and Landsat 5, 7, and 8 imagery in Google Earth Engine to map discrete flood events. The created database will be available in the Earth Engine Catalogue for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
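The core water-detection step in such pipelines can be illustrated with a spectral-index threshold. Below is a minimal sketch, assuming two-date NDWI change detection on green/NIR reflectance arrays; the actual DFO/Earth Engine algorithm, its band choices, and its thresholds are not specified in the abstract, so the function names and cutoff here are purely illustrative.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Classify water pixels from green and near-infrared reflectance using
    the Normalized Difference Water Index (NDWI, McFeeters 1996). `green`
    and `nir` are 2-D reflectance arrays; the threshold is scene-dependent."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

def flood_extent(green_pre, nir_pre, green_flood, nir_flood):
    """Hypothetical two-date change detection: pixels that are wet in the
    flood image but dry in a pre-flood reference approximate flood extent."""
    return ndwi_water_mask(green_flood, nir_flood) & ~ndwi_water_mask(green_pre, nir_pre)
```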
Borrelli, Belinda; Ritterband, Lee M
2015-12-01
This special issue is intended to promote a discussion of eHealth and mHealth and their connection with health psychology. "eHealth" generally refers to the use of information technology, including the Internet, digital gaming, virtual reality, and robotics, in the promotion, prevention, treatment, and maintenance of health. "mHealth" refers to mobile and wireless applications, including text messaging, apps, wearable devices, remote sensing, and the use of social media such as Facebook and Twitter, in the delivery of health-related services. This special issue includes 11 articles that begin to address the need for more rigorous methodology, valid assessment, innovative interventions, and increased access to evidence-based programs and interventions. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Giancardo, Luca; Li, Yaquin
2013-01-01
Automated retina image analysis has reached a high level of maturity in recent years, and thus the question of how validation is performed in these systems is beginning to grow in importance. One application of retina image analysis is in telemedicine, where an automated system could enable the automated detection of diabetic retinopathy and other eye diseases as a low-cost method for broad-based screening. In this work we discuss our experiences in developing a telemedical network for retina image analysis, including our progression from a manual diagnosis network to a more fully automated one. We pay special attention to how validations of our algorithm steps are performed, both using data from the telemedicine network and other public databases.
Introducing GFWED: The Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations differed most at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding a DC of 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibration of FWI-based fire prediction models.
76 FR 4072 - Registration of Claims of Copyright
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-24
... registration of automated databases that predominantly consist of photographs, and applications for group... to submit electronic applications to register copyrights of such photographic databases or of groups... automated databases, an electronic application for group registration of an automated database that consists...
MONA – Interactive manipulation of molecule collections
2013-01-01
Working with small-molecule datasets is a routine task for cheminformaticians and chemists. The analysis and comparison of vendor catalogues and the compilation of promising candidates as starting points for screening campaigns are but a few very common applications. The workflows applied for this purpose usually consist of multiple basic cheminformatics tasks such as checking for duplicates or filtering by physico-chemical properties. Pipelining tools allow users to create and change such workflows without much effort, but usually do not support interventions once the pipeline has been started. In many contexts, however, the best suited workflow is not known in advance, thus making it necessary to take the results of the previous steps into consideration before proceeding. To support intuition-driven processing of compound collections, we developed MONA, an interactive tool that has been designed to prepare and visualize large small-molecule datasets. Using an SQL database, common cheminformatics tasks such as analysis and filtering can be performed interactively with various methods for visual support. Great care was taken in creating a simple, intuitive user interface which can be instantly used without any setup steps. MONA combines the interactivity of molecule database systems with the simplicity of pipelining tools, thus enabling the case-to-case application of chemistry expert knowledge. The current version is available free of charge for academic use and can be downloaded at http://www.zbh.uni-hamburg.de/mona. PMID:23985157
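The duplicate check and property filter that MONA chains interactively can be sketched in a few lines. A minimal sketch, assuming RDKit for parsing and canonicalization; MONA's own SQL-backed implementation is not described in detail here, so the function name and molecular-weight cutoff are illustrative.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def dedupe_and_filter(smiles_list, max_mw=500.0):
    """Remove duplicates via canonical SMILES, then filter by molecular
    weight -- two of the basic cheminformatics tasks described above."""
    seen, kept = set(), []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:                  # skip unparseable entries
            continue
        can = Chem.MolToSmiles(mol)      # canonical form identifies duplicates
        if can in seen:
            continue
        seen.add(can)
        if Descriptors.MolWt(mol) <= max_mw:
            kept.append(can)
    return kept
```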
Data-Driven Belief Revision in Children and Adults
ERIC Educational Resources Information Center
Masnick, Amy M.; Klahr, David; Knowles, Erica R.
2017-01-01
The ability to use numerical evidence to revise beliefs about the physical world is an essential component of scientific reasoning that begins to develop in middle childhood. In 2 studies, we explored how data variability and consistency with participants' initial beliefs about causal factors associated with pendulums affected their ability to…
Initial Morphological Learning in Preverbal Infants
ERIC Educational Resources Information Center
Marquis, Alexandra; Shi, Rushen
2012-01-01
How do children learn the internal structure of inflected words? We hypothesized that bound functional morphemes begin to be encoded at the preverbal stage, driven by their frequent occurrence with highly variable roots, and that infants in turn use these morphemes to interpret other words with the same inflections. Using a preferential looking…
The Short and Active History of The Agnew Group
ERIC Educational Resources Information Center
Bronner, Michael
2007-01-01
The field of business education has been driven by the needs of society since the beginnings of the nation's history--from apprenticeship training, to factory vestibule settings, to the emergence of the for-profit private business schools, to specialized vocational high schools, to the comprehensive secondary school, to business teacher…
ERIC Educational Resources Information Center
Lewis, Ann
2008-01-01
Reidun Tangen begins by reviewing interest in children's "voice" (encompassing the consumer-driven, rights-based, etc). The main body of her paper examines the philosophical underpinnings of child voice in the research context and, in particular, various interpretations of "the subject" (i.e., the knower) and what it is that is known (i.e., the…
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.
Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie
2013-01-01
Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages the use of a relational database, the Java Message Service (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the application are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
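The dispatch-and-reschedule loop at the heart of such a system can be sketched with standard queues. A minimal single-process sketch, assuming `work_orders` and `workers` are queues the caller populates; the real WCE is a distributed Java/JMS system, so this is a conceptual analogue only, and all names are illustrative.

```python
import queue

def dispatcher(work_orders, workers, results, max_retries=2):
    """De-queue work orders, hand each to the next available worker, store
    results, and re-queue failures so a (possibly) different worker can
    retry -- mirroring how WCE reschedules failed orders."""
    while True:
        order = work_orders.get()
        if order is None:               # sentinel: shut down
            return
        worker = workers.get()          # block until some worker is free
        try:
            results.put((order["id"], worker(order["payload"])))
        except Exception:
            order["tries"] = order.get("tries", 0) + 1
            if order["tries"] <= max_retries:
                work_orders.put(order)  # reschedule the failed order
        finally:
            workers.put(worker)         # register the worker as free again

# Example wiring: two "workers" (plain callables) and one work order.
orders, pool, out = queue.Queue(), queue.Queue(), queue.Queue()
pool.put(str.upper); pool.put(str.lower)
orders.put({"id": 1, "payload": "telemetry block"}); orders.put(None)
dispatcher(orders, pool, out)
print(out.get())   # -> (1, 'TELEMETRY BLOCK')
```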
NMRPro: an integrated web component for interactive processing and visualization of NMR spectra.
Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi
2016-07-01
The popularity of using NMR spectroscopy in metabolomics and natural products has driven the development of an array of NMR spectral analysis tools and databases. Particularly, web applications are well used recently because they are platform-independent and easy to extend through reusable web components. Currently available web applications provide the analysis of NMR spectra. However, they still lack the necessary processing and interactive visualization functionalities. To overcome these limitations, we present NMRPro, a web component that can be easily incorporated into current web applications, enabling easy-to-use online interactive processing and visualization. NMRPro integrates server-side processing with client-side interactive visualization through three parts: a Python package to efficiently process large NMR datasets on the server side, a Django app managing server-client interaction, and SpecdrawJS for client-side interactive visualization. Demo and installation instructions are available at http://mamitsukalab.org/tools/nmrpro/. Contact: mohamed@kuicr.kyoto-u.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
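The server-side processing referred to above typically means apodization, zero-filling, and Fourier transform of the raw FID. A minimal numpy sketch under those assumptions; the function name and defaults are illustrative, not NMRPro's actual API.

```python
import numpy as np

def process_fid(fid, sw, lb=1.0):
    """Basic 1-D NMR processing: exponential line broadening (lb, in Hz),
    zero-filling to at least twice the length, Fourier transform, and
    frequency-axis construction. `sw` is the spectral width in Hz."""
    t = np.arange(fid.size) / sw                   # acquisition time axis
    apodized = fid * np.exp(-np.pi * lb * t)       # exponential window
    n = 1 << (int(np.ceil(np.log2(fid.size))) + 1) # zero-fill target size
    spectrum = np.fft.fftshift(np.fft.fft(apodized, n))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / sw))
    return freqs, spectrum
```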
Resistive flex sensors: a survey
NASA Astrophysics Data System (ADS)
Saggio, Giovanni; Riillo, Francesco; Sbernini, Laura; Quitadamo, Lucia Rita
2016-01-01
Resistive flex sensors can be used to measure bending or flexing with relatively little effort and a relatively low budget. Their lightness, compactness, robustness, measurement effectiveness and low power consumption make these sensors useful for manifold applications in diverse fields. Here, we provide a comprehensive survey of resistive flex sensors, taking into account their working principles, manufacturing aspects, electrical characteristics and equivalent models, useful front-end conditioning circuitry, and physico-bio-chemical aspects. Particular effort is devoted to reporting on and analyzing several applications of resistive flex sensors, related to the measurement of body position and motion, and to the implementation of artificial devices. In relation to the human body, we consider the utilization of resistive flex sensors for the measurement of physical activity and for the development of interaction/interface devices driven by human gestures. Concerning artificial devices, we deal with applications related to the automotive field, robots, orthosis and prosthesis, musical instruments and measuring tools. The presented literature is collected from different sources, including bibliographic databases, company press releases, patents, master's theses and PhD theses.
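Because a resistive flex sensor is read as a variable resistance, the usual front-end conditioning is a simple voltage divider feeding an ADC. A minimal sketch, assuming the sensor on the high side of the divider and illustrative calibration resistances; as the survey discusses, real sensors are only approximately linear and need per-unit calibration.

```python
def flex_resistance(v_out, v_in=5.0, r_fixed=10e3):
    """Recover the sensor's resistance from the mid-point voltage of a
    divider (sensor between v_in and the node, fixed resistor to ground):
    v_out = v_in * r_fixed / (r_sensor + r_fixed)."""
    return r_fixed * (v_in - v_out) / v_out

def bend_angle(v_out, r_flat=25e3, r_bent=100e3, angle_max=90.0):
    """Linear interpolation between two calibration points (flat vs fully
    bent). Resistance values here are hypothetical; a lookup table built
    from measured characterization data would be more faithful."""
    frac = (flex_resistance(v_out) - r_flat) / (r_bent - r_flat)
    return max(0.0, min(1.0, frac)) * angle_max
```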
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
Detection of Explosive Devices using X-ray Backscatter Radiation
NASA Astrophysics Data System (ADS)
Faust, Anthony A.
2002-09-01
It is our goal to develop a coded aperture based X-ray backscatter imaging detector that will provide sufficient speed, contrast and spatial resolution to detect Antipersonnel Landmines and Improvised Explosive Devices (IED). While our final objective is to field a hand-held detector, we have currently constrained ourselves to a design that can be fielded on a small robotic platform. Coded aperture imaging has been used by the observational gamma astronomy community for a number of years. However, it has been recent advances in the field of medical nuclear imaging that have allowed for the application of the technique to a backscatter scenario. In addition, driven by requirements in medical applications, advances in X-ray detection are continually being made, and detectors are now being produced that are faster, cheaper and lighter than those of only a decade ago. With these advances, a coded aperture hand-held imaging system has only recently become a possibility. This paper will begin with an introduction to the technique, identify recent advances which have made this approach possible, present a simulated example case, and conclude with a discussion on future work.
Improved Information Retrieval Performance on SQL Database Using Data Adapter
NASA Astrophysics Data System (ADS)
Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.
2018-02-01
The NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications grows. Most systems still use relational databases (RDBs), but as data volumes increase each year, systems increasingly handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. The query syntax of a NoSQL database differs from that of an SQL database, therefore requiring code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged: it provides methods that synchronize SQL databases with NoSQL databases, and it exposes an interface that applications can use to run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct-access query approach, in which the system allows the application to accept queries while the synchronization process is in progress. Tests of the data adapter showed that it can synchronize between the SQL database (MySQL) and the NoSQL database (Apache HBase). The system's memory usage ranged from 40% to 60%, and its processor usage from 10% to 90%. The tests also showed the NoSQL database outperforming the SQL database.
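The data-adapter idea, applications keep issuing SQL while writes are mirrored into a column store, can be sketched compactly. A minimal in-process sketch, with sqlite3 standing in for MySQL and a nested dict standing in for Apache HBase; a real deployment would use MySQL drivers and an HBase client such as happybase, and the class and method names here are illustrative only.

```python
import sqlite3

class DataAdapter:
    """Toy adapter: SQL queries pass through unchanged, while writes can be
    mirrored into a NoSQL-style column store during synchronization."""
    def __init__(self):
        self.sql = sqlite3.connect(":memory:")   # stand-in for MySQL
        self.nosql = {}                          # {(table, row_key): {column: value}}

    def execute(self, query, params=()):
        """Run an unmodified SQL query -- the application-facing interface."""
        cur = self.sql.execute(query, params)
        self.sql.commit()
        return cur.fetchall()

    def put(self, table, row_key, columns):
        """Mirror a write into the column-store side (the sync direction)."""
        self.nosql.setdefault((table, row_key), {}).update(columns)

adapter = DataAdapter()
adapter.execute("CREATE TABLE users (id INTEGER, name TEXT)")
adapter.execute("INSERT INTO users VALUES (?, ?)", (1, "ana"))
adapter.put("users", "1", {"info:name": "ana"})   # mirrored NoSQL write
print(adapter.execute("SELECT * FROM users"))     # -> [(1, 'ana')]
```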
GOVERNING GENETIC DATABASES: COLLECTION, STORAGE AND USE
Gibbons, Susan M.C.; Kaye, Jane
2008-01-01
This paper provides an introduction to a collection of five papers, published as a special symposium journal issue, under the title: “Governing Genetic Databases: Collection, Storage and Use”. It begins by setting the scene, to provide a backdrop and context for the papers. It describes the evolving scientific landscape around genetic databases and genomic research, particularly within the biomedical and criminal forensic investigation fields. It notes the lack of any clear, coherent or coordinated legal governance regime, either at the national or international level. It then identifies and reflects on key cross-cutting issues and themes that emerge from the five papers, in particular: terminology and definitions; consent; special concerns around population genetic databases (biobanks) and forensic databases; international harmonisation; data protection; data access; boundary-setting; governance; and issues around balancing individual interests against public good values. PMID:18841252
A RESEARCH DATABASE FOR IMPROVED DATA MANAGEMENT AND ANALYSIS IN LONGITUDINAL STUDIES
BIELEFELD, ROGER A.; YAMASHITA, TOYOKO S.; KEREKES, EDWARD F.; ERCANLI, EHAT; SINGER, LYNN T.
2014-01-01
We developed a research database for a five-year prospective investigation of the medical, social, and developmental correlates of chronic lung disease during the first three years of life. We used the Ingres database management system and the Statit statistical software package. The database includes records containing 1300 variables each, the results of 35 psychological tests, each repeated five times (providing longitudinal data on the child, the parents, and behavioral interactions), both raw and calculated variables, and both missing and deferred values. The four-layer menu-driven user interface incorporates automatic activation of complex functions to handle data verification, missing and deferred values, static and dynamic backup, determination of calculated values, display of database status, reports, bulk data extraction, and statistical analysis. PMID:7596250
77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-06
... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...
Bartolo, Ramón; Merchant, Hugo
2015-03-18
β oscillations in the basal ganglia have been associated with interval timing. We recorded the putaminal local field potentials (LFPs) from monkeys performing a synchronization-continuation task (SCT) and a serial reaction-time task (RTT), where the animals produced regularly and irregularly paced tapping sequences, respectively. We compared the activation profile of β oscillations between tasks and found transient bursts of β activity in both the RTT and SCT. During the RTT, β power was higher at the beginning of the task, especially when LFPs were aligned to the stimuli. During the SCT, β was higher during the internally driven continuation phase, especially for tap-aligned LFPs. Interestingly, a set of LFPs showed an initial burst of β at the beginning of the SCT, similar to the RTT, followed by a decrease in β oscillations during the synchronization phase, to finally rebound during the continuation phase. The rebound during the continuation phase of the SCT suggests that the corticostriatal circuit is involved in the control of internally driven motor sequences. In turn, the transient bursts of β activity at the beginning of both tasks suggest that the basal ganglia produce a general initiation signal that engages the motor system in different sequential behaviors. Copyright © 2015 the authors 0270-6474/15/354635-06$15.00/0.
State of the art of geoscience libraries and information services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruett, N.J.
Geoscience libraries and geoscience information services are closely related. Both are trying to meet the needs of geoscientists for information and data. Both are also being affected by many trends: increased availability of personal computers; decreased costs of machine-readable storage; increased availability of maps in digital format (Pallatto, 1986); progress in graphic displays and in developing Geographic Information Systems (GIS) (Kelly and Phillips, 1986); developments in artificial intelligence; and the availability of new formats (e.g., CD-ROM). Some additional factors are at work changing the role of libraries: libraries are coming to recognize the impossibility of collecting everything and the validity of Bradford's Law; unobtrusive studies of library reference services have pointed out that only 50% of the questions are answered correctly; it is clear that the number of databases is increasing, although good figures for specifically geoscience databases are not available; lists of numeric databases are beginning to appear; evaluative (as opposed to purely descriptive) reviews of available bibliographic databases are beginning to appear; more and more libraries are getting online catalogs, and results of studies of users of online catalogs are being used to improve catalog design; and research is raising consciousness about the value of information. All these trends are having or will have an effect on geoscience information.
Schafer, Ilana J; Knudsen, Erik; McNamara, Lucy A; Agnihotri, Sachin; Rollin, Pierre E; Islam, Asad
2016-10-15
The Epi Info Viral Hemorrhagic Fever application (Epi Info VHF) was developed in response to challenges managing outbreak data during four 2012 filovirus outbreaks. Development goals included combining case and contact data in a relational database, facilitating data-driven contact tracing, and improving outbreak data consistency and use. The application was first deployed in Guinea, when the West Africa Ebola epidemic was detected, in March 2014, and has been used in 7 African countries and 2 US states. Epi Info VHF enabled reporting of compatible data from multiple countries, contributing to international Ebola knowledge. However, challenges were encountered in accommodating the epidemic's unexpectedly large magnitude, addressing country-specific needs within 1 software product, and using the application in settings with limited Internet access and information technology support. Use of Epi Info VHF in the West Africa Ebola epidemic highlighted the fundamental importance of good data management for effective outbreak response, regardless of the software used. Published by Oxford University Press for the Infectious Diseases Society of America 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Event-driven time-optimal control for a class of discontinuous bioreactors.
Moreno, Jaime A; Betancur, Manuel J; Buitrón, Germán; Moreno-Andrade, Iván
2006-07-05
Discontinuous bioreactors may be further optimized for processing inhibitory substrates using a convenient fed-batch mode. To do so, the filling rate must be controlled in such a way as to push the reaction rate to its maximum value, by increasing the substrate concentration just up to the point where inhibition begins. However, an exact optimal controller requires measuring several variables (e.g., substrate concentrations in the feed and in the tank) and also good model knowledge (e.g., yield and kinetic parameters), requirements rarely satisfied in real applications. An environmentally important case that exemplifies all these handicaps is toxicant wastewater treatment. There, the lack of practical online pollutant sensors may allow unforeseen high shock loads to be fed to the bioreactor, causing biomass inhibition that slows down the treatment process and, in extreme cases, even renders the biological process useless. In this work an event-driven time-optimal control (ED-TOC) is proposed to circumvent these limitations. We show how to detect a "there is inhibition" event by using some computable function of the available measurements. This event drives the ED-TOC to stop the filling. Later, by detecting the symmetric event, "there is no inhibition," the ED-TOC may restart the filling. A fill-react cycling then maintains the process safely hovering near its maximum reaction rate, allowing a robust and practically time-optimal operation of the bioreactor. An experimental case study of a wastewater treatment application is presented. There, the dissolved oxygen concentration was used to detect the events needed to drive the controller. (c) 2006 Wiley Periodicals, Inc.
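The switching logic of the ED-TOC reduces to a two-state event handler. A minimal sketch of that logic, with an illustrative event detector based on the dissolved-oxygen (DO) signal the experiments used; the threshold and signal processing are assumptions, not the paper's exact detector.

```python
def ed_toc_step(filling, inhibition_detected):
    """Core ED-TOC switching rule: stop filling when the 'there is
    inhibition' event fires, resume when the symmetric 'no inhibition'
    event fires, so the reactor hovers near its maximum reaction rate."""
    if filling and inhibition_detected:
        return False      # event: inhibition -> stop the feed
    if not filling and not inhibition_detected:
        return True       # event: inhibition cleared -> resume the feed
    return filling        # otherwise keep the current mode

def inhibition_event(do_slope, threshold=0.05):
    """Toy event detector: under constant aeration, a sustained rise in
    dissolved oxygen (falling oxygen uptake) signals inhibition.
    `do_slope` is a filtered time-derivative of DO; threshold is illustrative."""
    return do_slope > threshold
```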
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
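The model-driven step, a small declarative model expanded by generators into application code, can be made concrete with a toy generator. A minimal sketch, assuming a simplified XML entity model and a tiny type map; the actual MOLGENIS modeling language and generator templates are far richer, so the schema and names below are illustrative only.

```python
import xml.etree.ElementTree as ET

MODEL = """<molgenis>
  <entity name="Sample">
    <field name="id" type="int"/>
    <field name="tissue" type="string"/>
  </entity>
</molgenis>"""

def generate_ddl(model_xml):
    """Translate an XML entity model into SQL DDL, analogous to the way
    MOLGENIS generators expand one model into database, UI, and API code."""
    types = {"int": "INTEGER", "string": "VARCHAR(255)"}  # illustrative type map
    ddl = []
    for entity in ET.fromstring(model_xml).iter("entity"):
        cols = ", ".join(
            f'{f.get("name")} {types[f.get("type")]}'
            for f in entity.iter("field"))
        ddl.append(f'CREATE TABLE {entity.get("name")} ({cols});')
    return "\n".join(ddl)

print(generate_ddl(MODEL))  # -> CREATE TABLE Sample (id INTEGER, tissue VARCHAR(255));
```

The point of the pattern is leverage: one short model regenerates every layer consistently, which is why the authors report 500 lines of XML replacing roughly 15,000 lines of hand-written code.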
20 CFR 218.14 - When a child annuity begins.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child annuity begins. 218.14 Section... ANNUITY BEGINNING AND ENDING DATES When an Annuity Begins § 218.14 When a child annuity begins. (a) A child annuity begins on the later of either the date chosen by the applicant or the earliest date...
20 CFR 218.14 - When a child annuity begins.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child annuity begins. 218.14 Section... ANNUITY BEGINNING AND ENDING DATES When an Annuity Begins § 218.14 When a child annuity begins. (a) A child annuity begins on the later of either the date chosen by the applicant or the earliest date...
20 CFR 218.14 - When a child annuity begins.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false When a child annuity begins. 218.14 Section... ANNUITY BEGINNING AND ENDING DATES When an Annuity Begins § 218.14 When a child annuity begins. (a) A child annuity begins on the later of either the date chosen by the applicant or the earliest date...
20 CFR 218.14 - When a child annuity begins.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child annuity begins. 218.14 Section... ANNUITY BEGINNING AND ENDING DATES When an Annuity Begins § 218.14 When a child annuity begins. (a) A child annuity begins on the later of either the date chosen by the applicant or the earliest date...
20 CFR 218.11 - When a spouse annuity begins.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false When a spouse annuity begins. 218.11 Section... ANNUITY BEGINNING AND ENDING DATES When an Annuity Begins § 218.11 When a spouse annuity begins. (a) A spouse annuity begins on the later of either the date chosen by the applicant or the earliest date...
NASA Astrophysics Data System (ADS)
Waters, Tim; Kashi, Amit; Proga, Daniel; Eracleous, Michael; Barth, Aaron J.; Greene, Jenny
2016-08-01
The latest analysis efforts in reverberation mapping are beginning to allow reconstruction of echo images (or velocity-delay maps) that encode information about the structure and kinematics of the broad line region (BLR) in active galactic nuclei (AGNs). Such maps can constrain sophisticated physical models for the BLR. The physical picture of the BLR is often theorized to be a photoionized wind launched from the AGN accretion disk. Previously we showed that the line-driven disk wind solution found in an earlier simulation by Proga and Kallman is virialized over a large distance from the disk. This finding implies that, according to this model, black hole masses can be reliably estimated through reverberation mapping techniques. However, predictions of echo images expected from line-driven disk winds are not available. Here, after presenting the necessary radiative transfer methodology, we carry out the first calculations of such predictions. We find that the echo images are quite similar to those of other virialized BLR models such as randomly orbiting clouds and thin Keplerian disks. We conduct a parameter survey exploring how echo images, line profiles, and transfer functions depend on both the inclination angle and the line opacity. We find that the line profiles are almost always single peaked, while transfer functions tend to have tails extending to large time delays. The outflow, despite being primarily equatorially directed, causes an appreciable blueshifted excess on both the echo image and line profile when seen from lower inclinations (I ≲ 45°). This effect may be observable in low ionization lines such as Hβ.
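The echo image referred to here is the kernel in the standard linearized reverberation-mapping relation between continuum variations and the velocity-resolved line response; stating it makes the terminology concrete (this is the standard formalism, not an equation quoted from the paper):

```latex
\Delta L(v,\,t) \;=\; \int_{0}^{\infty} \Psi(v,\,\tau)\,\Delta C(t-\tau)\,\mathrm{d}\tau
```

Integrating the echo image Ψ(v, τ) over velocity gives the one-dimensional transfer function, and integrating over delay gives the mean line profile, the two projections explored in the parameter survey.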
Polymers for Drug Delivery Systems
Liechty, William B.; Kryscio, David R.; Slaughter, Brandon V.; Peppas, Nicholas A.
2012-01-01
Polymers have played an integral role in the advancement of drug delivery technology by providing controlled release of therapeutic agents in constant doses over long periods, cyclic dosage, and tunable release of both hydrophilic and hydrophobic drugs. From early beginnings using off-the-shelf materials, the field has grown tremendously, driven in part by the innovations of chemical engineers. Modern advances in drug delivery are now predicated upon the rational design of polymers tailored for specific cargo and engineered to exert distinct biological functions. In this review, we highlight the fundamental drug delivery systems and their mathematical foundations and discuss the physiological barriers to drug delivery. We review the origins and applications of stimuli-responsive polymer systems and polymer therapeutics such as polymer-protein and polymer-drug conjugates. The latest developments in polymers capable of molecular recognition or directing intracellular delivery are surveyed to illustrate areas of research advancing the frontiers of drug delivery. PMID:22432577
Flow and Noise Control: Toward a Closer Linkage
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Choudhari, Meelan M.; Joslin, Ronald D.
2002-01-01
Motivated by growing demands for aircraft noise reduction and for revolutionary new aerovehicle concepts, the late twentieth century witnessed the beginning of a shift from single-discipline research toward an increased emphasis on harnessing the potential of flow and noise control as implemented in a more fully integrated, multidisciplinary framework. At the same time, technologies for developing radically new aerovehicles, which promise quantum-leap benefits in cost, safety, and performance along with environmental friendliness, have appeared on the horizon. Transitioning new technologies to commercial applications will also require coupling further advances in traditional areas of aeronautics with intelligent exploitation of nontraditional and interdisciplinary technologies. Physics-based modeling and simulation are crucial enabling capabilities for synergistic linkage of flow and noise control. In these very fundamental ways, flow and noise control are being driven to be more closely linked during the early design phases of a vehicle concept for optimal and mutual noise and performance benefits.
Guiding brine shrimp through mazes by solving reaction diffusion equations
NASA Astrophysics Data System (ADS)
Singal, Krishma; Fenton, Flavio
Excitable systems driven by reaction-diffusion equations have been shown not only to find solutions to mazes but also to find the shortest path between the beginning and the end of the maze. In this talk we describe how we can use the FitzHugh-Nagumo model, a generic model for excitable media, to solve a maze by varying the basin of attraction of its two fixed points. We demonstrate how two-dimensional mazes are solved numerically using a Java applet and then accelerated to run in real time by using graphics processors (GPUs). An application of this work is shown by guiding phototactic brine shrimp through a maze solved by the algorithm. Once the path is obtained, an Arduino directs the shrimp through the maze using light from LEDs placed on the floor of the maze. This method, running in real time, could eventually be used for guiding robots and cars through traffic.
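The mechanism, launch an excitation wave from the entrance and read the shortest path off the front's first-arrival times, fits in a short simulation. A minimal numpy sketch using textbook FitzHugh-Nagumo parameters and a crude wall treatment; the authors' exact parameters, boundary handling, and GPU implementation are not given in the abstract.

```python
import numpy as np

def solve_maze(walls, start, steps=20000, dt=0.04, D=1.0, eps=0.02):
    """Propagate a FitzHugh-Nagumo wave from `start` and record first-arrival
    times; steepest descent on the arrival field from the exit traces the
    shortest path. `walls` is a boolean grid (True = wall)."""
    v = np.full(walls.shape, -1.2)
    w = (v + 0.7) / 0.8                   # start on the resting state
    v[start] = 2.0                        # stimulate the maze entrance
    arrival = np.full(walls.shape, np.inf)
    for step in range(steps):
        lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
               np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
        v_new = v + dt * (v - v**3 / 3.0 - w + D * lap)
        w += dt * eps * (v + 0.7 - 0.8 * w)
        v = v_new
        v[walls], w[walls] = -1.2, -0.625   # clamp walls (crude no-flux)
        arrival[(v > 0) & np.isinf(arrival)] = step * dt  # front arrival time
    return arrival
```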
Why we need more basic biology research, not less.
Botstein, David
2012-11-01
Much of the spectacular progress in biomedical science over the last half-century is the direct consequence of the work of thousands of basic scientists whose primary goal was understanding of the fundamental working of living things. Despite this, many politicians, funders, and even scientists have come to believe that the pace of successful applications to medical diagnosis and therapy is limited by our willingness to focus directly on human health, rather than a continuing deficit of understanding. By this theory, curiosity-driven research, aimed at understanding, is no longer important or even useful. What is advocated instead is "translational" research aimed directly at treating disease. I believe this idea to be deeply mistaken. Recent history suggests instead that what we have learned in the last 50 years is only the beginning. The way forward is to invest more in basic science, not less.
Weng, Yi-Hao; Chen, Chiehfeng; Kuo, Ken N; Yang, Chun-Yuh; Lo, Heng-Lien; Chen, Kee-Hsin; Chiu, Ya-Wen
2015-01-01
Background Although evidence-based practice (EBP) has been widely investigated, few studies have investigated its correlation with a clinical nursing ladder system. The current national study evaluates whether EBP implementation has been incorporated into the clinical ladder system. Methods A cross-sectional questionnaire survey of registered nurses was conducted nationwide among regional hospitals of Taiwan from January to April 2011. Subjects were categorized into beginning nurses (N1 and N2) and advanced nurses (N3 and N4) by the clinical ladder system. A multivariate logistic regression model was used to adjust for possible confounding demographic factors. Results Valid postal questionnaires were collected from 4,206 nurses, including 2,028 N1, 1,595 N2, 412 N3, and 171 N4 nurses. Advanced nurses were more aware of EBP than beginning nurses (p < 0.001; 90.7% vs. 78.0%). In addition, advanced nurses were more likely to hold positive beliefs about and attitudes toward EBP (p < 0.001) and possessed more sufficient knowledge of and skills in EBP (p < 0.001). Furthermore, they more often implemented EBP principles (p < 0.001) and accessed online evidence-based retrieval databases (p < 0.001). The most common motivation for using online databases was self-learning for advanced nurses and positional promotion for beginning nurses. Multivariate logistic regression analyses showed advanced nurses were more aware of EBP, had greater knowledge of and skills in EBP, and more often implemented EBP than beginning nurses. Linking Evidence to Action The awareness of, beliefs in, attitudes toward, knowledge of, skills in, and behaviors of EBP among advanced nurses were better than those among beginning nurses. The data indicate that a clinical ladder system can serve as a useful means to enhance EBP implementation. PMID:25588625
75 FR 57544 - Defense Trade Advisory Group; Notice of Open Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
.... Truman Building, Washington, DC. Entry and registration will begin at 12:30 p.m. Please use the building... Visitor Access Control System (VACS-D) database. Please see the Privacy Impact Assessment for VACS-D at...
Information systems: the key to evidence-based health practice.
Rodrigues, R. J.
2000-01-01
Increasing prominence is being given to the use of best current evidence in clinical practice and health services and programme management decision-making. The role of information in evidence-based practice (EBP) is discussed, together with questions of how advanced information systems and technology (IS&T) can contribute to the establishment of a broader perspective for EBP. The author examines the development, validation and use of a variety of sources of evidence and knowledge that go beyond the well-established paradigm of research, clinical trials, and systematic literature review. Opportunities and challenges in the implementation and use of IS&T and knowledge management tools are examined for six application areas: reference databases, contextual data, clinical data repositories, administrative data repositories, decision support software, and Internet-based interactive health information and communication. Computerized and telecommunications applications that support EBP follow a hierarchy in which systems, tasks and complexity range from reference retrieval and the processing of relatively routine transactions, to complex "data mining" and rule-driven decision support systems. PMID:11143195
Computerized Design Synthesis (CDS), A database-driven multidisciplinary design tool
NASA Technical Reports Server (NTRS)
Anderson, D. M.; Bolukbasi, A. O.
1989-01-01
The Computerized Design Synthesis (CDS) system under development at McDonnell Douglas Helicopter Company (MDHC) is targeted to make revolutionary improvements in both response time and resource efficiency in the conceptual and preliminary design of rotorcraft systems. It makes the accumulated design database and supporting technology analysis results readily available to designers and analysts of technology, systems, and production, and makes powerful design synthesis software available in a user friendly format.
TALYS/TENDL verification and validation processes: Outcomes and recommendations
NASA Astrophysics Data System (ADS)
Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark R.; Koning, Arjan; Rochman, Dimitri
2017-09-01
The TALYS-generated Evaluated Nuclear Data Libraries (TENDL) provide truly general-purpose nuclear data files assembled from the outputs of the T6 nuclear model code system for direct use in both basic physics and engineering applications. The most recent TENDL-2015 version is based on both default and adjusted parameters of the most recent TALYS, TAFIS, TANES, TARES, TEFAL, and TASMAN codes, wrapped into a Total Monte Carlo loop for uncertainty quantification. TENDL-2015 contains complete neutron-incident evaluations for all target nuclides with Z ≤ 116 and half-life longer than 1 second (2809 isotopes with 544 isomeric states), up to 200 MeV, with covariances and all reaction daughter products including isomers with half-life greater than 100 milliseconds. With the added High Fidelity Resonance (HFR) approach, all resonances are unique, following statistical rules. The TENDL-2014/2015 libraries have been validated against standard, evaluated, microscopic and integral cross sections, using a newly compiled UKAEA database of thermal, resonance integral, Maxwellian-averaged, 14 MeV, and various accelerator-driven neutron source spectra. This database has been assembled using the most up-to-date, internationally recognised data sources, including the Atlas of Resonances, CRC, evaluated EXFOR, activation databases, fusion, fission and MACS. Excellent agreement was found, with a small set of errors within the reference databases and TENDL-2014 predictions.
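The Maxwellian averages (MACS) used in that validation database have a standard definition; for a thermal energy kT it reads (standard formula, stated here for completeness rather than quoted from the paper):

```latex
\langle \sigma \rangle_{kT} \;=\; \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}
\int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, \mathrm{d}E
```

where σ(E) is the energy-dependent cross section; the thermal, resonance-integral, and 14 MeV quantities in the same database are likewise spectrum-weighted reductions of σ(E).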
Standardization of databases for AMDB taxi routing functions
NASA Astrophysics Data System (ADS)
Pschierer, C.; Sindlinger, A.; Schiefele, J.
2010-04-01
Input, management, and display of taxi routes on airport moving map displays (AMM) have been covered in various studies in the past. The demonstrated applications are typically based on Aerodrome Mapping Databases (AMDB). Taxi routing functions require specific enhancements, typically in the form of a graph network with nodes and edges modeling all connectivities within an airport, which are not supported by the current AMDB standards. Therefore, the data schemas and data content have been defined specifically for the purpose and test scenarios of these studies. A standardization of the data format for taxi routing information is a prerequisite for turning taxi routing functions into production. The joint RTCA/EUROCAE special committee SC-217, responsible for updating and enhancing the AMDB standards DO-272 [1] and DO-291 [2], is currently in the process of studying different alternatives and defining reasonable formats. Requirements for taxi routing data are primarily driven by depiction concepts for assigned and cleared taxi routes, but also by database size and economic feasibility. The studied concepts are similar to the ones described in the GDF (geographic data files) specification [3], which is used in most car navigation systems today. They include: a highly aggregated graph network of complex features; a modestly aggregated graph network of simple features; and a non-explicit topology of plain AMDB taxi guidance line elements. This paper introduces the different concepts and their advantages and disadvantages.
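A node/edge taxiway graph of the kind described supports routing with any shortest-path algorithm. A minimal Dijkstra sketch, assuming a plain adjacency mapping with edge lengths in meters; since the committee's data format was still undecided, the structure and identifiers below are purely illustrative.

```python
import heapq

def shortest_taxi_route(edges, start, goal):
    """Dijkstra over a taxiway graph. `edges` maps a node id to a list of
    (neighbor, length_m) pairs; assumes `goal` is reachable from `start`."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nbr, length in edges.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path = [goal]
    while path[-1] != start:              # walk predecessors back to start
        path.append(prev[path[-1]])
    return path[::-1]

# Hypothetical fragment: gate G1 to runway holding point H1 via taxiway nodes.
edges = {"G1": [("A", 120.0)], "A": [("B", 300.0)], "B": [("H1", 80.0)]}
print(shortest_taxi_route(edges, "G1", "H1"))   # -> ['G1', 'A', 'B', 'H1']
```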
Golfing with protons: using research grade simulation algorithms for online games
NASA Astrophysics Data System (ADS)
Harold, J.
2004-12-01
Scientists have long known the power of simulations. By modeling a system in a computer, researchers can experiment at will, developing an intuitive sense of how a system behaves. The rapid increase in the power of personal computers, combined with technologies such as Flash, Shockwave and Java, allows us to bring research simulations into the education world by creating exploratory environments for the public. This approach is illustrated by a project funded by a small grant from NSF's Informal Science Education program, through an opportunity that provides education supplements to existing research awards. Using techniques adapted from a magnetospheric research program, several Flash-based interactives have been developed that allow web site visitors to explore the motion of particles in the Earth's magnetosphere. These pieces were folded into a larger Space Weather Center web project at the Space Science Institute (www.spaceweathercenter.org). Rather than presenting these interactives as plasma simulations per se, the research algorithms were used to create games such as "Magneto Mini Golf", where the balls are protons moving in combined electric and magnetic fields. The "holes" increase in complexity, beginning with no fields and progressing towards a simple model of Earth's magnetosphere. The emphasis of the activity is gameplay, but because it is at its core a plasma simulation, the user develops an intuitive sense of charged particle motion as they progress. Meanwhile, the pieces contain embedded assessments that are measurable through a database-driven tracking system. Mining that database not only provides helpful usability information, but allows us to examine whether users are meeting the learning goals of the activities. We will discuss the development and evaluation results of the project, as well as the potential for these types of activities to shift the expectations of what a web site can and should provide educationally.
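The particle motion behind such games reduces to integrating the Lorentz force in combined E and B fields. A minimal sketch using the standard Boris pusher, a common choice because it preserves gyration energy in a pure magnetic field; the abstract does not say which integrator the project actually used.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One Boris-integrator step for a charged particle in static electric
    and magnetic fields (all arguments are 3-vectors except the scalars)."""
    v_minus = v + 0.5 * q_over_m * E * dt        # first half electric kick
    t = 0.5 * q_over_m * B * dt                  # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # full magnetic rotation
    v_new = v_plus + 0.5 * q_over_m * E * dt     # second half electric kick
    return x + v_new * dt, v_new

# A proton gyrating in a uniform field (SI units; values illustrative).
x, v = np.zeros(3), np.array([1e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1e-4])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_over_m=9.58e7, dt=1e-7)
```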
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
TechTracS: NASA's commercial technology management system
NASA Astrophysics Data System (ADS)
Barquinero, Kevin; Cannon, Douglas
1996-03-01
The Commercial Technology Mission is a primary NASA mission, comparable in importance to those in aeronautics and space. This paper will discuss TechTracS, NASA's Commercial Technology Management System, which was put into place in FY 1995 to implement this mission. This system is designed to identify and capture the NASA technologies which have commercial potential into an off-the-shelf database application, and then track the technologies' progress in realizing that commercial potential through collaborations with industry. The management system consists of four stages. The first is to develop an inventory database of the agency's entire technology portfolio and assess it for relevance to the commercial marketplace. Those technologies that are identified as having commercial potential will then be actively marketed to appropriate industries; this is the second stage. The third stage is when a NASA-industry partnership is entered into for the purposes of commercializing the technology. The final stage is to track the technology's success or failure in the marketplace. The collection of this information in TechTracS enables metrics evaluation and can accelerate the establishment of direct contacts between a NASA technologist and an industry technologist. This connection is the beginning of the technology commercialization process.
General Aviation Interior Noise. Part 1; Source/Path Identification
NASA Technical Reports Server (NTRS)
Unruh, James F.; Till, Paul D.; Palumbo, Daniel L. (Technical Monitor)
2002-01-01
There were two primary objectives of the research effort reported herein. The first objective was to identify and evaluate noise source/path identification technology applicable to single engine propeller driven aircraft that can be used to identify interior noise sources originating from structure-borne engine/propeller vibration, airborne propeller transmission, airborne engine exhaust noise, and engine case radiation. The approach taken to identify the contributions of each of these possible sources was first to conduct a Principal Component Analysis (PCA) of an in-flight noise and vibration database acquired on a Cessna Model 182E aircraft. The second objective was to develop and evaluate advanced technology for noise source ranking of interior panel groups such as the aircraft windshield, instrument panel, firewall, and door/window panels within the cabin of a single engine propeller driven aircraft. The technology employed was that of Acoustic Holography (AH). AH was applied to the test aircraft by acquiring a series of in-flight microphone array measurements within the aircraft cabin and correlating the measurements via PCA. The source contributions of the various panel groups leading to the array measurements were then synthesized by solving the inverse problem using the boundary element model.
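The Principal Component Analysis step applied to the flight noise and vibration database amounts to an eigendecomposition of the measurement covariance. A minimal numpy sketch of the generic method, with the row/column convention assumed for illustration; the study's exact channel layout and conditioning are not given here.

```python
import numpy as np

def principal_components(X):
    """PCA of a measurement matrix X (rows = flight conditions or time
    blocks, columns = microphone/accelerometer channels), the kind of
    decomposition used to separate correlated noise-source contributions."""
    Xc = X - X.mean(axis=0)                  # remove per-channel means
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * S                           # projections onto each component
    explained = S**2 / np.sum(S**2)          # variance fraction per component
    return scores, Vt, explained             # Vt rows = component weightings
```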
Bio and health informatics meets cloud : BioVLab as an example.
Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun
2013-01-01
The exponential increase of genomic data brought by the advent of next-generation and third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which is a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever-increasing biological data. As data increase in size, many research organizations start to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security, and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude by suggesting a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.
3 CFR 8856 - Proclamation 8856 of August 31, 2012. National Wilderness Month, 2012
Code of Federal Regulations, 2013 CFR
2013-01-01
... world to begin new lives and develop thriving communities on our lands. Today, our wilderness areas... jobs in tourism and recreation. Our open spaces are more precious today than ever before, and it is... foundation for a comprehensive, community-driven conservation strategy that continues to engage Americans in...
Ethics in Teaching for Democracy and Social Justice
ERIC Educational Resources Information Center
Hytten, Kathy
2015-01-01
In this essay, I offer provocations toward an ethics of teaching for democracy and social justice. I argue that while driven by compelling macro social and political visions, social justice teachers do not pay sufficient attention to the moral dimensions of micro, classroom-level interactions in their work. I begin by describing social justice…
40 CFR 86.1237-85 - Dynamometer runs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Dynamometer runs. 86.1237-85 Section... Methanol-Fueled Heavy-Duty Vehicles § 86.1237-85 Dynamometer runs. (a) The vehicle shall be either driven... the diurnal loss test and beginning of the hot soak preparation run shall not exceed 3 minutes, and...
Data-Driven Robust Control Design: Unfalsified Control
2006-12-01
"...only be determined by fresh information which we shall no doubt find waiting for us." (Sherlock Holmes, Arthur Conan Doyle) 1.0 INTRODUCTION: "Though the... ...begins to twist facts to suit theories instead of theories to suit facts." (Sherlock Holmes, Arthur Conan Doyle) 6.0 ACKNOWLEDGMENT: I thank my current and...
Beginning Teachers' Use of Resources to Enact and Learn from Ambitious Instruction
ERIC Educational Resources Information Center
Stroupe, David
2016-01-01
I investigated how five first-year teachers--all peers from the same science methods class framed around ambitious instruction--used resources to plan and learn in schools that promoted pedagogy anchored around information delivery. The participants engaged in different cycles of resource-driven learning based on the instructional framework they…
A Brief History of ... Semiconductors
ERIC Educational Resources Information Center
Jenkins, Tudor
2005-01-01
The development of studies in semiconductor materials is traced from its beginnings with Michael Faraday in 1833 to the production of the first silicon transistor in 1954, which heralded the age of silicon electronics and microelectronics. Prior to the advent of band theory, work was patchy and driven by needs of technology. However, the arrival…
ERIC Educational Resources Information Center
Malrieu, Denise
1983-01-01
This overview of the ERIC system begins with a brief history of the system; a description of the types and numbers of materials contained in the database; sources of types of information for educators that are not processed by ERIC; and the various publications and reference materials produced by and for the system. The analysis of ERIC usage in…
1981-10-29
are implemented, respectively, in the files "W-Update," "W-combine," and "W-Copy," listed in the appendix. The appendix begins with a typescript of an... the typescript) and the copying process (steps 45 and 46) are shown as human actions in the typescript, but can be performed easily by a "master... for Natural Language, M. Marcus, MIT Press, 1980. APPENDIX: DATABASE UPDATING EXPERIMENT. CONTENTS: Typescript of an experiment in Rosie
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iversen, C.M.; Powell, A.S.; McCormack, M.L.
The second version of the Fine-Root Ecology Database (FRED) is available for download! Download the full FRED 2.0 data set, user guidance document, map, and list of data sources here. Prior to downloading the data, please read and follow the Data Use Guidelines, and it's worth checking out some tips for using FRED before you begin your analyses. Also, see here for a continually updated list of corrections to FRED 2.0.
Mungall, Christopher J; Emmert, David B
2007-07-01
A few years ago, FlyBase undertook to design a new database schema to store Drosophila data. It would fully integrate genomic sequence and annotation data with bibliographic, genetic, phenotypic and molecular data from the literature representing a distillation of the first 100 years of research on this major animal model system. In developing this new integrated schema, FlyBase also made a commitment to ensure that its design was generic, extensible and available as open source, so that it could be employed as the core schema of any model organism data repository, thereby avoiding redundant software development and potentially increasing interoperability. Our question was whether we could create a relational database schema that would be successfully reused. Chado is a relational database schema now being used to manage biological knowledge for a wide variety of organisms, from human to pathogens, especially the classes of information that directly or indirectly can be associated with genome sequences or the primary RNA and protein products encoded by a genome. Biological databases that conform to this schema can interoperate with one another, and with application software from the Generic Model Organism Database (GMOD) toolkit. Chado is distinctive because its design is driven by ontologies. The use of ontologies (or controlled vocabularies) is ubiquitous across the schema, as they are used as a means of typing entities. The Chado schema is partitioned into integrated subschemas (modules), each encapsulating a different biological domain, and each described using representations in appropriate ontologies. To illustrate this methodology, we describe here the Chado modules used for describing genomic sequences. GMOD is a collaboration of several model organism database groups, including FlyBase, to develop a set of open-source software for managing model organism data. The Chado schema is freely distributed under the terms of the Artistic License (http://www.opensource.org/licenses/artistic-license.php) from GMOD (www.gmod.org).
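To make the ontology-driven typing concrete, here is a minimal sketch loosely patterned on Chado's term/feature pattern, using Python's sqlite3; the table and column names are simplified illustrations, not the actual Chado DDL:

    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()

    # Ontology terms live in their own table; entities are "typed"
    # by referencing a term rather than by a hard-coded column.
    cur.executescript("""
    CREATE TABLE cvterm (
        cvterm_id INTEGER PRIMARY KEY,
        cv        TEXT NOT NULL,   -- ontology name, e.g. Sequence Ontology
        name      TEXT NOT NULL    -- term, e.g. 'gene', 'mRNA', 'exon'
    );
    CREATE TABLE feature (
        feature_id INTEGER PRIMARY KEY,
        uniquename TEXT NOT NULL,
        type_id    INTEGER NOT NULL REFERENCES cvterm(cvterm_id)
    );
    """)

    cur.executemany("INSERT INTO cvterm(cv, name) VALUES (?, ?)",
                    [("SO", "gene"), ("SO", "mRNA"), ("SO", "exon")])
    cur.execute("INSERT INTO feature(uniquename, type_id) VALUES ('dpp', 1)")

    # New feature kinds need only a new cvterm row, not a schema change.
    for row in cur.execute("""
        SELECT f.uniquename, t.name
        FROM feature f JOIN cvterm t ON f.type_id = t.cvterm_id"""):
        print(row)

Because entity types are rows in a term table rather than hard-coded columns, adding a new kind of entity requires no schema change, which is what makes such a schema generic and extensible.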
Urban roadway congestion : annual report
DOT National Transportation Integrated Search
1998-01-01
The annual traffic congestion study is an effort to monitor roadway congestion in major urban areas in the United States. The comparisons to other areas and to previous experiences in each area are facilitated by a database that begins in 1982 and in...
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
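A rough sketch of what such a script-driven store might look like, in Python. It assumes gmond's usual behavior of emitting its cluster state as XML to any TCP client (commonly on port 8649), uses SQLite as a stand-in for the MySQL store described in the paper, and treats the element and attribute names of Ganglia's XML dump as assumptions:

    import socket
    import sqlite3
    import xml.etree.ElementTree as ET

    def poll_gmond(host="localhost", port=8649):
        """Read the XML dump that a gmond daemon emits on connect."""
        chunks = []
        with socket.create_connection((host, port), timeout=5) as sock:
            while True:
                data = sock.recv(65536)
                if not data:
                    break
                chunks.append(data)
        return ET.fromstring(b"".join(chunks))

    # SQLite stands in here for the MySQL store used in the paper.
    con = sqlite3.connect("ganglia.db")
    con.execute("""CREATE TABLE IF NOT EXISTS metric (
                       host TEXT, name TEXT, value TEXT, ts TEXT)""")

    root = poll_gmond()
    for host in root.iter("HOST"):
        for m in host.iter("METRIC"):
            con.execute("INSERT INTO metric VALUES (?, ?, ?, datetime('now'))",
                        (host.get("NAME"), m.get("NAME"), m.get("VAL")))
    con.commit()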
Assumption- versus data-based approaches to summarizing species' ranges.
Peterson, A Townsend; Navarro-Sigüenza, Adolfo G; Gordillo, Alejandro
2018-06-01
For conservation decision making, species' geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species' true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species' distributional patterns for conservation applications. © 2016 Society for Conservation Biology.
Generalized Database Management System Support for Numeric Database Environments.
ERIC Educational Resources Information Center
Dominick, Wayne D.; Weathers, Peggy G.
1982-01-01
This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…
26 CFR 1.3-1 - Application of optional tax.
Code of Federal Regulations, 2012 CFR
2012-04-01
.... Tables XVI through XXX apply for taxable years beginning after December 31, 1970. The standard deduction... deduction for Tables XVI through XXX, applicable to taxable years beginning in 1971, is 13 percent. For an...
26 CFR 1.3-1 - Application of optional tax.
Code of Federal Regulations, 2011 CFR
2011-04-01
.... Tables XVI through XXX apply for taxable years beginning after December 31, 1970. The standard deduction... deduction for Tables XVI through XXX, applicable to taxable years beginning in 1971, is 13 percent. For an...
26 CFR 1.3-1 - Application of optional tax.
Code of Federal Regulations, 2013 CFR
2013-04-01
.... Tables XVI through XXX apply for taxable years beginning after December 31, 1970. The standard deduction... deduction for Tables XVI through XXX, applicable to taxable years beginning in 1971, is 13 percent. For an...
26 CFR 1.3-1 - Application of optional tax.
Code of Federal Regulations, 2014 CFR
2014-04-01
.... Tables XVI through XXX apply for taxable years beginning after December 31, 1970. The standard deduction... deduction for Tables XVI through XXX, applicable to taxable years beginning in 1971, is 13 percent. For an...
26 CFR 1.3-1 - Application of optional tax.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... Tables XVI through XXX apply for taxable years beginning after December 31, 1970. The standard deduction... deduction for Tables XVI through XXX, applicable to taxable years beginning in 1971, is 13 percent. For an...
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Objectives Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases version 10 (ICD-10) system] through five automated feature selection schemes, the proposed expert-driven approach, five subset sizes of features, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures, approaching 85% to 90% for most metrics, using a feature subset size of 30. The proposed system also showed an approximately 14% to 16% improvement in overall accuracy compared with the existing techniques and four baselines. Conclusion The proposed system is feasible and practical to use for automatic classification of ICD-10-related causes of death from autopsy reports. The proposed system assists pathologists to accurately and rapidly determine the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports. PMID:28166263
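A minimal scikit-learn sketch of the pipeline's core idea: unigram features restricted to an expert-chosen subset, feeding a random forest. The corpus, labels, and term list below are invented placeholders, and the fixed vocabulary merely approximates the paper's expert-driven selection of a 30-feature subset:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline

    # Tiny illustrative corpus; real inputs would be autopsy report
    # text and ICD-10 cause-of-death labels.
    reports = [
        "blunt trauma to head after road traffic collision",
        "drowning with water in lungs found in river",
        "thermal burns over torso from house fire",
        "skull fracture road accident motorcycle",
    ]
    labels = ["transport", "drowning", "fire", "transport"]

    # Expert-driven feature selection, approximated here as a fixed
    # vocabulary of clinician-suggested unigrams.
    expert_terms = ["trauma", "collision", "drowning", "water",
                    "burns", "fire", "fracture", "road"]

    model = make_pipeline(
        CountVectorizer(vocabulary=expert_terms),   # only expert features
        RandomForestClassifier(n_estimators=100, random_state=0),
    )
    model.fit(reports, labels)
    print(model.predict(["pedestrian struck in road collision"]))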
Niiranen, Teemu J; Asayama, Kei; Thijs, Lutgarde; Johansson, Jouni K; Ohkubo, Takayoshi; Kikuya, Masahiro; Boggia, José; Hozawa, Atsushi; Sandoya, Edgardo; Stergiou, George S; Tsuji, Ichiro; Jula, Antti M; Imai, Yutaka; Staessen, Jan A
2013-01-01
The lack of outcome-driven operational thresholds limits the clinical application of home blood pressure (BP) measurement. Our objective was to determine an outcome-driven reference frame for home BP measurement. We measured home and clinic BP in 6470 participants (mean age, 59.3 years; 56.9% women; 22.4% on antihypertensive treatment) recruited in Ohasama, Japan (n=2520); Montevideo, Uruguay (n=399); Tsurugaya, Japan (n=811); Didima, Greece (n=665); and nationwide in Finland (n=2075). In multivariable-adjusted analyses of individual subject data, we determined home BP thresholds, which yielded 10-year cardiovascular risks similar to those associated with stages 1 (120/80 mm Hg) and 2 (130/85 mm Hg) prehypertension, and stages 1 (140/90 mm Hg) and 2 (160/100 mm Hg) hypertension on clinic measurement. During 8.3 years of follow-up (median), 716 cardiovascular end points, 294 cardiovascular deaths, 393 strokes, and 336 cardiac events occurred in the whole cohort; in untreated participants these numbers were 414, 158, 225, and 194, respectively. In the whole cohort, outcome-driven systolic/diastolic thresholds for the home BP corresponding with stages 1 and 2 prehypertension and stages 1 and 2 hypertension were 121.4/77.7, 127.4/79.9, 133.4/82.2, and 145.4/86.8 mm Hg; in 5018 untreated participants, these thresholds were 118.5/76.9, 125.2/79.7, 131.9/82.4, and 145.3/87.9 mm Hg, respectively. Rounded thresholds for stages 1 and 2 prehypertension and stages 1 and 2 hypertension amounted to 120/75, 125/80, 130/85, and 145/90 mm Hg, respectively. Population-based outcome-driven thresholds for home BP are slightly lower than those currently proposed in hypertension guidelines. Our current findings could inform guidelines and help clinicians in diagnosing and managing patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.; Perry, S.
This report describes Northern Indiana Public Service Co.'s efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects.
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
A Tactical Framework for Cyberspace Situational Awareness
2010-06-01
Command & Control: 1. VOIP Telephone; 2. Internet Chat; 3. Web App (TBMCS); 4. Email; 5. Web App (PEX); 6. Database (CAMS); 7. Database (ARMS); 8. Database (LogMod); 9. Resource (WWW); 10. Application (PFPS). Mission Planning: 1. Application (PFPS); 2. Email; 3. Web App (TBMCS); 4. Internet Chat; ... 1. Web App (PEX); 2. Database (ARMS); 3. Web App (TBMCS); 4. Email; 5. Database (CAMS); 6. VOIP Telephone; 7. Application (PFPS); 8. Internet Chat; 9...
Perkins, Matthew B; Jensen, Peter S; Jaccard, James; Gollwitzer, Peter; Oettingen, Gabriele; Pappadopulos, Elizabeth; Hoagwood, Kimberly E
2007-03-01
Despite major recent research advances, large gaps exist between accepted mental health knowledge and clinicians' real-world practices. Although hundreds of studies have successfully utilized basic behavioral science theories to understand, predict, and change patients' health behaviors, the extent to which these theories, most notably the theory of reasoned action (TRA) and its extension, the theory of planned behavior (TPB), have been applied to understand and change clinician behavior is unclear. This article reviews the application of theory-driven approaches to understanding and changing clinician behaviors. MEDLINE and PsycINFO databases were searched, along with bibliographies, textbooks on health behavior or public health, and references from experts, to find article titles that describe theory-driven approaches (TRA or TPB) to understanding and modifying health professionals' behavior. A total of 19 articles that detailed 20 studies described the use of TRA or TPB and clinicians' behavior. Eight articles describe the use of TRA or TPB with physicians, four relate to nurses, three relate to pharmacists, and two relate to health workers. Only two articles applied TRA or TPB to mental health clinicians. The body of work shows that different constructs of TRA or TPB predict intentions and behavior among different groups of clinicians and for different behaviors and guidelines. The number of studies on this topic is extremely limited, but they offer a rationale and a direction for future research as well as a theoretical basis for increasing the specificity and efficiency of clinician-targeted interventions.
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large volumes of GML data effectively arises in applications, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are commonly distinguished, namely native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
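For readers unfamiliar with GML, here is a minimal Python example of the kind of markup a GML database must store and index; the fragment is hand-written for illustration, but the namespace is the standard OGC GML namespace:

    import xml.etree.ElementTree as ET

    # A minimal GML fragment with a point geometry.
    gml = """
    <gml:Point xmlns:gml="http://www.opengis.net/gml" srsName="EPSG:4326">
      <gml:pos>39.90 116.40</gml:pos>
    </gml:Point>"""

    NS = {"gml": "http://www.opengis.net/gml"}
    point = ET.fromstring(gml)
    lat, lon = map(float, point.find("gml:pos", NS).text.split())

    # A GML-aware store must index such coordinates for spatial
    # queries; here we merely recover them from the markup.
    print(point.get("srsName"), lat, lon)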
Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart
2010-03-01
MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.
Applications of Database Machines in Library Systems.
ERIC Educational Resources Information Center
Salmon, Stephen R.
1984-01-01
Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and flexibility in choosing operating and database management systems for different functions without loss of access to common database are noted. (EJS)
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-01-10
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
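A sketch of the claimed technique using mpi4py, where a barrier serves as the blocking operation: each rank lowers power as soon as it begins the barrier, and the barrier's completion (meaning all ranks have begun it) is the cue to restore power. The set_node_power function is a hypothetical stub; a real implementation would use DVFS or similar hardware controls:

    from mpi4py import MPI

    def set_node_power(level):
        # Hypothetical stub: a real system would throttle CPU frequency
        # or gate idle hardware (e.g., via DVFS) here.
        print(f"rank {MPI.COMM_WORLD.Get_rank()}: power -> {level}")

    comm = MPI.COMM_WORLD

    # ... application work, finishing at different times on each rank ...

    # Each node drops power as soon as *it* begins the blocking operation.
    set_node_power("low")
    comm.Barrier()   # blocking collective; ranks arrive asynchronously
    # The barrier completes only once every node has begun it, which is
    # the condition the patents use for restoring power.
    set_node_power("full")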
ERIC Educational Resources Information Center
Forrest, Melanie D.
This curriculum guide is intended for Missouri teachers teaching a course in database applications for high school students enrolled in marketing and cooperative education. The curriculum presented includes learning activities in which students are taught to analyze database tables containing the types of data typically encountered by employees…
NASA Astrophysics Data System (ADS)
Svetský, Štefan; Moravčík, Oliver; Rusková, Dagmar; Balog, Karol; Sakál, Peter; Tanuška, Pavol
2011-01-01
The article describes a five-year period of Technology Enhanced Learning (TEL) implementation at the Faculty of Materials Science and Technology (MTF) in Trnava. It is part of the challenges put forward by the 7th Framework Programme (ICT research in FP7) focused on "how information and communication technologies can be used to support learning and teaching". The empirical research during the years 2006-2008 focused on technology-driven support of teaching, i.e. the development of a VLE (Virtual Learning Environment) and of database applications, instruments developed simultaneously with the information support of the project and tested and applied directly in the teaching of bachelor students. During this period, the MTF also participated in the administration of the FP7 KEPLER project proposal in an international consortium of 20 participants. In the following period of 2009-2010, the concept of automating educational activities began to be developed systematically. Within this concept, the idea originated to develop a universal multi-purpose system, BIKE, based on the batch-processing knowledge paradigm. This allowed the focus to shift to the educational approach, i.e. education-driven TEL, and to finish the programming of the Internet application, a network for feedback (communication between teachers and students). Thanks to this specialization, the results of applications in teaching at the MTF could gradually be presented at international conferences focused on computer-enhanced engineering education. TEL was implemented at a detached workplace and four institutes, involving more than 600 bachelor students and teachers of technical subjects. Four study programmes were supported, including technical English. Altogether, the results have been presented in 16 articles in five countries, including at the EU level (IGIP-SEFI).
Applications of GIS and database technologies to manage a Karst Feature Database
Gao, Y.; Tipping, R.G.; Alexander, E.C.
2006-01-01
This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
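The transaction and logging machinery mentioned above can be illustrated with a small Python/sqlite3 sketch; the table and column names are invented for illustration, and the actual KFD is far richer:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE karst_feature (
        id INTEGER PRIMARY KEY, name TEXT, county TEXT);
    CREATE TABLE change_log (          -- simple data log for recovery/audit
        ts TEXT DEFAULT CURRENT_TIMESTAMP, action TEXT, feature TEXT);
    CREATE TRIGGER log_insert AFTER INSERT ON karst_feature
    BEGIN
        INSERT INTO change_log(action, feature) VALUES ('insert', NEW.name);
    END;
    """)

    # Transactions keep the database consistent: either every statement
    # in the batch commits, or none does.
    try:
        with con:   # opens a transaction; commits or rolls back on exit
            con.execute("INSERT INTO karst_feature(name, county) "
                        "VALUES ('Sinkhole A', 'Fillmore')")
            con.execute("INSERT INTO karst_feature(name, county) "
                        "VALUES ('Spring B', 'Winona')")
    except sqlite3.Error:
        pass  # the rollback has already undone any partial changes

    print(con.execute("SELECT action, feature FROM change_log").fetchall())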
Laser pulse shape design for laser-indirect-driven quasi-isentropic compression experiments
NASA Astrophysics Data System (ADS)
Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Zhao, Xueqing; Ding, Yongkun
2018-02-01
Laser pulse shape design is a key task in the design of laser-indirect-driven experiments, especially for long-pulse laser-driven quasi-isentropic compression experiments. A method for designing such a laser pulse shape is given here. Application experiments were performed, and the results of a typical shot are presented. Finally, the details of the method's application are discussed, such as the choice of equation parameters, the radiation ablation pressure expression, and the approximations in the method. The application shows that the method can provide reliable descriptions of the energy distribution in a hohlraum target; thus, it can be used in the design of long-pulse laser-driven quasi-isentropic compression experiments and even other indirect-laser-driven experiments.
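The abstract cites a radiation ablation pressure expression without reproducing it. For orientation, a commonly quoted scaling for x-ray-driven ablation in a hohlraum (e.g., in Lindl's review of indirect drive), which is not necessarily the exact expression used by these authors, is

    P_{\mathrm{abl}}\,[\mathrm{Mbar}] \approx 0.3\,\left(\frac{T_r}{100\ \mathrm{eV}}\right)^{3.5}

so a radiation temperature of 200 eV corresponds to roughly 0.3 x 2^3.5, or about 3.4 Mbar of drive pressure; shaping the laser pulse shapes the radiation temperature history and thereby the pressure history on the sample.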
CHIP Demonstrator: Semantics-Driven Recommendations and Museum Tour Generation
NASA Astrophysics Data System (ADS)
Aroyo, Lora; Stash, Natalia; Wang, Yiwen; Gorgels, Peter; Rutledge, Lloyd
The main objective of the CHIP project is to demonstrate how Semantic Web technologies can be deployed to provide personalized access to digital museum collections. We illustrate our approach with the digital database ARIA of the Rijksmuseum Amsterdam. For the semantic enrichment of the Rijksmuseum ARIA database we collaborated with the CATCH STITCH project to produce mappings to Iconclass, and with the MultimediaN E-culture project to produce the RDF/OWL of the ARIA and Adlib databases. The main focus of CHIP is on exploring the potential of applying adaptation techniques to provide personalized experience for the museum visitors both on the Web site and in the museum.
The Gene Set Builder: collation, curation, and distribution of sets of genes
Yusuf, Dimas; Lim, Jonathan S; Wasserman, Wyeth W
2005-01-01
Background In bioinformatics and genomics, there are many applications designed to investigate the common properties for a set of genes. Often, these multi-gene analysis tools attempt to reveal sequential, functional, and expressional ties. However, while tremendous effort has been invested in developing tools that can analyze a set of genes, minimal effort has been invested in developing tools that can help researchers compile, store, and annotate gene sets in the first place. As a result, the process of making or accessing a set often involves tedious and time-consuming steps such as finding identifiers for each individual gene. These steps are often repeated extensively to shift from one identifier type to another, or to recreate a published set. In this paper, we present a simple online tool which, with the help of the gene catalogs Ensembl and GeneLynx, can help researchers build and annotate sets of genes quickly and easily. Description The Gene Set Builder is a database-driven, web-based tool designed to help researchers compile, store, export, and share sets of genes. This application supports the 17 eukaryotic genomes found in version 32 of the Ensembl database, which includes species from yeast to human. User-created information such as sets and customized annotations are stored to facilitate easy access. Gene sets stored in the system can be "exported" in a variety of output formats, as lists of identifiers, in tables, or as sequences. In addition, gene sets can be "shared" with specific users to facilitate collaborations or fully released to provide access to published results. The application also features a Perl API (Application Programming Interface) for direct connectivity to custom analysis tools. A downloadable Quick Reference guide and an online tutorial are available to help new users learn its functionalities. Conclusion The Gene Set Builder is an Ensembl-facilitated online tool designed to help researchers compile and manage sets of genes in a user-friendly environment. The application can be accessed online. PMID:16371163
Racicki, Stephanie; Gerwin, Sarah; Diclaudio, Stacy; Reinmann, Samuel; Donaldson, Megan
2013-05-01
The purpose of this systematic review was to assess the effectiveness of conservative physical therapy management of cervicogenic headache (CGH). CGH affects 22-25% of the adult population, with females four times more affected than males. CGHs are thought to arise from musculoskeletal impairments in the neck, with symptoms most commonly consisting of suboccipital neck pain, dizziness, and lightheadedness. Currently, both invasive and non-invasive techniques are available to address these symptoms; however, the efficacy of non-invasive treatment techniques has yet to be established. Computerized searches of CINAHL, ProQuest, PubMed, MEDLINE, and SportDiscus were performed to obtain a qualitative analysis of the literature. Inclusion criteria were: randomized controlled trial design, population diagnosed with CGH using the International Headache Society classification, at least one baseline measurement and one outcome measure, and assessment of a conservative technique. The Physiotherapy Evidence Database (PEDro) scale was utilized for quality assessment. One computerized database search and two hand searches yielded six articles. All six included randomized controlled trials were considered to be of 'good quality' on the PEDro scale. The interventions utilized were: therapist-driven cervical manipulation and mobilization, self-applied cervical mobilization, cervico-scapular strengthening, and therapist-driven cervical and thoracic manipulation. With the exception of one study, all reported reduction in pain and disability, as well as improvement in function. Calculated effect sizes allowed comparison of intervention groups between studies. A combination of therapist-driven cervical manipulation and mobilization with cervico-scapular strengthening was most effective for decreasing pain outcomes in those with CGH.
Pathways of proton transfer in the light-driven pump bacteriorhodopsin
NASA Technical Reports Server (NTRS)
Lanyi, J. K.
1993-01-01
The mechanism of proton transport in the light-driven pump bacteriorhodopsin is beginning to be understood. Light causes the all-trans to 13-cis isomerization of the retinal chromophore. This sets off a sequential and directed series of transient decreases in the pKa's of a) the retinal Schiff base, b) an extracellular proton release complex which includes asp-85, and c) a cytoplasmic proton uptake complex which includes asp-96. The timing of these pKa changes during the photoreaction cycle causes sequential proton transfers which result in the net movement of a proton across the protein, from the cytoplasmic to the extracellular surface.
Safety in numbers: extinction arising from predator-driven Allee effects.
Gregory, Stephen D; Courchamp, Franck
2010-05-01
Experimental evidence of extinction via an Allee effect (AE) is a priority as more species become threatened by human activity. Kramer & Drake (2010) begin the International Year of Biodiversity with the important, but double-edged, demonstration that predators can induce an AE in their prey. The good news is that their experiments help bridge the knowledge gap between theoretical and empirical AEs. The bad news is that this predator-driven AE precipitates the prey extinction via a demographic AE. Although their findings will be sensitive to departures from their experimental protocol, this link between predation and population extinction could have important consequences for many prey species.
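For readers outside ecology, a strong demographic Allee effect is often written (in one standard textbook form, not the specific model of Kramer & Drake) as

    \frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right) \left(\frac{N}{A} - 1\right)

where per-capita growth is negative once the population N falls below the threshold A. Predator-driven mortality effectively raises A, so a prey population that once sat safely above the threshold can be pushed below it and decline deterministically toward extinction.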
Experimental and early investigational drugs for androgenetic alopecia.
Guo, Hongwei; Gao, Wendi Victor; Endo, Hiromi; McElwee, Kevin John
2017-08-01
Treatments for androgenetic alopecia constitute a multi-billion-dollar industry; however, currently available therapeutic options have variable efficacy. Consequently, in recent years small biotechnology companies and academic research laboratories have begun to investigate new or improved treatment methods. Research and development approaches include improved formulations and modes of application for current drugs, new drug development, development of cell-based treatments, and medical devices for modulation of hair growth. Areas covered: Here we review the essential pathways of androgenetic alopecia pathogenesis and collate the current and emerging therapeutic strategies, using journal publication databases and clinical trials databases to gather information about active research on new treatments. Expert opinion: We propose that topically applied medications, or intradermally injected or implanted materials, are preferable treatment modalities, minimizing side-effect risks as compared to systemically applied treatments. Evidence in support of new treatments is limited. However, we suggest therapeutics which reverse the androgen-driven inhibition of hair follicle signaling pathways, such as prostaglandin analogs and antagonists, platelet-rich plasma (PRP), promotion of skin angiogenesis and perfusion, introduction of progenitor cells for hair regeneration, and more effective ways of transplanting hair, are the likely near-future direction of androgenetic alopecia treatment development.
Arnold, Katrin; Scheibe, Madlen; Müller, Olaf; Schmitt, Jochen
2016-11-01
The limited number of telemedicine applications being transferred to standard medical care in Germany may to some extent be explained by deficits in current evaluation practice. Effectiveness and cost effectiveness can only be demonstrated to decision makers and potential users with methodologically sound and fully published evaluations. There is a lack of well-founded and mandatory standards for adequate, comparable evaluations of telemedicine applications. As part of the project CCS Telehealth Eastern Saxony (CCS THOS), a systematic review on evaluation concepts for telemedicine applications (search period until September 2014; databases Medline, Embase, HTA Database, DARE, NHS EED) as well as an additional selective literature search were conducted. Suggestions for evaluation fundamentals were derived from the results. These suggestions were subjected to a formal consensus process (nominal group process) with relevant stakeholder groups (healthcare payers, healthcare providers, health policy representatives, researchers). Nineteen papers were included in the systematic review. In accordance with the predefined inclusion criteria, each presented an evaluation concept for telemedicine applications that was based upon a systematic review and/or a consensus process. Via the formal consensus process, the suggestions for evaluation principles derived from the review and the selective literature search (23 papers) resulted in ten agreed evaluation principles. Eight of them were agreed unanimously; two were arrived at with one abstention each. The principles encompass criteria for the planning, conduct and reporting of telemedicine evaluations. Adherence to them is obligatory for users of the telemedical infrastructure provided by CCS THOS. Furthermore, from the beginning the intention was for these principles to be taken up by other projects and initiatives. The agreed evaluation principles for telemedicine applications are the first in Germany to be based both upon evidence and consensus. Owing to the methodology of their development, they have strong scientific and health policy legitimacy. Therefore, and because of their general applicability, adherence to these principles is recommended beyond the context of the telemedicine platform developed within CCS THOS, namely throughout the German telemedicine scene. Copyright © 2016. Published by Elsevier GmbH.
Benefits of an Object-oriented Database Representation for Controlled Medical Terminologies
Gu, Huanying; Halper, Michael; Geller, James; Perl, Yehoshua
1999-01-01
Objective: Controlled medical terminologies (CMTs) have been recognized as important tools in a variety of medical informatics applications, ranging from patient-record systems to decision-support systems. Controlled medical terminologies are typically organized in semantic network structures consisting of tens to hundreds of thousands of concepts. This overwhelming size and complexity can be a serious barrier to their maintenance and widespread utilization. The authors propose the use of object-oriented databases to address the problems posed by the extensive scope and high complexity of most CMTs for maintenance personnel and general users alike. Design: The authors present a methodology that allows an existing CMT, modeled as a semantic network, to be represented as an equivalent object-oriented database. Such a representation is called an object-oriented health care terminology repository (OOHTR). Results: The major benefit of an OOHTR is its schema, which provides an important layer of structural abstraction. Using the high-level view of a CMT afforded by the schema, one can gain insight into the CMT's overarching organization and begin to better comprehend it. The authors' methodology is applied to the Medical Entities Dictionary (MED), a large CMT developed at Columbia-Presbyterian Medical Center. Examples of how the OOHTR schema facilitated updating, correcting, and improving the design of the MED are presented. Conclusion: The OOHTR schema can serve as an important abstraction mechanism for enhancing comprehension of a large CMT, and thus promotes its usability. PMID:10428002
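A toy Python sketch of the central idea that the object-oriented schema itself documents the terminology's structure; the class and attribute names below are invented for illustration and do not reflect the actual MED schema:

    # The OO schema makes explicit which kinds of concepts exist and
    # which properties each kind carries (hypothetical names).
    class Concept:
        def __init__(self, name):
            self.name = name
            self.parents = []        # IS-A links of the semantic network

    class Medication(Concept):       # one "class" of the repository schema
        def __init__(self, name, dose_form):
            super().__init__(name)
            self.dose_form = dose_form

    class LabTest(Concept):
        def __init__(self, name, specimen):
            super().__init__(name)
            self.specimen = specimen

    aspirin = Medication("aspirin 325 mg tablet", dose_form="tablet")
    cbc = LabTest("complete blood count", specimen="whole blood")
    aspirin.parents.append(Concept("analgesic"))

    # Reading the class definitions (the "schema") gives a high-level
    # view of the terminology without scanning thousands of instances.
    print(type(aspirin).__name__, aspirin.dose_form)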
Development of a Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.;
2015-01-01
The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between gridded and station-based calculations tended to be weakest at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values, exceeding a DC of 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibration of FWI-based fire prediction models.
Sherwin, Trevor; Gilhotra, Amardeep K
2006-02-01
Literature databases are an ever-expanding resource available to the field of medical sciences. Understanding how to use such databases efficiently is critical for those involved in research. However, for the uninitiated, getting started is a major hurdle to overcome and for the occasional user, the finer points of database searching remain an unacquired skill. In the fifth and final article in this series aimed at those embarking on ophthalmology and vision science research, we look at how the beginning researcher can start to use literature databases and, by using a stepwise approach, how they can optimize their use. This instructional paper gives a hypothetical example of a researcher writing a review article and how he or she acquires the necessary scientific literature for the article. A prototype search of the Medline database is used to illustrate how even a novice might swiftly acquire the skills required for a medium-level search. It provides examples and key tips that can increase the proficiency of the occasional user. Pitfalls of database searching are discussed, as are the limitations of which the user should be aware.
Attending and Responding to Student Thinking in Science
ERIC Educational Resources Information Center
Levin, Daniel M.; Grant, Terrence; Hammer, David
2012-01-01
We present a class discussion that took place in the second author's high school biology class. Working from video data that we transcribed, studied, and analyzed closely, we recount how the question "Is air matter?" posed at the beginning of a unit on photosynthesis led to student-driven inquiry and learning. This case study illustrates what we…
Compliance, Commitment, and Capacity: Examining Districts' Responses to No Child Left Behind
ERIC Educational Resources Information Center
Terry, Kellie
2010-01-01
Evolving purposes for the United States educational system have driven legislative policy over the past 40 years, beginning with the Elementary and Secondary Education Act of 1965, reauthorized as the No Child Left Behind (NCLB) Act in 2002. However, researchers have demonstrated US policy intents are often unrealized in educational practice,…
Social and Emotional Learning and Equity in School Discipline
ERIC Educational Resources Information Center
Gregory, Anne; Fergus, Edward
2017-01-01
Beginning as early as preschool, race and gender are intertwined with the way US schools mete out discipline. In particular, black students and male students are much more likely than others to be suspended or expelled--punishments that we know can hold them back academically. These disparities, and the damage they can cause, have driven recent…
Driven by Affect to Explore Asteroids, the Moon, and Science Education
ERIC Educational Resources Information Center
Dingatantrige Perera, Jude Viranga
2017-01-01
Affect is a domain of psychology that includes attitudes, emotions, interests, and values. My own affect influenced the choice of topics for my dissertation. After examining asteroid interiors and the Moon's thermal evolution, I discuss the role of affect in online science education. I begin with asteroids, which are collections of smaller objects…
International Students with Dependent Children: The Reproduction of Gender Norms
ERIC Educational Resources Information Center
Brooks, Rachel
2015-01-01
Extant research on family migration for education has focused almost exclusively on the education of children. We thus know very little about family migration when it is driven by the educational projects of parents. To begin to redress this gap, this paper explores the experiences of families who have moved to the United Kingdom primarily to…
Establishing intensivist-driven ultrasound at the PICU bedside--it's about time*.
Su, Erik; Pustavoitau, Aliaksei; Hirshberg, Elliotte L; Nishisaki, Akira; Conlon, Thomas; Kantor, David B; Weber, Mark D; Godshall, Aaron J; Burzynski, Jeffrey H; Thompson, Ann E
2014-09-01
To discuss pediatric intensivist-driven ultrasound and the exigent need for research and practice definitions pertaining to its implementation within pediatric critical care, specifically addressing issues in ultrasound-guided vascular access and intensivist-driven echocardiography. Intensivist-driven ultrasound improves procedure safety and reduces time to diagnosis in clinical ultrasound applications, as demonstrated primarily in adult patients. Translating these applications to the PICU requires thoughtful integration of the technology into practice and would best be informed by dedicated ultrasound research in critically ill children.
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images in order to generate high-level science and calibration products. The images are grouped into ensembles according to well-documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses precompiled database stored procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble-creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
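A small Python/sqlite3 sketch of the table-driven idea: grouping rules are rows in a table, and the software builds its grouping query from them. Column names such as channel and aorkey are invented stand-ins for Spitzer image metadata, and a production version would validate the stored column list before splicing it into SQL:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE image (
        image_id INTEGER PRIMARY KEY,
        channel  TEXT,       -- detector channel
        aorkey   INTEGER,    -- observation request the image belongs to
        exptime  REAL);
    -- Ensemble-creation rules live in a table, not in code: each rule
    -- names the image columns that must match for images to be grouped.
    CREATE TABLE ensemble_rule (
        rule_id  INTEGER PRIMARY KEY,
        name     TEXT,
        group_by TEXT);      -- comma-separated grouping columns
    """)
    con.executemany("INSERT INTO image(channel, aorkey, exptime) VALUES (?,?,?)",
                    [("IRAC1", 100, 30.0), ("IRAC1", 100, 30.0),
                     ("IRAC2", 100, 30.0), ("IRAC1", 101, 12.0)])
    con.execute("INSERT INTO ensemble_rule(name, group_by) "
                "VALUES ('per-channel-per-AOR', 'channel,aorkey')")

    # Applying a rule is just a GROUP BY built from the stored columns,
    # so rules can be added or changed without touching the software.
    (group_by,) = con.execute(
        "SELECT group_by FROM ensemble_rule WHERE rule_id = 1").fetchone()
    query = (f"SELECT {group_by}, COUNT(*) AS members "
             f"FROM image GROUP BY {group_by}")
    for row in con.execute(query):
        print(row)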
The Steward Observatory asteroid relational database
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1991-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids, or output files suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. SOARD has already provided data to fulfill requests by members of the astronomical community, and it continues to grow as data are added to the database and new features are added to the program.
Ontology to relational database transformation for web application development and maintenance
NASA Astrophysics Data System (ADS)
Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful
2018-03-01
In a KMS (Knowledge Management System), an ontology serves as the knowledge representation while a database records the facts. In most applications, data are managed in a database system, updated through the application, and then transformed into knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database; in this research, the ontology is used to generate the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment, and a case study was conducted to prove the concept.
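A minimal sketch of the class-to-table step of such a transformation, assuming the rdflib library; the ontology filename is a placeholder, and a full framework would also map datatype properties to columns, object properties to foreign keys, and wire up the update triggers described above:

    from rdflib import Graph
    from rdflib.namespace import OWL, RDF

    # "domain.owl" is a placeholder filename for the domain ontology.
    g = Graph()
    g.parse("domain.owl", format="xml")

    def local_name(uri):
        """Crude extraction of the term name from a URI."""
        s = str(uri)
        return s.rsplit("#", 1)[-1].rsplit("/", 1)[-1]

    # One illustrative mapping rule: every OWL class becomes a table
    # with a surrogate key.
    for cls in g.subjects(RDF.type, OWL.Class):
        print(f"CREATE TABLE {local_name(cls).lower()} "
              f"(id INTEGER PRIMARY KEY);")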
Effective Use of Java Data Objects in Developing Database Applications; Advantages and Disadvantages
2004-06-01
...DATA OBJECTS IN DEVELOPING DATABASE APPLICATIONS: ADVANTAGES AND DISADVANTAGES. Paschalis Zilidis, June 2004. Thesis Advisor: Thomas... ...database for the backend datastore. The major disadvantage of this approach is the well-known "impedance mismatch," in which some form of mapping is
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
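The common paradigm the review refers to (digest the protein database in silico, compute theoretical peptide masses, and match them against observed precursor masses) can be caricatured in a few lines of Python. Real engines additionally score MS/MS fragment matches, handle modifications and missed cleavages, and estimate false discovery rates; the residue table below covers only a few amino acids:

    # Monoisotopic residue masses (Da) for a few amino acids.
    RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
               "P": 97.05276, "V": 99.06841, "K": 128.09496,
               "R": 156.10111, "L": 113.08406}
    WATER = 18.01056

    def tryptic_peptides(protein):
        """In-silico digestion: cleave after K or R (no missed cleavages)."""
        peptide = ""
        for aa in protein:
            peptide += aa
            if aa in "KR":
                yield peptide
                peptide = ""
        if peptide:
            yield peptide

    def search(precursor_mass, proteins, tol=0.02):
        """Report peptides whose theoretical mass matches the precursor."""
        for name, seq in proteins.items():
            for pep in tryptic_peptides(seq):
                mass = sum(RESIDUE[aa] for aa in pep) + WATER
                if abs(mass - precursor_mass) <= tol:
                    yield name, pep, round(mass, 5)

    db = {"protA": "GASPVKLLR", "protB": "AVKGSPR"}
    for hit in search(precursor_mass=557.32, proteins=db, tol=0.05):
        print(hit)   # matches the peptide GASPVK from protA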
Development and validation of a turbulent-mix model for variable-density and compressible flows.
Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J
2010-10-01
The modeling of buoyancy-driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy-driven flows, with a goal of placing the modeling of buoyancy-driven turbulent flows at the same level of development as that of single-phase shear flows.
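For Rayleigh-Taylor mixing, the self-similar solutions used in such calibrations typically reduce to the classic quadratic growth of the bubble-front mix width

    h_b(t) = \alpha_b\, A\, g\, t^2, \qquad A = \frac{\rho_2 - \rho_1}{\rho_2 + \rho_1}

where A is the Atwood number and experimentally quoted bubble growth constants \alpha_b cluster around 0.05-0.07; model constants are chosen so that the model's self-similar solution reproduces growth of this form. (This is the standard RT calibration target, stated here for orientation rather than taken from the paper itself.)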
NASA Technical Reports Server (NTRS)
1990-01-01
In his July 1989 space policy speech, President Bush proposed a long range continuing commitment to space exploration and development. Included in his goals were the establishment of permanent lunar and Mars habitats and the development of extended duration space transportation. In both cases, a major issue is the availability of qualified sensor technologies for use in real-time monitoring and control of integrated physical/chemical/biological (p/c/b) Environmental Control and Life Support Systems (ECLSS). The purpose of this study is to determine the most promising instrumentation technologies for future ECLSS applications. The study approach is as follows: 1. Precursor ECLSS Subsystem Technology Trade Study - A database of existing and advanced Atmosphere Revitalization (AR) and Water Recovery and Management (WRM) ECLSS subsystem technologies was created. A trade study was performed to recommend AR and WRM subsystem technologies for future lunar and Mars mission scenarios. The purpose of this trade study was to begin defining future ECLSS instrumentation requirements as a precursor to determining the instrumentation technologies that will be applicable to future ECLS systems. 2. Instrumentation Survey - An instrumentation database of Chemical, Microbial, Conductivity, Humidity, Flowrate, Pressure, and Temperature sensors was created. Each page of the sensor database report contains information for one type of sensor, including a description of the operating principles, specifications, and the reference(s) from which the information was obtained. This section includes a cursory look at the history of instrumentation on U.S. spacecraft. 3. Results and Recommendations - Instrumentation technologies were recommended for further research and optimization based on a consideration of both of the above sections. A sensor or monitor technology was recommended based on its applicability to future ECLS systems, as defined by the ECLSS Trade Study (1), and on whether its characteristics were considered favorable relative to similar instrumentation technologies (competitors), as determined from the Instrumentation Survey (2). The instrumentation technologies recommended by this study show considerable potential for development and promise significant returns if research efforts are invested.
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
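The core Ultra-Structure idea, that behavior as well as data lives in rule tables interpreted by a small stable engine, can be sketched as follows. The table layout and rules are invented for illustration; the actual system's ruleforms are considerably more elaborate.

```python
# Sketch of a rule-table interpreter in the spirit of Ultra-Structure:
# behavior is data. Changing the RULES table changes what the system
# does, with no change to this code. Rules shown are illustrative.
RULES = [
    # (condition-field, condition-value, action)
    ("feature_type", "CDS",  "map_to_protein"),
    ("feature_type", "tRNA", "skip"),
]

ACTIONS = {
    "map_to_protein": lambda rec: f"mapping {rec['id']} onto protein set",
    "skip":           lambda rec: f"skipping {rec['id']}",
}

def apply_rules(record):
    """Fire the first rule whose condition matches the record."""
    for field, value, action in RULES:
        if record.get(field) == value:
            return ACTIONS[action](record)
    return f"no rule for {record['id']}"

print(apply_rules({"id": "gene42", "feature_type": "CDS"}))
print(apply_rules({"id": "rna7", "feature_type": "tRNA"}))
```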
Survey of Machine Learning Methods for Database Security
NASA Astrophysics Data System (ADS)
Kamra, Ashish; Bertino, Elisa
Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
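As a toy illustration of the anomaly-detection idea surveyed in the chapter, one can profile which tables each database role normally touches and flag queries that deviate. The profiling scheme below is a deliberate simplification, far simpler than the research prototypes discussed.

```python
# Toy insider-threat detector: profile which tables each role queries
# during training, then flag queries touching unfamiliar tables.
# The scheme is a deliberately simplified illustration.
import re
from collections import defaultdict

profile = defaultdict(set)          # role -> tables seen during training

def tables_in(query):
    return set(re.findall(r"(?:from|join)\s+(\w+)", query, re.IGNORECASE))

def train(role, query):
    profile[role] |= tables_in(query)

def is_anomalous(role, query):
    return not tables_in(query) <= profile[role]

train("clerk", "SELECT * FROM orders JOIN customers ON orders.cid = customers.id")
print(is_anomalous("clerk", "SELECT * FROM orders"))    # False: within profile
print(is_anomalous("clerk", "SELECT * FROM payroll"))   # True: unusual table
```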
History of remote operations and robotics in nuclear facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herndon, J.N.
1992-01-01
The field of remote technology is continuing to evolve to support man's efforts to perform tasks in hostile environments. Remote technology has roots which reach into the early history of man. Fireplace pokers, blacksmith's tongs, and periscopes are examples of the beginnings of remote technology. The technology which we recognize today has evolved over the last 45-plus years to support human operations in hostile environments such as nuclear fission and fusion, space, underwater, hazardous chemical, and hazardous manufacturing. The four major categories of approach to remote technology have been (1) protective clothing and equipment for direct human entry, (2) extended reach tools using distance for safety, (3) telemanipulators with barriers for safety, and (4) teleoperators incorporating mobility with distance and/or barriers for safety. The government and commercial nuclear industry has driven the development of the majority of the actual teleoperator hardware available today. This hardware has been developed due to the unsatisfactory performance of the protective-clothing approach in many hostile applications. Systems which have been developed include crane/impact wrench systems, unilateral power manipulators, mechanical master/slaves, and servomanipulators. Work for space applications has been primarily research oriented with few successful space applications, although the shuttle's remote manipulator system has been successful. In the last decade, underwater applications have moved forward significantly, with the offshore oil industry and military applications providing the primary impetus. This document consists of viewgraphs and subtitled figures.
SITE TECHNOLOGY CAPSULE: GIS/KEY ENVIRONMENTAL DATA MANAGEMENT SYSTEM
GIS/Key™ is a comprehensive environmental database management system that integrates site data and graphics, enabling the user to create geologic cross-sections; boring logs; potentiometric, isopleth, and structure maps; summary tables; and hydrographs. GIS/Key™ is menu-driven an...
The Supernovae Analysis Application (SNAP)
Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...
2017-09-06
The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.
Evaluating Land-Atmosphere Interactions with the North American Soil Moisture Database
NASA Astrophysics Data System (ADS)
Giles, S. M.; Quiring, S. M.; Ford, T.; Chavez, N.; Galvan, J.
2015-12-01
The North American Soil Moisture Database (NASMD) is a high-quality observational soil moisture database that was developed to study land-atmosphere interactions. It includes over 1,800 monitoring stations in the United States, Canada and Mexico. Soil moisture data are collected from multiple sources, quality controlled and integrated into an online database (soilmoisture.tamu.edu). The period of record varies substantially and only a few of these stations have an observation record extending back into the 1990s. Daily soil moisture observations have been quality controlled using the North American Soil Moisture Database QAQC algorithm. The database is designed to facilitate observationally-driven investigations of land-atmosphere interactions, validation of the accuracy of soil moisture simulations in global land surface models, satellite calibration/validation for SMOS and SMAP, and an improved understanding of how soil moisture influences climate on seasonal to interannual timescales. This paper provides some examples of how the NASMD has been utilized to enhance understanding of land-atmosphere interactions in the U.S. Great Plains.
Real-Time Payload Control and Monitoring on the World Wide Web
NASA Technical Reports Server (NTRS)
Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)
1998-01-01
World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on the Java Development Kit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS) compatible inference engine provides the back-end intelligent data processing capability, while the Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefit.
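The event-driven "publish and subscribe" pattern at the heart of the system decouples data producers from the GUI and processing components. A minimal sketch of the pattern follows (in Python rather than the original Java, with illustrative topic names).

```python
# Minimal publish/subscribe event bus, sketching the inter-process
# communication style described for the payload monitoring system.
# Topic names and handlers are illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self._subs[topic]:     # fan out to every subscriber
            cb(payload)

bus = EventBus()
bus.subscribe("telemetry/temp", lambda v: print("GUI gauge:", v))
bus.subscribe("telemetry/temp", lambda v: print("rule engine sees:", v))
bus.publish("telemetry/temp", 21.7)
```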
Best kept secrets ... First Coast Systems, Inc. (FCS).
Andrew, W F
1991-04-01
The FCS/APaCS system is a viable option for small- to medium-size hospitals (up to 400 beds). The table-driven system takes full advantage of IBM AS/400 computer architecture. A comprehensive application set, provided in an integrated database environment, is adaptable to multi-facility environments. Price/performance appears to be competitive. Commitment to the IBM AS/400 environment assures cost-effective hardware platforms backed by IBM support and resources. As an IBM Health Industry Business Partner, FCS (and its clients) benefits from IBM's well-known commitment to quality and service. Corporate emphasis on user involvement and satisfaction, along with a commitment to quality and service for the APaCS systems, assures clients of "leading edge" capabilities in this evolutionary healthcare delivery environment. FCS/APaCS will be a strong contender in selected marketing environments.
Abductive Equivalential Translation and its application to Natural Language Database Interfacing
NASA Astrophysics Data System (ADS)
Rayner, Manny
1994-05-01
The thesis describes a logical formalization of natural-language database interfacing. We assume the existence of a "natural language engine" capable of mediating between surface linguistic strings and their representations as "literal" logical forms: the focus of interest will be the question of relating "literal" logical forms to representations in terms of primitives meaningful to the underlying database engine. We begin by describing the nature of the problem, and show how a variety of interface functionalities can be considered as instances of a type of formal inference task which we call "Abductive Equivalential Translation" (AET); functionalities which can be reduced to this form include answering questions, responding to commands, reasoning about the completeness of answers, answering meta-questions of type "Do you know...", and generating assertions and questions. In each case, a "linguistic domain theory" (LDT) Γ and an input formula F are given, and the goal is to construct a formula with certain properties which is equivalent to F, given Γ and a set of permitted assumptions. If the LDT is of a certain specified type, whose formulas are either conditional equivalences or Horn-clauses, we show that the AET problem can be reduced to a goal-directed inference method. We present an abstract description of this method, and sketch its realization in Prolog. The relationship between AET and several problems previously discussed in the literature is discussed. In particular, we show how AET can provide a simple and elegant solution to the so-called "Doctor on Board" problem, and in effect allows a "relativization" of the Closed World Assumption. The ideas in the thesis have all been implemented concretely within the SRI CLARE project, using a real projects and payments database. The LDT for the example database is described in detail, and examples of the types of functionality that can be achieved within the example domain are presented.
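For the Horn-clause fragment, the goal-directed inference method the thesis describes is in the spirit of backward chaining. The tiny propositional sketch below conveys that flavor; the clauses and the single permitted assumption are invented, and the real system handles conditional equivalences, first-order terms, and assumption costs.

```python
# Tiny backward chainer over propositional Horn clauses, sketching the
# goal-directed flavor of AET-style inference. Clauses are invented:
# a query is database-answerable if it can be mapped to primitives,
# possibly using a permitted assumption.
RULES = {
    "db_answerable(q)": [["maps_to_primitives(q)"]],
    "maps_to_primitives(q)": [["literal_form(q)", "domain_axioms_apply(q)"]],
}
ASSUMABLE = {"domain_axioms_apply(q)"}   # the set of permitted assumptions
FACTS = {"literal_form(q)"}

def prove(goal):
    """A goal holds if it is a fact, an allowed assumption, or the head
    of a rule whose entire body can be proved."""
    if goal in FACTS or goal in ASSUMABLE:
        return True
    return any(all(prove(sub) for sub in body)
               for body in RULES.get(goal, []))

print(prove("db_answerable(q)"))   # True, using one permitted assumption
```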
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Various intentions make analysis of those digitally stored data desirable, such as quality control or scientific purposes. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the University Clinics of Vienna, General Hospital of Vienna, where two different PDMSs were in use: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical ICU and one neonatal ICU) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: Clinically oriented analysis of the data collected in a PDMS at an ICU was the starting point of the development. After defining the database structure, we established a client-server based database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were done within a few days. The database structure developed by us enables a powerful query concept representing an 'EXPERT QUESTION COMPILER' which may help to answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as there exists no sufficient artifact recognition or data validation software for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be further studied.
Sea-Level Change in the Russian Arctic Since the Last Glacial Maximum
NASA Astrophysics Data System (ADS)
Horton, B.; Baranskaya, A.; Khan, N.; Romanenko, F. A.
2017-12-01
Relative sea-level (RSL) databases that span the Last Glacial Maximum (LGM) to present have been used to infer changes in climate, regional ice sheet variations, the rate and geographic source of meltwater influx, and the rheological structure of the solid Earth. Here, we have produced a quality-controlled RSL database for the Russian Arctic since the LGM. The database contains 394 index points, which locate the position of RSL in time and space, and 244 limiting points, which constrain the minimum or maximum limit of former sea level. In the western part of the Russian Arctic (Barents and White seas), RSL was driven by glacial isostatic adjustment (GIA) due to deglaciation of the Scandinavian ice sheet, which covered the Baltic crystalline shield at the LGM. RSL data from isolation basins show a rapid RSL fall from 80-100 m at 11-12 ka BP to 15-25 m at 4-5 ka BP. In the Arctic islands of Franz Josef Land and Novaya Zemlya, RSL data from dated driftwood in raised beaches show a gradual fall from 25-35 m at 9-10 ka BP to 5-10 m at 3 ka BP. In the Russian plain, situated at the margins of the formerly glaciated Baltic crystalline shield, RSL data from raised beaches and isolation basins show an early Holocene rise from less than -20 m at 9-11 ka BP before falling in the late Holocene, illustrating the complex interplay between ice-equivalent meltwater input and GIA. The Western Siberian Arctic (Yamal and Gydan Peninsulas, Beliy Island and islands of the Kara Sea) was not glaciated at the LGM. Sea-level data from marine and salt-marsh deposits show RSL rise at the beginning of the Holocene to a mid-Holocene highstand of 1-5 m at 5-1 ka BP. A similar but more complex RSL pattern is shown for Eastern Siberia. RSL data from the Laptev Sea shelf show RSL at -40 to -45 m at 11-14 ka BP. RSL data from the Lena Delta and Tiksi region show a highstand from 5 to 1 ka BP. The research is supported by RSF project 17-77-10130.
Construction Activities Prior to Issuance of a PSD Permit with Respect to Begin Actual Construction
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Protein Information Resource: a community resource for expert annotation of protein data
Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy
2001-01-01
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041
Flux-driven simulations of turbulence collapse
Park, G. Y.; Kim, S. S.; Jhang, Hogun; ...
2015-03-12
In this study, using self-consistent three-dimensional nonlinear simulations of tokamak turbulence, we show that an edge transport barrier (ETB) forms naturally due to mean E x B shear feedback through the evolving pressure gradient once input power exceeds a threshold value. The temporal evolution and development of the transition are elucidated. Profiles, turbulence-driven flows and neoclassical coefficients are evolved self-consistently. A slow power ramp-up simulation shows that the ETB transition is triggered by the turbulence-driven flows via an intermediate phase which involves coherent oscillation of turbulence intensity and E x B flow shear. A novel observation of the evolution is that the turbulence collapses and the ETB transition begins when R_T > 1 at t = t_R (R_T: normalized Reynolds power), while the conventional transition criterion (ω_ExB > γ_lin) is satisfied only after t = t_C (> t_R), when the mean flow shear grows due to positive feedback.
Coalescence of Fluid-Driven Fractures
NASA Astrophysics Data System (ADS)
O'Keeffe, Niall; Zheng, Zhong; Huppert, Herbert; Linden, Paul
2017-11-01
We present an experimental study on the coalescence of two in-plane fluid-driven penny-shaped fractures in a brittle elastic medium. Initially, two fluid-driven fractures propagate independently of each other in the same plane. Then when the radial extent of each fracture reaches a certain distance the fractures begin to interact and coalesce. This coalescence forms a bridge between the fractures and then, in an intermediate period following the contact of the two fractures, most growth is observed to focus along this bridge, perpendicular to the line connecting the injection sources. We analyse the growth and shape of this bridge at various stages after coalescence and the transitions between different stages of growth. We also investigate the influence of the injection rate, the distance between two injection points, the viscosity of the fluid and the Young's modulus of the elastic medium on the coalescence of the fractures.
NASA Astrophysics Data System (ADS)
Friberg, P. A.; Luis, R. S.; Quintiliani, M.; Lisowski, S.; Hunter, S.
2014-12-01
Recently, a novel set of modules has been included in the Open Source Earthworm seismic data processing system, supporting the use of web applications. These include the Mole sub-system, for storing relevant event data in a MySQL database (see M. Quintiliani and S. Pintore, SRL, 2013), and an embedded web server, Moleserv, for serving such data to web clients in QuakeML format. These modules have enabled, for the first time using Earthworm, the use of web applications for seismic data processing. These can greatly simplify the operation and maintenance of seismic data processing centers by having one or more servers provide the relevant data, as well as the data processing applications themselves, to client machines running arbitrary operating systems. Web applications with secure online web access allow operators to work anywhere, without the often cumbersome and bandwidth-hungry use of secure shell or virtual private networks. Furthermore, web applications can seamlessly access third-party data repositories to acquire additional information, such as maps. Finally, the usage of HTML email brought the possibility of specialized web applications to be used in email clients. This is the case of EWHTMLEmail, which produces event notification emails that are in fact simple web applications for plotting relevant seismic data. Providing web services as part of Earthworm has enabled a number of other tools as well. One is ISTI's EZ Earthworm, a web-based command and control system for an otherwise command-line driven system; another is a waveform web service. The waveform web service serves Earthworm data to additional web clients for plotting, picking, and other web-based processing tools. The current Earthworm waveform web service hosts an advanced plotting capability for providing views of event-based waveforms from a Mole database served by Moleserv. The current trend towards the usage of cloud services supported by web applications is driving improvements in JavaScript, CSS and HTML, as well as faster and more efficient web browsers, including mobile. It is foreseeable that in the near future, web applications will be as powerful and efficient as native applications. Hence the work described here has been the first step towards bringing the Open Source Earthworm seismic data processing system to this new paradigm.
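A client-side sketch of consuming such a service: fetch event data over HTTP and parse the returned QuakeML. The host and endpoint path are placeholders, not the documented Moleserv URL scheme.

```python
# Sketch: query an Earthworm/Moleserv-style web service for events and
# read basic fields from the returned QuakeML. The URL is a placeholder,
# not the documented Moleserv endpoint.
import urllib.request
import xml.etree.ElementTree as ET

URL = "http://ew-server.example.org:8080/quakeml"   # hypothetical endpoint
NS = {"q": "http://quakeml.org/xmlns/bed/1.2"}      # QuakeML 1.2 BED namespace

with urllib.request.urlopen(URL) as resp:
    root = ET.fromstring(resp.read())

for event in root.iter(f"{{{NS['q']}}}event"):
    time = event.findtext("q:origin/q:time/q:value", namespaces=NS)
    mag = event.findtext("q:magnitude/q:mag/q:value", namespaces=NS)
    print(time, mag)
```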
Enlightening the life sciences: the history of halobacterial and microbial rhodopsin research.
Grote, Mathias; O'Malley, Maureen A
2011-11-01
The history of research on microbial rhodopsins offers a novel perspective on the history of the molecular life sciences. Events in this history play important roles in the development of fields such as general microbiology, membrane research, bioenergetics, metagenomics and, very recently, neurobiology. New concepts, techniques, methods and fields have arisen as a result of microbial rhodopsin investigations. In addition, the history of microbial rhodopsins sheds light on the dynamic connections between basic and applied science, and hypothesis-driven and data-driven approaches. The story begins with the late nineteenth century discovery of microorganisms on salted fish and leads into ecological and taxonomical studies of halobacteria in hypersaline environments. These programmes were built on by the discovery of bacteriorhodopsin in organisms that are part of what is now known as the archaeal genus Halobacterium. The transfer of techniques from bacteriorhodopsin studies to the metagenomic discovery of proteorhodopsin in 2000 further extended the field. Microbial rhodopsins have also been used as model systems to understand membrane protein structure and function, and they have become the target of technological applications such as optogenetics and nanotechnology. Analysing the connections between these historical episodes provides a rich example of how science works over longer time periods, especially with regard to the transfer of materials, methods and concepts between different research fields. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázquez, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content like text, voice and images, whereas bimodal databases allow two different types of content, like audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.
The National Information Infrastructure: Agenda for Action.
ERIC Educational Resources Information Center
Department of Commerce, Washington, DC. Information Infrastructure Task Force.
The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…
A data-driven wavelet-based approach for generating jumping loads
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Guo; Racic, Vitomir
2018-06-01
This paper suggests an approach to generate human jumping loads using wavelet transform and a database of individual jumping force records. A total of 970 individual jumping force records of various frequencies were first collected by three experiments from 147 test subjects. For each record, every jumping pulse was extracted and decomposed into seven levels by wavelet transform. All the decomposition coefficients were stored in an information database. Probability distributions of jumping cycle period, contact ratio and energy of the jumping pulse were statistically analyzed. Inspired by the theory of DNA recombination, an approach was developed by interchanging the wavelet coefficients between different jumping pulses. To generate a jumping force time history with N pulses, wavelet coefficients were first selected randomly from the database at each level. They were then used to reconstruct N pulses by the inverse wavelet transform. Jumping cycle periods and contact ratios were then generated randomly based on their probabilistic functions. These parameters were assigned to each of the N pulses, which were in turn scaled by the amplitude factors βi to account for the energy relationship between successive pulses. The final jumping force time history was obtained by linking all the N cycles end to end. This simulation approach can preserve the non-stationary features of the jumping load in the time-frequency domain. Application indicates that this approach can be used to generate jumping force time histories due to a single person jumping, and can be extended further to stochastic jumping loads due to groups and crowds.
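The recombination step can be sketched with PyWavelets: decompose two pulses, draw each level's coefficients from one parent or the other, and invert the transform. The wavelet choice, decomposition depth, and the synthetic stand-in pulses below are illustrative rather than the paper's exact settings (the study used seven levels on measured records).

```python
# Sketch of DNA-recombination-style pulse synthesis with wavelets:
# swap decomposition coefficients between two jumping-force pulses,
# then invert the transform. Wavelet/level choices are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
pulse_a = np.maximum(0, np.sin(np.pi * t)) ** 2      # stand-ins for
pulse_b = np.maximum(0, np.sin(np.pi * t)) ** 1.5    # measured pulses

coeffs_a = pywt.wavedec(pulse_a, "db4", level=5)
coeffs_b = pywt.wavedec(pulse_b, "db4", level=5)

# Randomly take each level's coefficients from parent A or parent B.
child = [ca if rng.random() < 0.5 else cb
         for ca, cb in zip(coeffs_a, coeffs_b)]
new_pulse = pywt.waverec(child, "db4")
print(new_pulse.shape)   # a new, statistically similar pulse
```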
Development strategies for the satellite flight software on-board Meteosat Third Generation
NASA Astrophysics Data System (ADS)
Tipaldi, Massimo; Legendre, Cedric; Koopmann, Olliver; Ferraguto, Massimo; Wenker, Ralf; D'Angelo, Gianni
2018-04-01
Nowadays, satellites are becoming increasingly software dependent. Satellite Flight Software (FSW), that is to say, the application software running on the satellite main On-Board Computer (OBC), plays a relevant role in implementing complex space mission requirements. In this paper, we examine relevant technical approaches and programmatic strategies adopted for the development of the Meteosat Third Generation Satellite (MTG) FSW. To begin with, we present its layered model-based architecture, and the means for ensuring a robust and reliable interaction among the FSW components. Then, we focus on the selection of an effective software development life cycle model. In particular, by combining plan-driven and agile approaches, we can fulfill the need for preliminary SW versions, which can be used for the elicitation of complex system-level requirements as well as for the initial satellite integration and testing activities. Testing is another important aspect, since very demanding quality requirements have to be fulfilled in satellite SW applications. This manuscript proposes a test automation framework, which uses an XML-based test procedure language independent of the underlying test environment. Finally, a short overview of the MTG FSW sizing and timing budgets concludes the paper.
5 CFR 534.406 - Conversion to the SES pay system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... to the SES pay system. (a) On the first day of the first applicable pay period beginning on or after... rate of basic pay that is equal to the employee's rate of basic pay, plus any applicable locality-based... first day of the first applicable pay period beginning on or after January 1, 2004. If an SES member's...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 12 2010-04-01 2010-04-01 false Common parent agent for subsidiaries applicable... TAXES Regulations Applicable to Taxable Years Beginning Before June 28, 2002 § 1.1502-77A Common parent...) Scope of agency of common parent corporation. The common parent, for all purposes (other than the making...
Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The BLSA method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by some stability metrics to achieve robustness. Through the application of the BLSA method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Metrics-driven adaptive control is evaluated for a second-order system that represents the pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the analysis time-window for BLSA is also evaluated in order to meet the stability margin criteria.
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
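CouchDB exposes design-document views over plain HTTP, which is what makes a one-page client like this feasible. A sketch of querying such a view follows; the view name "genes" is a hypothetical stand-in, and the server in the abstract's URL may no longer be reachable.

```python
# Sketch: query a CouchDB design-document view over HTTP and read the
# returned JSON rows. The host follows the abstract's URL; the view
# name "genes" is a hypothetical stand-in.
import json
import urllib.request

url = ("http://hpclife.th-wildau.de:5984/rpc"
       "/_design/rpc/_view/genes?limit=5")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for row in data.get("rows", []):
    print(row["id"], row.get("key"))
```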
Adopting a corporate perspective on databases. Improving support for research and decision making.
Meistrell, M; Schlehuber, C
1996-03-01
The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
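For a flavor of the model-based statistical features involved: natural scene statistics work typically starts from mean-subtracted, contrast-normalized (MSCN) coefficients, whose distribution is strikingly regular for pristine images. The sketch below shows the generic MSCN computation from the NSS literature, not necessarily the paper's exact feature set.

```python
# Generic NSS preprocessing: mean-subtracted contrast-normalized (MSCN)
# coefficients, the usual starting point for blind quality features.
# This illustrates the family of features, not the paper's exact set.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6, eps=1.0):
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                         # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0))                    # local std
    return (image - mu) / (sigma_map + eps)

rng = np.random.default_rng(1)
img = rng.random((64, 64)) * 255
coeffs = mscn(img)
print(coeffs.mean(), coeffs.std())   # near-zero mean, unit-ish spread
```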
A simple method for serving Web hypermaps with dynamic database drill-down
Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R
2002-01-01
Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should be also possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems. PMID:12437788
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves.
Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek
2015-07-21
There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design.
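The two-moving-averages idea marks candidate wave regions where a short-window average of the rectified signal exceeds a long-window average. A schematic sketch follows; the window lengths and the toy signal are illustrative, and the published detector adds the clinically calibrated dynamic thresholds described above.

```python
# Schematic two-moving-average detector: mark candidate wave regions
# where the short-window average exceeds the long-window average.
# Window sizes and the toy signal are illustrative only.
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

fs = 250                                   # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
ecg_like = (np.sin(2 * np.pi * 1.2 * t) ** 20      # sharp QRS-like spikes
            + 0.3 * np.sin(2 * np.pi * 0.6 * t) ** 4)   # broad T-like waves

feature = np.abs(ecg_like)
ma_short = moving_average(feature, int(0.06 * fs))   # ~ wave width
ma_long = moving_average(feature, int(0.30 * fs))    # ~ local activity

blocks = ma_short > ma_long                # candidate "blocks of interest"
onsets = np.flatnonzero(np.diff(blocks.astype(int)) == 1)
print("candidate wave onsets (samples):", onsets)
```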
GIS/KEY™ ENVIRONMENTAL DATA MANAGEMENT SYSTEM - INNOVATIVE TECHNOLOGY EVALUATION REPORT
GIS/Key™ is a comprehensive environmental database management system that integrates site data and graphics, enabling the user to create geologic cross-sections; boring logs; potentiometric, isopleth, and structure maps; summary tables; and hydrographs. GIS/Key™ is menu-driven an...
ERIC Educational Resources Information Center
Stevenson, Joseph Martin; Payne, Alfredda Hunt
2016-01-01
This chapter describes how data analysis and data-driven decision making were critical for designing, developing, and assessing a new academic program. The authors--one, the program's founder; the other, an alumna--begin by highlighting some of the elements in the program's incubation and, subsequently, describe some of the components for data…
Creolizing Educational Practices
ERIC Educational Resources Information Center
Gordon, Jane Anna
2018-01-01
Author Jane Anna Gordon begins this commentary by saying that early in her academic career she was struck by the dual character of schools as places that can damage and waste the human potential of some on one hand, and that can and should be put in the service of liberation on the other. She writes that this point was driven home to her through…
Nanotechnology in a Globalized World: Strategic Assessments of an Emerging Technology
2014-06-01
neoclassical microeconomic tenets underpinning the market-driven, laissez-faire view. These theories argue that technology, rather than capital and foreign... ...nanotechnology with respect to U.S. national security and leadership and means for managing them, the report begins with an examination of some of nanotech's
Restoring natural fire regimes to the Sierra Nevada in an era of global change
Jon E. Keeley; Nathan L. Stephenson
2000-01-01
A conceptual model of fire and forest restoration and maintenance is presented. The process must begin with clearly articulated goals and depends upon derivation of science-driven models that describe the natural or desired conditions. Evaluating the extent to which contemporary landscapes depart from the model is a prerequisite to determining the need for restoration...
Is Non-Completion a Failure or a New Beginning? Research Non-Completion from a Student's Perspective
ERIC Educational Resources Information Center
McCormack, Coralie
2005-01-01
Today's performance-driven model of higher degree research has constructed student withdrawal and non-completion as failure. This failure is often internalized by the student as their own failure. This paper draws on a longitudinal study that examined the experiences of four female Master's by Research degree students--Anna, Carla, Grace and…
ERIC Educational Resources Information Center
Mutchler, Sue E.; Pollard, Joyce S.
As they work to develop integrated, community-driven service systems that meet the constellation of needs of children and families, several states are beginning to develop new governance structures at the local level. This paper describes the ways in which states are creating or supporting linkages among education, health, and human services. A…
ERIC Educational Resources Information Center
Sato, Eriko; Chen, Julian Cheng Chiang; Jourdain, Sarah
2017-01-01
The development of distance learning courses for less commonly taught languages (LCTLs) often meets with instructional challenges, especially for Asian LCTLs with their distinct non-Roman characters and structures. This study documents the implementation of a fully online, elementary Japanese course at Stony Brook University. The curriculum was…
ERIC Educational Resources Information Center
Olowe, Peter Kayode; Kutelu, Bukola Olaronke
2014-01-01
Children of the present age are born into the world that is highly driven by Information and Communication Technology (ICT). They begin to manipulate ICT materials as soon as they grow old enough to manipulate things. There is need therefore to provide ICT-learning experiences that can aid their holistic development. To do this, early childhood…
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
NASA Astrophysics Data System (ADS)
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
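The surrogate-modeling workflow described, running a physically-based simulator many times and then training a learner on the resulting input-output table, can be sketched with scikit-learn. The input variables and the stand-in simulator below are invented; in the study the database came from HYDRUS-1D overland-flow runs.

```python
# Sketch of training a data-driven surrogate on simulator output.
# The toy "simulator" and its inputs stand in for the HYDRUS-1D
# overland-flow runs that built the study's database.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n = 2000
X = rng.random((n, 3))    # columns: rainfall, slope, roughness (invented)

def toy_simulator(x):
    rain, slope, rough = x
    return rain * (0.2 + slope) / (0.5 + rough)   # stand-in runoff response

y = np.apply_along_axis(toy_simulator, 1, X)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X[:1500], y[:1500])
print("held-out R^2:", model.score(X[1500:], y[1500:]))
```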
John F. Caratti
2006-01-01
The FIREMON database software allows users to enter, store, analyze, and summarize plot data, photos, and related documents. The FIREMON database software consists of a Java application and a Microsoft® Access database. The Java application provides the user interface with FIREMON data through data entry forms, data summary reports, and other data management tools...
3D visualization of molecular structures in the MOGADOC database
NASA Astrophysics Data System (ADS)
Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen
2010-08-01
The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.
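Converting a structure from internal coordinates (bond length, bond angle, dihedral) to the Cartesian coordinates needed for 3D display follows a standard geometric construction. Below is a minimal sketch placing one atom; the example geometry is illustrative and not taken from MOGADOC.

```python
# Sketch: place an atom D given internal coordinates relative to three
# previously placed atoms A, B, C: bond length r to C, bond angle theta
# at C (with B), dihedral phi about the B-C axis. Standard construction;
# the example geometry is illustrative.
import numpy as np

def place_atom(a, b, c, r, theta, phi):
    """Return the Cartesian position of the new atom D."""
    bc = (c - b) / np.linalg.norm(c - b)       # unit vector along B->C
    n = np.cross(b - a, bc)
    n /= np.linalg.norm(n)                     # normal of the A-B-C plane
    m = np.cross(n, bc)                        # completes the local frame
    d = np.array([-r * np.cos(theta),
                  r * np.sin(theta) * np.cos(phi),
                  r * np.sin(theta) * np.sin(phi)])
    return c + d[0] * bc + d[1] * m + d[2] * n

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([1.5, 1.0, 0.0])
print(place_atom(a, b, c, r=1.0,
                 theta=np.radians(109.5), phi=np.radians(60)))
```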
[Data validation methods and discussion on Chinese materia medica resource survey].
Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing
2013-07-01
Since the beginning of the fourth national survey of the Chinese materia medica resources, 22 provinces have conducted pilots. The survey teams have reported immense amounts of data, which places very high demands on the database system construction. In order to ensure quality, it is necessary to check and validate the data in the database system. Data validation is an important method to ensure the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the fourth national survey of the Chinese materia medica resources database system, and further improves the design ideas and programs of data validation. The purpose of this study is to promote the smooth progress of the survey work.
Design of pressure-driven microfluidic networks using electric circuit analogy.
Oh, Kwang W; Lee, Kangsun; Ahn, Byungwook; Furlani, Edward P
2012-02-07
This article reviews the application of electric circuit methods for the analysis of pressure-driven microfluidic networks with an emphasis on concentration- and flow-dependent systems. The application of circuit methods to microfluidics is based on the analogous behaviour of hydraulic and electric circuits with correlations of pressure to voltage, volumetric flow rate to current, and hydraulic to electric resistance. Circuit analysis enables rapid predictions of pressure-driven laminar flow in microchannels and is very useful for designing complex microfluidic networks in advance of fabrication. This article provides a comprehensive overview of the physics of pressure-driven laminar flow, the formal analogy between electric and hydraulic circuits, applications of circuit theory to microfluidic network-based devices, recent development and applications of concentration- and flow-dependent microfluidic networks, and promising future applications. The lab-on-a-chip (LOC) and microfluidics community will gain insightful ideas and practical design strategies for developing unique microfluidic network-based devices to address a broad range of biological, chemical, pharmaceutical, and other scientific and technical challenges.
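The analogy maps directly to code: compute each channel's hydraulic resistance from the Hagen-Poiseuille law, then combine resistances exactly as for electric resistors. The channel dimensions and viscosity below are arbitrary illustration values.

```python
# Hydraulic-electric analogy in practice: Q = dP / R_h, with channel
# resistances combined like resistors. Dimensions and viscosity are
# arbitrary illustration values.
import math

def r_circular(mu, L, radius):
    """Hagen-Poiseuille resistance of a circular channel [Pa*s/m^3]."""
    return 8 * mu * L / (math.pi * radius ** 4)

def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1 / sum(1 / r for r in rs)

mu = 1e-3                                # water, Pa*s
r1 = r_circular(mu, L=0.01, radius=50e-6)
r2 = r_circular(mu, L=0.02, radius=50e-6)
r3 = r_circular(mu, L=0.01, radius=25e-6)

network = series(r1, parallel(r2, r3))   # inlet channel feeding a fork
dP = 10_000                              # applied pressure drop, Pa
print("flow rate Q =", dP / network, "m^3/s")
```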
Seventy-five years of vegetation treatments on public rangelands in the Great Basin of North America
Pilliod, David S.; Welty, Justin; Toevs, Gordon R.
2017-01-01
On the Ground: Land treatments occurring over millions of hectares of public rangelands in the Great Basin over the last 75 years represent one of the largest vegetation manipulation and restoration efforts in the world. The ability to use legacy data from land treatments in adaptive management and ecological research has improved with the creation of the Land Treatment Digital Library (LTDL), a spatially explicit database of land treatments conducted by the U.S. Bureau of Land Management. The LTDL contains information on over 9,000 confirmed land treatments in the Great Basin, composed of seedings (58%), vegetation control treatments (24%), and other types of vegetation or soil manipulations (18%). The potential application of land treatment legacy data for adaptive management or as natural experiments for retrospective analyses of effects of land management actions on physical, hydrologic, and ecologic patterns and processes is considerable and just beginning to be realized.
Applying ecological and evolutionary theory to cancer: a long and winding road.
Thomas, Frédéric; Fisher, Daniel; Fort, Philippe; Marie, Jean-Pierre; Daoust, Simon; Roche, Benjamin; Grunau, Christoph; Cosseau, Céline; Mitta, Guillaume; Baghdiguian, Stephen; Rousset, François; Lassus, Patrice; Assenat, Eric; Grégoire, Damien; Missé, Dorothée; Lorz, Alexander; Billy, Frédérique; Vainchenker, William; Delhommeau, François; Koscielny, Serge; Itzykson, Raphael; Tang, Ruoping; Fava, Fanny; Ballesta, Annabelle; Lepoutre, Thomas; Krasinska, Liliana; Dulic, Vjekoslav; Raynaud, Peggy; Blache, Philippe; Quittau-Prevostel, Corinne; Vignal, Emmanuel; Trauchessec, Hélène; Perthame, Benoit; Clairambault, Jean; Volpert, Vitali; Solary, Eric; Hibner, Urszula; Hochberg, Michael E
2013-01-01
Since the mid-1970s, cancer has been described as a process of Darwinian evolution, with somatic cellular selection and evolution being the fundamental processes leading to malignancy and its many manifestations (neoangiogenesis, evasion of the immune system, metastasis, and resistance to therapies). Historically, little attention has been paid to applications of evolutionary biology to understanding and controlling neoplastic progression and to preventing therapeutic failures. This is now beginning to change, and there is a growing international interest in the interface between cancer and evolutionary biology. The objective of this introduction is first to describe the basic ideas and concepts linking evolutionary biology to cancer. We then present four major fronts where the evolutionary perspective is most developed, namely laboratory and clinical models, mathematical models, databases, and techniques and assays. Finally, we discuss several of the most promising challenges and future prospects in this interdisciplinary research direction in the war against cancer.
Application of adult attachment theory to group member transference and the group therapy process.
Markin, Rayna D; Marmarosh, Cheri
2010-03-01
Although clinical researchers have applied attachment theory to client conceptualization and treatment in individual therapy, few researchers have applied this theory to group therapy. The purpose of this article is to begin to apply theory and research on adult dyadic and group attachment styles to our understanding of group dynamics and processes in adult therapy groups. In particular, we set forth theoretical propositions on how group members' attachment styles affect relationships within the group. Specifically, this article offers some predictions on how identifying group member dyadic and group attachment styles could help leaders predict member transference within the therapy group. Implications of group member attachment for the selection and composition of a group and the different group stages are discussed. Recommendations for group clinicians and researchers are offered. PsycINFO Database Record (c) 2010 APA, all rights reserved
Non-Invasive Mechanical Ventilation in Critically Ill Trauma Patients: A Systematic Review
Yıldırım, Fatma; Ferrari, Giovanni; Antonelli, Andrea; Delis, Pablo Bayoumy; Gündüz, Murat; Karcz, Marcin; Papadakos, Peter; Cosentini, Roberto; Dikmen, Yalım; Esquinas, Antonio M.
2018-01-01
There is limited literature on non-invasive mechanical ventilation (NIMV) in patients with polytrauma-related acute respiratory failure (ARF). Despite an increasing worldwide application, there is still scarce evidence of significant NIMV benefits in this specific setting, and no clear recommendations are provided. We performed a systematic review, and a search of clinical databases including MEDLINE and EMBASE was conducted from the beginning of 1990 until today. Although the benefits in reducing the intubation rate, morbidity and mortality are unclear, NIMV may be useful and does not appear to be associated with harm when applied in properly selected patients with moderate ARF at an earlier stage of injury by experienced teams and in appropriate settings under strict monitoring. In the presence of these criteria, NIMV is worth attempting, but only if endotracheal intubation is promptly available because non-responders to NIMV are burdened by an increased mortality when intubation is delayed. PMID:29744242
GenBank
Benson, Dennis A.; Karsch-Mizrachi, Ilene; Lipman, David J.; Ostell, James; Wheeler, David L.
2008-01-01
GenBank (R) is a comprehensive database that contains publicly available nucleotide sequences for more than 260 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the European Molecular Biology Laboratory Nucleotide Sequence Database in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage: www.ncbi.nlm.nih.gov. PMID:18073190
Stabilizing effect of helical current drive on tearing modes
NASA Astrophysics Data System (ADS)
Yuan, Y.; Lu, X. Q.; Dong, J. Q.; Gong, X. Y.; Zhang, R. B.
2018-01-01
The effect of helical driven current on the m = 2/n = 1 tearing mode is studied numerically in a cylindrical geometry using reduced magneto-hydro-dynamic simulation. The results show that a persistent local helical current drive applied from the outset can control the tearing modes, but causes a rebound effect, called flip instability, when the driven current reaches a certain value. The current intensity threshold for the occurrence of flip instability is about 0.00087 I0. A comparatively economical method of controlling the development of the tearing mode is given: if the local helical driven current is applied intermittently, the magnetic island can be kept within a certain range and the tearing modes stop growing, so the flip instability is avoided. We also find that the flip instability sets in more readily when injection of the driven current is delayed, because high-order harmonics have already developed at the original O-point. The tearing mode instability can be controlled by using electron cyclotron current drive to reduce the gradient of the current intensity on the rational surfaces.
The New Library, A Hybrid Organization.
ERIC Educational Resources Information Center
Waaijers, Leo
This paper discusses changes in technology in libraries over the last decade, beginning with an overview of the impact of databases, the Internet, and the World Wide Web on libraries. The integration of technology at Delft University of Technology (Netherlands) is described, including use of scanning technology, fax, and e-mail for document…
Student Learning in Higher Education: A Commentary
ERIC Educational Resources Information Center
Richardson, John T. E.
2017-01-01
This commentary begins by summarizing the five contributions to this special issue and briefly recapping the background to the topic of student learning in higher education. Narrative and systematic reviews are compared, and the relative value of different bibliographic databases in the context of systematic reviews is assessed. The importance of…
The Implications of Well-Formedness on Web-Based Educational Resources.
ERIC Educational Resources Information Center
Mohler, James L.
Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…
Application-Driven Educational Game to Assist Young Children in Learning English Vocabulary
ERIC Educational Resources Information Center
Chen, Zhi-Hong; Lee, Shu-Yu
2018-01-01
This paper describes the development of an educational game, named My-Pet-Shop, to enhance young children's learning of English vocabulary. The educational game is underpinned by an application-driven model, which consists of three components: application scenario, subject learning, and learning regulation. An empirical study is further conducted…
Hydrogen Leak Detection Sensor Database
NASA Technical Reports Server (NTRS)
Baker, Barton D.
2010-01-01
This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.
GenBank
Benson, Dennis A.; Karsch-Mizrachi, Ilene; Lipman, David J.; Ostell, James; Wheeler, David L.
2007-01-01
GenBank (R) is a comprehensive database that contains publicly available nucleotide sequences for more than 240 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the EMBL Data Library in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage (www.ncbi.nlm.nih.gov). PMID:17202161
ICG: a wiki-driven knowledgebase of internal control genes for RT-qPCR normalization.
Sang, Jian; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Xia, Lin; Zou, Dong; Wang, Fan; Xu, Xingjian; Han, Xiaojiao; Fan, Jinqi; Yang, Ye; Zuo, Wanzhu; Zhang, Yang; Zhao, Wenming; Bao, Yiming; Xiao, Jingfa; Hu, Songnian; Hao, Lili; Zhang, Zhang
2018-01-04
Real-time quantitative PCR (RT-qPCR) has become a widely used method for accurate expression profiling of targeted mRNA and ncRNA. Selection of appropriate internal control genes for RT-qPCR normalization is an elementary prerequisite for reliable expression measurement. Here, we present ICG (http://icg.big.ac.cn), a wiki-driven knowledgebase for community curation of experimentally validated internal control genes as well as their associated experimental conditions. Unlike extant related databases that focus on qPCR primers in model organisms (mainly human and mouse), ICG features harnessing collective intelligence in community integration of internal control genes for a variety of species. Specifically, it integrates a comprehensive collection of more than 750 internal control genes for 73 animals, 115 plants, 12 fungi and 9 bacteria, and incorporates detailed information on recommended application scenarios corresponding to specific experimental conditions, which, collectively, are of great help for researchers to adopt appropriate internal control genes for their own experiments. Taken together, ICG serves as a publicly editable and open-content encyclopaedia of internal control genes and accordingly bears broad utility for reliable RT-qPCR normalization and gene expression characterization in both model and non-model organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Automatic labeling and characterization of objects using artificial neural networks
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed and populated in a tedious, error-prone and self-limiting way in terms of what can be described in a relational Data Base Management System (DBMS). The next generation of Earth remote sensing platforms, i.e., the Earth Observing System (EOS), will be capable of generating data at a rate of over 300 megabits per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven knowledge-based scientific information system.
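The pipeline this abstract describes can be illustrated with a toy sketch: a small neural classifier labels pixel feature vectors, and the labeled objects are appended to an object store. The spectral bands, classes and training data below are invented for illustration and are not the paper's actual network or database.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["water", "vegetation", "urban"]

# Fake 4-band training spectra per class, clustered around distinct means.
means = np.array([[0.1, 0.2, 0.1, 0.05], [0.3, 0.5, 0.4, 0.7], [0.6, 0.5, 0.5, 0.4]])
X = np.vstack([m + 0.03 * rng.standard_normal((50, 4)) for m in means])
y = np.repeat(np.arange(3), 50)

# One-layer softmax network trained with plain gradient descent.
W = np.zeros((4, 3))
for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = X.T @ (p - np.eye(3)[y]) / len(X)
    W -= 0.5 * grad

object_db = []  # stand-in for the object-oriented database
pixel = np.array([0.58, 0.52, 0.49, 0.42])
label = CLASSES[int(np.argmax(pixel @ W))]
object_db.append({"class": label, "features": pixel.tolist()})
print(object_db)
```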
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
DOT National Transportation Integrated Search
2014-01-01
The Maine Department of Transportation (MaineDOT) has noted poor correlation between predicted pile resistances calculated using commonly accepted design methods and measured pile resistance from dynamic pile load tests (also referred to as high ...
Bar-Code System for a Microbiological Laboratory
NASA Technical Reports Server (NTRS)
Law, Jennifer; Kirschner, Larry
2007-01-01
A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.
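A minimal sketch of what such custom database software might look like behind the menus: the scanned bar code serves as the key into a sample table. The field names and sample codes are illustrative assumptions, not the system's actual design.

```python
# Bar-code-keyed sample tracking; a menu loop would dispatch to these
# functions in the real interface.

samples = {}  # bar code -> sample record

def register(barcode, organism, analyst):
    samples[barcode] = {"organism": organism, "analyst": analyst, "results": []}

def record_result(barcode, result):
    samples[barcode]["results"].append(result)

def lookup(barcode):
    return samples.get(barcode, "unknown bar code")

register("LAB-000123", "B. subtilis", "J. Law")
record_result("LAB-000123", "colony count: 42 CFU/mL")
print(lookup("LAB-000123"))
```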
DOT National Transportation Integrated Search
2010-06-01
The New Jersey Crash Record Geocoding Initiative was designed as a provisional measure to address missing crash locations. The purpose of the initiative was twofold. Primarily, students worked to locate crashes that had no location information after ...
Modeling of developmental toxicology presents a significant challenge to computational toxicology due to endpoint complexity and lack of data coverage. These challenges largely account for the relatively few modeling successes using the structure–activity relationship (SAR) parad...
Database-driven web interface automating gyrokinetic simulations for validation
NASA Astrophysics Data System (ADS)
Ernst, D. R.
2010-11-01
We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSplus over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized website has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
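The data-driven pattern described here can be sketched in a few lines: the automated sequence and its transition conditions live in a data file that an engine interprets at run time, so sequences can be reconfigured without recompiling the software. All mode names, parameters and thresholds below are hypothetical, not Orion's actual configuration schema.

```python
import json

CONFIG = json.loads("""
{
  "modes": {
    "coast":     {"guidance": "inertial_hold", "next": "burn_prep",
                  "transition_when": {"time_to_burn_s": 300}},
    "burn_prep": {"guidance": "align_to_burn", "next": "burn",
                  "transition_when": {"attitude_error_deg": 0.5}},
    "burn":      {"guidance": "powered_flight", "next": null,
                  "transition_when": {}}
  },
  "initial_mode": "coast"
}
""")

def step(mode_name, telemetry):
    """Advance to the configured next mode if its transition condition is met."""
    mode = CONFIG["modes"][mode_name]
    met = all(telemetry.get(key, float("inf")) <= limit
              for key, limit in mode["transition_when"].items())
    return mode["next"] if mode["next"] and met else mode_name

mode = CONFIG["initial_mode"]
mode = step(mode, {"time_to_burn_s": 250})  # condition met -> "burn_prep"
print(mode)
```

Editing the JSON changes the sequencing behaviour with no change to the engine code, which is the essence of the approach the abstract describes.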
Accommodation Responds to Optical Vergence and Not Defocus Blur Alone.
Del Águila-Carrasco, Antonio J; Marín-Franch, Iván; Bernal-Molina, Paula; Esteve-Taboada, José J; Kruger, Philip B; Montés-Micó, Robert; López-Gil, Norberto
2017-03-01
To determine whether changes in wavefront spherical curvature (optical vergence) are a directional cue for accommodation. Nine subjects participated in this experiment. The accommodation response to a monochromatic target was measured continuously with a custom-made adaptive optics system while astigmatism and higher-order aberrations were corrected in real time. There were two experimental open-loop conditions: vergence-driven condition, where the deformable mirror provided sinusoidal changes in defocus at the retina between -1 and +1 diopters (D) at 0.2 Hz; and blur-driven condition, in which the level of defocus at the retina was always 0 D, but a sinusoidal defocus blur between -1 and +1 D at 0.2 Hz was simulated in the target. Right before the beginning of each trial, the target was moved to an accommodative demand of 2 D. Eight out of nine subjects showed sinusoidal responses for the vergence-driven condition but not for the blur-driven condition. Their average (±SD) gain for the vergence-driven condition was 0.50 (±0.28). For the blur-driven condition, average gain was much smaller at 0.07 (±0.03). The ninth subject showed little to no response for both conditions, with average gain <0.08. Vergence-driven condition gain was significantly different from blur-driven condition gain (P = 0.004). Accommodation responds to optical vergence, even without feedback, and not to changes in defocus blur alone. These results suggest the presence of a retinal mechanism that provides a directional cue for accommodation from optical vergence.
The development of variable MLM editor and TSQL translator based on Arden Syntax in Taiwan.
Liang, Yan Ching; Chang, Polun
2003-01-01
The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade, but it has never been used in nursing in Taiwan. We developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Decision-support systems in medicine are data-driven systems in which TSQL triggers, acting as an inference engine, can be used to facilitate linking to a database.
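A hedged sketch of the translation idea: an MLM-like rule with data and logic slots is rendered into a TSQL trigger, so the database itself fires the inference. The slot structure and the table and column names are hypothetical, not the authors' implementation.

```python
def mlm_to_tsql(rule):
    """Render a simplified MLM rule (data slot + logic slot) as a TSQL trigger."""
    return f"""
CREATE TRIGGER {rule['name']}
ON {rule['data_table']}
AFTER INSERT
AS
BEGIN
    -- logic slot becomes the trigger's firing condition
    IF EXISTS (SELECT 1 FROM inserted WHERE {rule['logic']})
        INSERT INTO alerts (patient_id, message)
        SELECT patient_id, '{rule['message']}' FROM inserted
        WHERE {rule['logic']};
END
""".strip()

creatinine_rule = {
    "name": "trg_high_creatinine",
    "data_table": "lab_results",
    "logic": "test_code = 'CREAT' AND value > 1.5",
    "message": "Elevated serum creatinine",
}
print(mlm_to_tsql(creatinine_rule))
```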
NASA Astrophysics Data System (ADS)
Abdullah, Oday I.; Schlattmann, Josef; Senatore, Adolfo; Al-Shabibi, Abdullah M.
2018-05-01
The designers of friction clutch systems in vehicular applications should always take into account a number of essential criteria. The friction clutch should be able to transfer the torque from the driving shaft to the driven one within a short time and with a minimum of shocks and vibrations, to make the engagement (and disengagement) as gentle as possible. Furthermore, it is well known that high surface temperatures occur at the beginning of the engagement period due to slipping between the contacting elements of the friction clutch system, with ensuing heat generation. The transient thermoelastic problem of multi-disc systems has been deeply investigated by many scientists and researchers using numerical techniques such as the finite element method. In this analysis, the influence of the sliding speed on the thermoelastic behavior was studied for the case in which the initial heat generated is constant. For this purpose, axisymmetric finite element models were developed and used in the simulations presented in the paper.
Auditory Neuroimaging with fMRI and PET
Talavage, Thomas M.; Gonzalez-Castillo, Javier; Scott, Sophie K.
2013-01-01
For much of the past 30 years, investigations of auditory perception and language have been enhanced or even driven by the use of functional neuroimaging techniques that specialize in localization of central responses. Beginning with investigations using positron emission tomography (PET) and gradually shifting primarily to usage of functional magnetic resonance imaging (fMRI), auditory neuroimaging has greatly advanced our understanding of the organization and response properties of brain regions critical to the perception of and communication with the acoustic world in which we live. As the complexity of the questions being addressed has increased, the techniques, experiments and analyses applied have also become more nuanced and specialized. A brief review of the history of these investigations sets the stage for an overview and analysis of how these neuroimaging modalities are becoming ever more effective tools for understanding the auditory brain. We conclude with a brief discussion of open methodological issues as well as potential clinical applications for auditory neuroimaging. PMID:24076424
Targeting cancer metabolism: dietary and pharmacological interventions
Vernieri, Claudio; Casola, Stefano; Foiani, Marco; Pietrantonio, Filippo; de Braud, Filippo; Longo, Valter
2016-01-01
Most tumors display oncogene-driven reprogramming of several metabolic pathways, which are crucial to sustain their growth and proliferation. In recent years, both dietary and pharmacological approaches that target deregulated tumor metabolism are beginning to be considered for clinical applications. Dietary interventions exploit the ability of nutrient-restricted conditions to exert broad biological effects, protecting normal cells, organs and systems, while sensitizing a wide variety of cancer cells to cytotoxic therapies. On the other hand, drugs targeting enzymes or metabolites of crucial metabolic pathways can be highly specific and effective, but must be matched with a responsive tumor, which might rapidly adapt. In this Review, we illustrate how dietary and pharmacological therapies differ in their effect on tumor growth, proliferation and metabolism, and discuss the available preclinical and clinical evidence in favor or against each of them. We also indicate, when appropriate, how to optimize future investigations on metabolic therapies on the basis of tumor- and patient-related characteristics. PMID:27872127
The new geographic information system in ETVA VI.PE.
NASA Astrophysics Data System (ADS)
Xagoraris, Zafiris; Soulis, George
2016-08-01
ETVA VI.PE. S.A. is a member of the Piraeus Bank Group of Companies and its activities include designing, developing, exploiting and managing Industrial Areas throughout Greece. Inside ETVA VI.PE.'s thirty-one Industrial Parks there are currently 2,500 manufacturing companies established, with 40,000 employees and €2.5 billion of invested funds. In each of the industrial areas, ETVA VI.PE. provides the companies with industrial lots of land (sites) with propitious building codes and complete infrastructure networks of water supply, sewerage, paved roads, power supply, communications, cleansing services, etc. The development of the Geographical Information System for ETVA VI.PE.'s Industrial Parks started at the beginning of 1992 and consists of three subsystems: Cadastre, which manages the information for the land acquisition of Industrial Areas; Street Layout - Sites, which manages the sites sold to manufacturing companies; and Networks, which manages the infrastructure networks (roads, water supply, sewerage, etc.). The mapping of each Industrial Park is made incorporating state-of-the-art photogrammetric, cartographic and surveying methods and techniques. Passing through the phases of initial design (hybrid GIS) and system upgrade (integrated GIS solution with spatial database), the system is currently operating on a new upgrade (integrated GIS solution with spatial database) that includes redesigning and merging the system's database schemas, along with the creation of central security policies, and the development of a new web GIS application for advanced data entry, highly customisable and standard reports, and dynamic interactive maps. The new GIS brings the company to advanced levels of productivity and introduces a new era for decision making and business management.
Introducing the Global Fire WEather Database (GFWED)
NASA Astrophysics Data System (ADS)
Field, R. D.
2015-12-01
The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations tended to differ most at low latitudes for strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA Drought Code over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season, whereas it was too low over Southeast Asia during its dry season. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibration of FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/
Life cycle-based water assessment of a hand dishwashing product: opportunities and limitations.
Van Hoof, Gert; Buyle, Bea; Kounina, Anna; Humbert, Sebastien
2013-10-01
It is only recently that life cycle-based indicators have been used to evaluate products from a water use impact perspective. The applicability of some of these methods has been primarily demonstrated on agricultural materials or products, because irrigation requirements in food production can be water-intensive. In view of an increasing interest in life cycle-based water indicators for different products, we ran a study on a hand dishwashing product. A number of water assessment methods were applied with the purpose of identifying both product improvement opportunities and the potential for underlying database and methodological improvements. The study covered the entire life cycle of the product and focused on environmental issues related to water use, looking in-depth at inventory, midpoint, and endpoint methods. "Traditional" water-emission-driven methods, such as freshwater eutrophication, were excluded from the analysis. The use of a single formula with the same global supply chain, manufactured in 1 location, was evaluated in 2 countries with different water scarcity conditions. The study shows differences ranging up to 4 orders of magnitude for indicators with similar units associated with different water use types (inventory methods) and different cause-effect chain models (midpoint and endpoint impact categories). No uncertainty information was available for the impact assessment methods, and uncertainty from stochastic variability was not available at the time of the study. For the majority of the indicators studied, the contribution from the consumer use stage is the most important (>90%), driven by both direct water use (the dishwashing process) and indirect water use (electricity generation to heat the water). Creating consumer awareness of how the product is used, particularly in water-scarce areas, is the largest improvement opportunity for a hand dishwashing product. However, spatial differentiation in the inventory and impact assessment model may lead to very different results for the product used under exactly the same consumer use conditions, making the communication of results a real challenge. From a practitioner's perspective, the data collection step in relation to the goal and scope of the study sets high requirements for both foreground and background data. In particular, databases covering a broad spectrum of inventory data with spatially differentiated water use information are lacking. For some impact methods, it is unknown whether or not characterization factors should be spatially differentiated, which creates uncertainty in their interpretation and applicability. Finally, broad application of life cycle-based water assessment will require further development of commercial life cycle assessment software. © 2013 SETAC.
Schurr, K.M.; Cox, S.E.
1994-01-01
The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.
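The quarter-quarter subdivision is straightforward to sketch: each Public Land Survey System section (nominally one mile square) splits into four quarters, each of which splits again into four, yielding sixteen idealized cells. The coordinate handling below is an illustrative assumption, not the system's actual ARC/INFO code.

```python
def _corners(ox, oy, s):
    """SW corners of the four quadrants of a square of side 2*s at (ox, oy)."""
    return {"SW": (ox, oy), "SE": (ox + s, oy),
            "NW": (ox, oy + s), "NE": (ox + s, oy + s)}

def quarter_quarters(x0, y0, size=1609.34):  # section SW corner; side in metres
    """Return the 16 idealized quarter-quarter cells of one survey section."""
    half, qq = size / 2.0, size / 4.0
    cells = {}
    for q, (qx, qy) in _corners(x0, y0, half).items():       # quarter sections
        for sq, (sx, sy) in _corners(qx, qy, qq).items():    # quarter-quarters
            cells[f"{sq} of {q}"] = (sx, sy, qq)  # SW corner and cell width
    return cells

cells = quarter_quarters(0.0, 0.0)
print(len(cells), cells["NW of SE"])  # 16 cells; one example corner
```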
ERIC Educational Resources Information Center
Wirth, Arthur G.
As the United States and the rest of the world moves into an electronic-driven postindustrial revolution that is replacing the factory industrialism of the beginning of the century, new realities call for change in the workplace and the education system. Declining wages, increased unemployment, and a lowered standard of living have occurred as the…
ERIC Educational Resources Information Center
Naughton, Michael; de la Cruz, Rachelle
2016-01-01
We begin with the argument that if universities are to form and educate future business leaders with a disciplined sensitivity to those who suffer from both material and spiritual poverty, they will be most successful when they draw upon a mission that has a deeper root system than generic values or instrumental rationality. Recognizing that…
78 FR 79412 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-30
... Defense Finance and Accounting Service proposes to alter a system of records, T7205, General Accounting and Finance System--Report Database for Financial Statements, in its inventory of record systems... transaction-driven financial statements in support of Defense Finance and Accounting Service financial mission...
Mobile Phone Health Applications for the Federal Sector.
Burrows, Christin S; Weigel, Fred K
2016-01-01
As the US healthcare system moves toward a mobile care model, mobile phones will play a significant role in the future of healthcare delivery. Today, 90% of American adults own a mobile phone and 64% own a smartphone, yet many healthcare organizations are only beginning to explore the opportunities in which mobile phones can improve and streamline care. After searching Google Scholar, the Association for Computing Machinery Database, and PubMed for articles related to mobile phone health applications and cell phone text message health, we selected articles and studies related to the application of mobile phones in healthcare. From our initial review, we identified the potential application areas and continued to refine our search, identifying a total of 55 articles for additional review and analysis. From the literature, we identified 3 main themes for mobile phone implementation in improving healthcare: primary, preventive, and population health. We recommend federal health leaders pursue the value and potential in these areas; not only because 90% of Americans already own mobile phones, but also because mobile phone integration can provide substantial access and potential cost savings. From the positive findings of multiple studies in primary, preventive, and population health, we propose a 5-year federal implementation plan to integrate mobile phone capabilities into federal healthcare delivery. Our proposal has the potential to improve access, reduce costs, and increase patient satisfaction, therefore changing the way the federal sector delivers healthcare by 2021.
The design of moral education website for college students based on ASP.NET
NASA Astrophysics Data System (ADS)
Sui, Chunling; Du, Ruiqing
2012-01-01
Moral education website offers an available solution to low transmission speed and small influence areas of traditional moral education. The aim of this paper is to illustrate the design of one moral education website and the advantages of using it to help moral teaching. The reason for moral education website was discussed at the beginning of this paper. Development tools were introduced. The system design was illustrated with module design and database design. How to access data in SQL Server database are discussed in details. Finally a conclusion was made based on the discussions in this paper.
2000-05-31
Grey Literature Network Service (Farace, Dominic, 1997) as "that which is produced on all levels of government, academics, business and industry in…". These reports served as the base to begin… "all the world's formal scientific literature is available, on-line, to scientific workers throughout the world, for a world scientific database."
BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
The Steward Observatory asteroid relational database
NASA Technical Reports Server (NTRS)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1992-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, to probe the biases intrinsic to asteroid databases, to ascertain the completeness of data pertaining to specific problems, to aid in the development of observational programs, and to develop pedagogical materials. To date SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids or output files which are suitable for plotting spectral data on individual asteroids. A browse capability allows the user to explore the contents of any data file. SOARD offers, also, an asteroid bibliography containing about 13,000 references. The program has online help as well as user and programmer documentation manuals. SOARD continues to provide data to fulfill requests by members of the astronomical community and will continue to grow as data is added to the database and new features are added to the program.
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., presently the open community version of MySQL and a single-instance Oracle database server. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
SPACEWAY: Providing affordable and versatile communication solutions
NASA Astrophysics Data System (ADS)
Fitzpatrick, E. J.
1995-08-01
By the end of this decade, Hughes' SPACEWAY network will provide the first interactive 'bandwidth on demand' communication services for a variety of applications. High quality digital voice, interactive video, global access to multimedia databases, and transborder workgroup computing will make SPACEWAY an essential component of the computer-based workplace of the 21st century. With relatively few satellites to construct, insure, and launch -- plus extensive use of cost-effective, tightly focused spot beams on the world's most populated areas -- the high capacity SPACEWAY system can pass its significant cost savings onto its customers. The SPACEWAY network is different from other proposed global networks in that its geostationary orbit location makes it a truly market driven system: each satellite will make available extensive telecom services to hundreds of millions of people within the continuous view of that satellite, providing immediate capacity within a specific region of the world.
A century of progress in industrial and organizational psychology: Discoveries and the next century.
Salas, Eduardo; Kozlowski, Steve W J; Chen, Gilad
2017-03-01
In a century of research published in the Journal of Applied Psychology, we have seen significant advances in our science. The results of this science have broad applications to the workplace and implications for improving organizational effectiveness through a variety of avenues. Research has focused on understanding constructs, relationships, and processes at multiple levels, including individual, team, and organizational. A plethora of research methods and questions have driven this work, resulting in a nuanced understanding of what matters in the workplace. In this paper, we synthesize the most salient discoveries, findings, and/or conclusions in 19 domains. We seek to summarize the progress that has been made and highlight the most salient directions for future work such that the next century of research in industrial and organizational psychological science can be as impactful as the first century has been. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Apollo: a community resource for genome annotation editing.
Lee, Ed; Harris, Nomi; Gibson, Mark; Chetty, Raymond; Lewis, Suzanna
2009-07-15
Apollo is a genome annotation-editing tool with an easy to use graphical interface. It is a component of the GMOD project, with ongoing development driven by the community. Recent additions to the software include support for the generic feature format version 3 (GFF3), continuous transcriptome data, a full Chado database interface, integration with remote services for on-the-fly BLAST and Primer BLAST analyses, graphical interfaces for configuring user preferences and full undo of all edit operations. Apollo's user community continues to grow, including its use as an educational tool for college and high-school students. Apollo is a Java application distributed under a free and open source license. Installers for Windows, Linux, Unix, Solaris and Mac OS X are available at http://apollo.berkeleybop.org, and the source code is available from the SourceForge CVS repository at http://gmod.cvs.sourceforge.net/gmod/apollo.
Assessing practice-based learning and improvement.
Lynch, Deirdre C; Swing, Susan R; Horowitz, Sheldon D; Holt, Kathleen; Messer, Joseph V
2004-01-01
Practice-based learning and improvement (PBLI) is 1 of 6 general competencies expected of physicians who graduate from an accredited residency education program in the United States and is an anticipated requirement for those who wish to maintain certification by the member boards of the American Board of Medical Specialties. This article describes methods used to assess PBLI. Six electronic databases were searched using several search terms pertaining to PBLI. The review indicated that 4 assessment methods have been used to assess some or all steps of PBLI: portfolios, projects, patient record and chart review, and performance ratings. Each method is described, examples of application are provided, and validity, reliability, and feasibility characteristics are discussed. Portfolios may be the most useful approach to assess residents' PBLI abilities. Active participation in peer-driven performance improvement initiatives may be a valuable approach to confirm practicing physician involvement in PBLI.
20 CFR 404.332 - When wife's and husband's benefits begin and end.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., father, mother, parent or disabled child. Your benefits will end if you remarry the insured who is not... first month covered by your application in which you meet all the other requirements for entitlement... person becomes entitled, your benefits cannot begin before January 1985 based on an application filed no...
20 CFR 404.332 - When wife's and husband's benefits begin and end.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., father, mother, parent or disabled child. Your benefits will end if you remarry the insured who is not... first month covered by your application in which you meet all the other requirements for entitlement... person becomes entitled, your benefits cannot begin before January 1985 based on an application filed no...
20 CFR 404.332 - When wife's and husband's benefits begin and end.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., father, mother, parent or disabled child. Your benefits will end if you remarry the insured who is not... first month covered by your application in which you meet all the other requirements for entitlement... person becomes entitled, your benefits cannot begin before January 1985 based on an application filed no...
20 CFR 404.332 - When wife's and husband's benefits begin and end.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., father, mother, parent or disabled child. Your benefits will end if you remarry the insured who is not... first month covered by your application in which you meet all the other requirements for entitlement... person becomes entitled, your benefits cannot begin before January 1985 based on an application filed no...
100 years of training and development research: What we know and where we should go.
Bell, Bradford S; Tannenbaum, Scott I; Ford, J Kevin; Noe, Raymond A; Kraiger, Kurt
2017-03-01
Training and development research has a long tradition within applied psychology dating back to the early 1900s. Over the years, not only has interest in the topic grown but there have been dramatic changes in both the science and practice of training and development. In the current article, we examine the evolution of training and development research using articles published in the Journal of Applied Psychology (JAP) as a primary lens to analyze what we have learned and to identify where future research is needed. We begin by reviewing the timeline of training and development research in JAP from 1918 to the present in order to elucidate the critical trends and advances that define each decade. These trends include the emergence of more theory-driven training research, greater consideration of the role of the trainee and training context, examination of learning that occurs outside the classroom, and understanding training's impact across different levels of analysis. We then examine in greater detail the evolution of 4 key research themes: training criteria, trainee characteristics, training design and delivery, and the training context. In each area, we describe how the focus of research has shifted over time and highlight important developments. We conclude by offering several ideas for future training and development research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Buta, Ronald J.
2017-11-01
Rings are important and characteristic features of disc-shaped galaxies. This paper is the first in a series that re-visits galactic rings with the goals of further understanding the nature of the features and for examining their role in the secular evolution of galaxy structure. The series begins with a new sample of 3962 galaxies drawn from the Galaxy Zoo 2 citizen science data base, selected because zoo volunteers recognized a ring-shaped pattern in the morphology as seen in Sloan Digital Sky Survey colour images. The galaxies are classified within the framework of the Comprehensive de Vaucouleurs revised Hubble-Sandage system. It is found that zoo volunteers cued on the same kinds of ring-like features that were recognized in the 1995 Catalogue of Southern Ringed Galaxies. This paper presents the full catalogue of morphological classifications, comparisons with other sources of classifications and some histograms designed mainly to highlight the content of the catalogue. The advantages of the sample are its large size and the generally good quality of the images; the main disadvantage is the low physical resolution that limits the detectability of linearly small rings such as nuclear rings. The catalogue includes mainly inner and outer disc rings and lenses. Cataclysmic ('encounter-driven') rings (such as ring and polar ring galaxies) are recognized in less than 1 per cent of the sample.
A Decade of Family Literacy: Programs, Outcomes, and Future Prospects. Information Series.
ERIC Educational Resources Information Center
Padak, Nancy; Sapin, Connie; Baycich, Dianna
This paper reviews and synthesizes reports about family literacy programs and practices, focusing on outcomes for adult learners. Emphasis is on resources available in the ERIC database beginning in 1990. Section 1 on programs reviews sometimes conflicting definitions of family literacy and finds that a common thread is strengthening…
Factors Influencing the First-Year Persistence of First Generation College Students.
ERIC Educational Resources Information Center
Duggan, Michael
The factors that influence the first-year persistence of first generation college students at four-year institutions were studied using data from the Beginning Postsecondary Students (BPS) database. The BPS is a longitudinal study of first-time students in the 1995 National Postsecondary Student Aid Study. First generation students are those whose…
Spoken Language Production in Young Adults: Examining Syntactic Complexity
ERIC Educational Resources Information Center
Nippold, Marilyn A.; Frantz-Kaspar, Megan W.; Vigeland, Laura M.
2017-01-01
Purpose: In this study, we examined syntactic complexity in the spoken language samples of young adults. Its purpose was to contribute to the expanding knowledge base in later language development and to begin building a normative database of language samples that potentially could be used to evaluate young adults with known or suspected language…
Garbage in, Garbage Stays: How ERPs Could Improve Our Data-Quality Issues
ERIC Educational Resources Information Center
Riccardi, Richard I.
2009-01-01
As universities begin to implement business intelligence tools such as end-user reporting, data warehousing, and dashboard indicators, data quality becomes an even greater and more public issue. With automated tools taking nightly snapshots of the database, the faulty data grow exponentially, propagating as another layer of the data warehouse.…
Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L
2006-11-01
Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.
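The parse-deduplicate-store pattern described above is easy to illustrate. The sketch below is a minimal Python analogue (the actual parser is Java); the XML tag names, table layout, and best-score deduplication rule are hypothetical stand-ins for the real PDOM schema.

```python
# A minimal sketch of the parse-deduplicate-store pattern, assuming
# hypothetical XML element names; the real PDOM schema is richer.
import sqlite3
import xml.etree.ElementTree as ET

def load_mascot_xml(xml_path, db_path):
    """Parse a MASCOT-style XML result file and store unique hits relationally."""
    tree = ET.parse(xml_path)
    hits = {}  # keyed on accession to eliminate redundancy in the input file
    for hit in tree.getroot().iter("protein_hit"):   # hypothetical tag name
        acc = hit.findtext("accession")
        score = float(hit.findtext("score", default="0"))
        # keep only the best-scoring record per protein accession
        if acc not in hits or score > hits[acc][1]:
            hits[acc] = (acc, score)

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS protein_hit "
                "(accession TEXT PRIMARY KEY, score REAL)")
    con.executemany("INSERT OR REPLACE INTO protein_hit VALUES (?, ?)",
                    hits.values())
    con.commit()
    con.close()
```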
NASA Technical Reports Server (NTRS)
Walls, Laurie K.; Kirk, Daniel; deLuis, Kavier; Haberbusch, Mark S.
2011-01-01
As space programs increasingly investigate various options for long-duration space missions, the accurate prediction of propellant behavior over long periods of time in a microgravity environment has become imperative. This has driven the development of a detailed, physics-based understanding of the slosh behavior of cryogenic propellants over a range of conditions and environments that are relevant for rocket and space storage applications. Recent advancements in computational fluid dynamics (CFD) models and hardware capabilities have enabled the modeling of complex fluid behavior in a microgravity environment. Historically, launch vehicles with moderate-duration upper stage coast periods have contained very limited instrumentation to quantify propellant stratification and boil-off in these environments, thus the ability to benchmark these complex computational models is of great consequence. To benchmark enhanced CFD models, recent work focuses on establishing an extensive experimental database of liquid slosh under a wide range of relevant conditions. In addition, a mass gauging system specifically designed to provide high fidelity measurements for both liquid stratification and liquid/ullage position in a microgravity environment has been developed. This publication will summarize the various experimental programs established to produce this comprehensive database and unique flight measurement techniques.
Design of Knowledge Bases for Plant Gene Regulatory Networks.
Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich
2017-01-01
Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. This begins with defining the data scope. We describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is the implementation, defined by actualization of the database by using software according to the predefined schema. The final stage is deployment, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation, and which is easily expandable and transferable.
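As a toy illustration of the schema-design stage, the snippet below lays out a minimal relational plan for a gene-regulation knowledge base in SQLite. All table and column names are hypothetical and far simpler than a production plant-gene-regulation schema would be.

```python
# A toy illustration of the schema-design stage: laying out the entire
# arrangement of the knowledge base as a systematic plan. The tables and
# columns are hypothetical; a real schema would also cover motifs,
# conditions, evidence codes, and so on.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS gene (
    gene_id    TEXT PRIMARY KEY,
    symbol     TEXT,
    chromosome TEXT
);
CREATE TABLE IF NOT EXISTS regulator (
    regulator_id TEXT PRIMARY KEY,
    family       TEXT              -- e.g. a transcription-factor family
);
CREATE TABLE IF NOT EXISTS regulation (
    regulator_id TEXT REFERENCES regulator(regulator_id),
    gene_id      TEXT REFERENCES gene(gene_id),
    evidence     TEXT,             -- how the interaction was established
    PRIMARY KEY (regulator_id, gene_id)
);
"""

con = sqlite3.connect("grn_kb.sqlite")
con.executescript(SCHEMA)   # implementation stage: actualize the plan
con.commit()
```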
Orthology for comparative genomics in the mouse genome database.
Dolan, Mary E; Baldarelli, Richard M; Bello, Susan M; Ni, Li; McAndrews, Monica S; Bult, Carol J; Kadin, James A; Richardson, Joel E; Ringwald, Martin; Eppig, Janan T; Blake, Judith A
2015-08-01
The mouse genome database (MGD) is the model organism database component of the mouse genome informatics system at The Jackson Laboratory. MGD is the international data resource for the laboratory mouse and facilitates the use of mice in the study of human health and disease. Since its beginnings, MGD has included comparative genomics data with a particular focus on human-mouse orthology, an essential component of the use of mouse as a model organism. Over the past 25 years, novel algorithms and addition of orthologs from other model organisms have enriched comparative genomics in MGD data, extending the use of orthology data to support the laboratory mouse as a model of human biology. Here, we describe current comparative data in MGD and review the history and refinement of orthology representation in this resource.
Partitioning medical image databases for content-based queries on a Grid.
Montagnat, J; Breton, V; Magnin, I E
2005-01-01
In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
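A minimal sketch of overhead-aware partitioning in this spirit: each node's share of the image database is proportional to its effective throughput after discounting a fixed job-submission overhead. The throughput and overhead figures are illustrative placeholders, not values from the paper.

```python
# Overhead-aware partitioning sketch: split n_images across grid nodes so
# that faster nodes (net of their startup overhead) receive larger subsets.
# All numbers below are illustrative assumptions.

def partition(n_images, per_image_cost, throughputs, overheads):
    """Return how many images to assign to each grid node."""
    total_work = n_images * per_image_cost
    # first-pass run-time estimate ignoring overheads
    t_est = total_work / sum(throughputs)
    shares = []
    for rate, ovh in zip(throughputs, overheads):
        useful = max(t_est - ovh, 0.0) * rate     # work the node can actually do
        shares.append(useful / per_image_cost)
    scale = n_images / sum(shares)                # normalise to the database size
    return [round(s * scale) for s in shares]     # rounding may be off by a few

# 10,000 images, 0.5 s each on a unit-speed node; three nodes of varying
# speed and submission overhead (seconds)
print(partition(10_000, 0.5, throughputs=[4.0, 2.0, 1.0],
                overheads=[60.0, 30.0, 120.0]))
```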
Applications of the Cambridge Structural Database in organic chemistry and crystal chemistry.
Allen, Frank H; Motherwell, W D Samuel
2002-06-01
The Cambridge Structural Database (CSD) and its associated software systems have formed the basis for more than 800 research applications in structural chemistry, crystallography and the life sciences. Relevant references, dating from the mid-1970s, and brief synopses of these papers are collected in a database, DBUse, which is freely available via the CCDC website. This database has been used to review research applications of the CSD in organic chemistry, including supramolecular applications, and in organic crystal chemistry. The review concentrates on applications that have been published since 1990 and covers a wide range of topics, including structure correlation, conformational analysis, hydrogen bonding and other intermolecular interactions, studies of crystal packing, extended structural motifs, crystal engineering and polymorphism, and crystal structure prediction. Applications of CSD information in studies of crystal structure precision, the determination of crystal structures from powder diffraction data, together with applications in chemical informatics, are also discussed.
Fast T Wave Detection Calibrated by Clinical Knowledge with Annotation of P and T Waves
Elgendi, Mohamed; Eskofier, Bjoern; Abbott, Derek
2015-01-01
Background There are limited studies on the automatic detection of T waves in arrhythmic electrocardiogram (ECG) signals. This is perhaps because there is no available arrhythmia dataset with annotated T waves. There is a growing need to develop numerically-efficient algorithms that can accommodate the new trend of battery-driven ECG devices. Moreover, there is also a need to analyze long-term recorded signals in a reliable and time-efficient manner, therefore improving the diagnostic ability of mobile devices and point-of-care technologies. Methods Here, the T wave annotation of the well-known MIT-BIH arrhythmia database is discussed and provided. Moreover, a simple fast method for detecting T waves is introduced. A typical T wave detection method has been reduced to a basic approach consisting of two moving averages and dynamic thresholds. The dynamic thresholds were calibrated using four clinically known types of sinus node response to atrial premature depolarization (compensation, reset, interpolation, and reentry). Results The determination of T wave peaks is performed and the proposed algorithm is evaluated on two well-known databases, the QT and MIT-BIH Arrhythmia databases. The detector obtained a sensitivity of 97.14% and a positive predictivity of 99.29% over the first lead of the validation databases (total of 221,186 beats). Conclusions We present a simple yet very reliable T wave detection algorithm that can be potentially implemented on mobile battery-driven devices. In contrast to complex methods, it can be easily implemented in a digital filter design. PMID:26197321
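The core of the detector (two moving averages compared against a threshold to mark "blocks of interest") can be sketched in a few lines. The window lengths and threshold offset below are placeholder values; the paper calibrates its dynamic thresholds against the four clinical sinus-node responses.

```python
# Two-moving-averages sketch: a short window follows the T-wave shape, a
# long window tracks the local baseline; samples where the short average
# exceeds the long one plus an offset form candidate blocks of interest.
# Window lengths and beta are placeholders, not the calibrated values.
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def t_wave_blocks(ecg, fs, beta=0.05):
    ma_peak = moving_average(np.abs(ecg), int(0.070 * fs))   # ~T-peak width
    ma_wave = moving_average(np.abs(ecg), int(0.140 * fs))   # ~T-wave width
    threshold = ma_wave + beta * np.mean(np.abs(ecg))
    active = ma_peak > threshold
    # collapse consecutive True samples into (start, end) index pairs;
    # assumes the signal starts and ends below threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    if len(edges) % 2:
        edges = edges[:-1]
    return list(edges.reshape(-1, 2))
```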
First Look: TRADEMARKSCAN Database.
ERIC Educational Resources Information Center
Fernald, Anne Conway; Davidson, Alan B.
1984-01-01
Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and…
NASA Astrophysics Data System (ADS)
Wang, Fan
2018-03-01
One of the main directions of technology development in the 21st century is the development and application of new materials, and the key to the development of the new material industry lies in industrial technology innovation. The gross scale of the new material industry in Hunan Province ranks in the first tier in China. Based on the present situation of Hunan's new material industry, three modes of technology innovation alliance are put forward in this paper: the government-driven mode, the research-driven mode and the market-oriented mode. The government-driven mode is applicable to major technology innovation fields with an uncertain market prospect, high innovation risk and direct or indirect government intervention; the research-driven mode is applicable to key technology innovation fields with a high technology content; and the market-oriented mode is applicable to general innovation fields in which enterprises have demands for technology innovation but such innovation must be achieved via cooperative research and development.
Exploring Techniques of Developing Writing Skill in IELTS Preparatory Courses: A Data-Driven Study
ERIC Educational Resources Information Center
Ostovar-Namaghi, Seyyed Ali; Safaee, Seyyed Esmail
2017-01-01
Being driven by the hypothetico-deductive mode of inquiry, previous studies have tested the effectiveness of theory-driven interventions under controlled experimental conditions to come up with universally applicable generalizations. To make a case in the opposite direction, this data-driven study aims at uncovering techniques and strategies…
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
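A schematic Python rendering of the three-process pattern may help fix the idea: the application issues generic calls, and an intermediate server translates them for a specific vendor, relays results back, and restarts deadlocked requests. This is not the actual STDB/NET code; the class and method names are invented for illustration.

```python
# Schematic of the extended client-server paradigm: generic client calls
# are translated to vendor-specific SQL by an intermediate layer. Names
# are invented; the real STDB/NET interface differs.
class GenericQuery:
    def __init__(self, table, columns, where=None):
        self.table, self.columns, self.where = table, columns, where

class IntermediateServer:
    """Stands in for STDB/NET: translation, deadlock restart, monitoring."""
    def __init__(self, vendor_connection):
        self.conn = vendor_connection   # DB-API connection to the DBMS server

    def execute(self, q: GenericQuery):
        sql = f"SELECT {', '.join(q.columns)} FROM {q.table}"
        if q.where:
            sql += f" WHERE {q.where}"
        # deadlock restart: retry once if the vendor aborts the transaction
        for attempt in (1, 2):
            try:
                cur = self.conn.cursor()
                cur.execute(sql)
                return cur.fetchall()
            except Exception:
                if attempt == 2:
                    raise
```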
Evaluating the Impact of Database Heterogeneity on Observational Study Results
Madigan, David; Ryan, Patrick B.; Schuemie, Martijn; Stang, Paul E.; Overhage, J. Marc; Hartzema, Abraham G.; Suchard, Marc A.; DuMouchel, William; Berlin, Jesse A.
2013-01-01
Clinical studies that use observational databases to evaluate the effects of medical products have become commonplace. Such studies begin by selecting a particular database, a decision that published papers invariably report but do not discuss. Studies of the same issue in different databases, however, can and do generate different results, sometimes with strikingly different clinical implications. In this paper, we systematically study heterogeneity among databases, holding other study methods constant, by exploring relative risk estimates for 53 drug-outcome pairs and 2 widely used study designs (cohort studies and self-controlled case series) across 10 observational databases. When holding the study design constant, our analysis shows that estimated relative risks range from a statistically significant decreased risk to a statistically significant increased risk in 11 of 53 (21%) of drug-outcome pairs that use a cohort design and 19 of 53 (36%) of drug-outcome pairs that use a self-controlled case series design. This exceeds the proportion of pairs that were consistent across databases in both direction and statistical significance, which was 9 of 53 (17%) for cohort studies and 5 of 53 (9%) for self-controlled case series. Our findings show that clinical studies that use observational databases can be sensitive to the choice of database. More attention is needed to consider how the choice of data source may be affecting results. PMID:23648805
Application of cloud database in the management of clinical data of patients with skin diseases.
Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning
2015-04-01
To evaluate the needs and applications of using a cloud database in the daily practice of a dermatology department. The cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores of self-rating scales. The results were input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients were included and analyzed using the cloud database. The disease status, quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.
SQLGEN: a framework for rapid client-server database application development.
Nadkarni, P M; Cheung, K H
1995-12-01
SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
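The dictionary-driven idea is straightforward to sketch: the client consults local metadata to compose SQL at run time and refuses actions the logged-in user lacks rights for. The tables, rights model, and helper below are hypothetical illustrations, not SQLGEN's actual design.

```python
# Dictionary-driven dynamic SQL, as a sketch: metadata on the client both
# generates statements and partially self-configures allowed actions.
# All table names and the rights model are hypothetical.
DATA_DICTIONARY = {
    "patient": {"columns": ["patient_id", "name", "dob"], "key": "patient_id"},
}
USER_RIGHTS = {"alice": {"patient": {"select", "update"}}}

def gen_update(user, table, key_value, changes):
    rights = USER_RIGHTS.get(user, {}).get(table, set())
    if "update" not in rights:
        # the client disables inappropriate actions at log-in time
        raise PermissionError(f"{user} may not update {table}")
    meta = DATA_DICTIONARY[table]
    unknown = set(changes) - set(meta["columns"])
    if unknown:
        raise ValueError(f"unknown columns: {unknown}")
    assignments = ", ".join(f"{c} = ?" for c in changes)
    sql = f"UPDATE {table} SET {assignments} WHERE {meta['key']} = ?"
    return sql, [*changes.values(), key_value]

print(gen_update("alice", "patient", 42, {"name": "Smith, J"}))
```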
New additions to the cancer precision medicine toolkit.
Mardis, Elaine R
2018-04-13
New computational and database-driven tools are emerging to aid in the interpretation of cancer genomic data as its use becomes more common in clinical evidence-based cancer medicine. Two such open source tools, published recently in Genome Medicine, provide important advances to address the clinical cancer genomics data interpretation bottleneck.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buche, D. L.
This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.
Content and Workflow Management for Library Websites: Case Studies
ERIC Educational Resources Information Center
Yu, Holly, Ed.
2005-01-01
Using database-driven web pages or web content management (WCM) systems to manage increasingly diverse web content and to streamline workflows is a commonly practiced solution recognized in libraries today. However, limited library web content management models and funding constraints prevent many libraries from purchasing commercially available…
Developments in Transnational Research Linkages: Evidence from U.S. Higher-Education Activity
ERIC Educational Resources Information Center
Koehn, Peter H.
2014-01-01
In our knowledge-driven era, multiple and mutual benefits accrue from transnational research linkages. The article identifies important directions in transnational research collaborations involving U.S. universities revealed by key dimensions of 369 projects profiled on a U.S. higher-education association's database. Project initiators, principal…
Internet Database Review: The FDA BBS.
ERIC Educational Resources Information Center
Tomaiuolo, Nicholas G.
1993-01-01
Describes the electronic bulletin board system (BBS) of the Food and Drug Administration (FDA) that is accessible through the Internet. Highlights include how to gain access; the menu-driven software; other electronic sources of FDA information; and adding value. Examples of the FDA BBS menu and the help screen are included. (LRW)
Eddy-driven stratification initiates North Atlantic spring phytoplankton blooms.
Mahadevan, Amala; D'Asaro, Eric; Lee, Craig; Perry, Mary Jane
2012-07-06
Springtime phytoplankton blooms photosynthetically fix carbon and export it from the surface ocean at globally important rates. These blooms are triggered by increased light exposure of the phytoplankton due to both seasonal light increase and the development of a near-surface vertical density gradient (stratification) that inhibits vertical mixing of the phytoplankton. Classically and in current climate models, that stratification is ascribed to a springtime warming of the sea surface. Here, using observations from the subpolar North Atlantic and a three-dimensional biophysical model, we show that the initial stratification and resulting bloom are instead caused by eddy-driven slumping of the basin-scale north-south density gradient, resulting in a patchy bloom beginning 20 to 30 days earlier than would occur by warming.
The wave numbers of supercritical surface tension driven Benard convection
NASA Technical Reports Server (NTRS)
Koschmieder, E. L.; Switzer, D. W.
1991-01-01
The cell size or the wave numbers of supercritical hexagonal convection cells in primarily surface tension driven convection on a uniformly heated plate was studied experimentally in thermal equilibrium in thin layers of silicone oil of large aspect ratio. It was found that the cell size decreases with increased temperature difference in the slightly supercritical range, and that the cell size is unique within the experimental error. It was also observed that the cell size reaches a minimum and begins to increase at larger temperature differences. This reversal of the rate of change of the wave number with temperature difference is attributed to influences of buoyancy on the fluid motion. The consequences of buoyancy were tested with three fluid layers of different depth.
The wavenumbers of supercritical surface-tension-driven Benard convection
NASA Technical Reports Server (NTRS)
Koschmieder, E. L.; Switzer, D. W.
1992-01-01
The cell size or the wavenumbers of supercritical hexagonal convection cells in primarily surface-tension-driven convection on a uniformly heated plate has been studied experimentally in thermal equilibrium in thin layers of silicone oil of large aspect ratio. It has been found that the cell size decreases with increased temperature difference in the slightly supercritical range, and that the cell size is unique within the experimental error. It has also been observed that the cell size reaches a minimum and begins to increase at larger temperature differences. This reversal of the rate of change of the wavenumber with temperature difference is attributed to influences of buoyancy on the fluid motion. The consequences of buoyancy have been tested with three fluid layers of different depth.
Lee, Patrick; Maynard, G.; Audet, T. L.; ...
2016-11-16
The dynamics of electron acceleration driven by laser wakefield is studied in detail using the particle-in-cell code WARP with the objective to generate high-quality electron bunches with narrow energy spread and small emittance, relevant for the electron injector of a multistage accelerator. Simulation results, using experimentally achievable parameters, show that electron bunches with an energy spread of ~11% can be obtained by using an ionization-induced injection mechanism in a mm-scale length plasma. By controlling the focusing of a moderate laser power and tailoring the longitudinal plasma density profile, the electron injection beginning and end positions can be adjusted, while the electron energy can be finely tuned in the last acceleration section.
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, give some insight into our software engineering redesign, and outline its near-future evolution.
Potential Operating Orbits for Fission Electric Propulsion Systems Driven by the SAFE-400
NASA Technical Reports Server (NTRS)
Houts, Mike; Kos, Larry; Poston, David; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Safety must be ensured during all phases of space fission system design, development, fabrication, launch, operation, and shutdown. One potential space fission system application is fission electric propulsion (FEP), in which fission energy is converted into electricity and used to power high efficiency (Isp greater than 3000s) electric thrusters. For these types of systems it is important to determine which operational scenarios ensure safety while allowing maximum mission performance and flexibility. Space fission systems are essentially nonradioactive at launch, prior to extended operation at high power. Once high power operation begins, system radiological inventory steadily increases as fission products build up. For a given fission product isotope, the maximum radiological inventory is typically achieved once the system has operated for a length of time equivalent to several half-lives. After that time, the isotope decays at the same rate it is produced, and no further inventory builds in. For an FEP mission beginning in Earth orbit, altitude and orbital lifetime increase as the propulsion system operates. Two simultaneous effects of fission propulsion system operation are thus (1) increasing fission product inventory and (2) increasing orbital lifetime. Phrased differently, as fission products build up, more time is required for the fission products to naturally convert back into non-radioactive isotopes. Simultaneously, as fission products build up, orbital lifetime increases, providing more time for the fission products to naturally convert back into non-radioactive isotopes. Operational constraints required to ensure safety can thus be quantified.
Potential operating orbits for fission electric propulsion systems driven by the SAFE-400
NASA Astrophysics Data System (ADS)
Houts, Mike; Kos, Larry; Poston, David
2002-01-01
Safety must be ensured during all phases of space fission system design, development, fabrication, launch, operation, and shutdown. One potential space fission system application is fission electric propulsion (FEP), in which fission energy is converted into electricity and used to power high efficiency (Isp>3000s) electric thrusters. For these types of systems it is important to determine which operational scenarios ensure safety while allowing maximum mission performance and flexibility. Space fission systems are essentially non-radioactive at launch, prior to extended operation at high power. Once high power operation begins, system radiological inventory steadily increases as fission products build up. For a given fission product isotope, the maximum radiological inventory is typically achieved once the system has operated for a length of time equivalent to several half-lives. After that time, the isotope decays at the same rate it is produced, and no further inventory builds in. For an FEP mission beginning in Earth orbit, altitude and orbital lifetime increase as the propulsion system operates. Two simultaneous effects of fission propulsion system operation are thus (1) increasing fission product inventory and (2) increasing orbital lifetime. Phrased differently, as fission products build up, more time is required for the fission products to naturally convert back into non-radioactive isotopes. Simultaneously, as fission products build up, orbital lifetime increases, providing more time for the fission products to naturally convert back into non-radioactive isotopes. Operational constraints required to ensure safety can thus be quantified.
DOT National Transportation Integrated Search
2002-02-26
This document, the Introduction to the Enhanced Logistics Intratheater Support Tool (ELIST) Mission Application and its Segments, satisfies the following objectives: : It identifies the mission application, known in brief as ELIST, and all seven ...
Mining Claim Activity on Federal Land in the United States
Causey, J. Douglas
2007-01-01
Several statistical compilations of mining claim activity on Federal land derived from the Bureau of Land Management's LR2000 database have previously been published by the U.S. Geological Survey (USGS). The work in the 1990s did not include Arkansas or Florida. None of the previous reports included Alaska because Alaska claim records are stored in a separate database (Alaska Land Information System) and in a different format. This report includes data for all states for which there are Federal mining claim records, beginning in 1976 and continuing to the present. The intent is to update the spatial and statistical data associated with this report on an annual basis, beginning with 2005 data. The statistics compiled from the databases are counts of the number of active mining claims in a section of land each year from 1976 to the present for all states within the United States. Claim statistics are subset by lode and placer types, as well as a dataset summarizing all claims including mill site and tunnel site claims. One table presents data by case type, case status, and number of claims in a section. This report includes a spatial database for each state in which mining claims were recorded, except North Dakota, which has had only two claims. A field is present that allows the statistical data to be joined to the spatial databases so that spatial displays and analysis can be done by using appropriate geographic information system (GIS) software. The data show how mining claim activity has changed in intensity, space, and time. Variations can be examined at a state as well as a national level. The data only pertain to Federal land and mineral estate that was open to mining claim location at the time the claims were staked.
Chang, Chia-Ming; Yang, Yi-Ping; Chuang, Jen-Hua; Chuang, Chi-Mu; Lin, Tzu-Wei; Wang, Peng-Hui; Yu, Mu-Hsien
2017-01-01
The clinical characteristics of clear cell carcinoma (CCC) and endometrioid carcinoma (EC) are concomitant with endometriosis (ES), which leads to the postulation of malignant transformation of ES to endometriosis-associated ovarian carcinoma (EAOC). Different deregulated functional areas have been proposed to account for the pathogenesis of EAOC transformation, but there is still a lack of a data-driven analysis that uses the experimental data accumulated in publicly-available databases to incorporate the deregulated functions involved in the malignant transformation of EAOC. We used the microarray gene expression datasets of ES, CCC and EC downloaded from the National Center for Biotechnology Information Gene Expression Omnibus (NCBI GEO) database. Then, we investigated the pathogenesis of EAOC by a data-driven, function-based analytic model with the quantified molecular functions defined by 1454 Gene Ontology (GO) term gene sets. This model converts the gene expression profiles to a functionome consisting of 1454 quantified GO functions, and the key functions involved in the malignant transformation of EAOC can then be extracted by a series of filters. Our results demonstrate that deregulated oxidoreductase activity, metabolism, hormone activity, inflammatory response, innate immune response and cell-cell signaling play key roles in the malignant transformation of EAOC. These results provide evidence supporting the specific molecular pathways involved in the malignant transformation of EAOC. PMID:29113136
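A bare-bones sketch of the function-based model: collapse an expression matrix into one score per sample for each GO term gene set, then filter for functions that separate cases from controls. The mean-based scoring and the effect-size filter here are simplifying assumptions, not the authors' exact quantification.

```python
# Functionome sketch: quantify each GO term gene set per sample, then
# keep the terms that separate groups. Scoring by gene-set mean and the
# min_diff filter are illustrative assumptions.
import numpy as np

def functionome(expr, genes, go_sets):
    """expr: samples x genes array; genes: column names;
    go_sets: {go_term: [gene, ...]} mapping."""
    idx = {g: i for i, g in enumerate(genes)}
    scores = {}
    for term, members in go_sets.items():
        cols = [idx[g] for g in members if g in idx]
        if cols:
            scores[term] = expr[:, cols].mean(axis=1)  # one value per sample
    return scores

def key_functions(scores, is_case, min_diff=0.5):
    """is_case: boolean NumPy array over samples; keep GO terms whose
    mean score differs between cases and controls by at least min_diff."""
    return {t: s for t, s in scores.items()
            if abs(s[is_case].mean() - s[~is_case].mean()) >= min_diff}
```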
Progress Report on the DSSTox Database Network: Newly Launched Website, Applications, Future Plans
Progress will be reported on development of the Distributed Structure-Searchable Toxicity (DSSTox) Database Network and the newly launched public website that coordinates and...
Applications of Technology to CAS Data-Base Production.
ERIC Educational Resources Information Center
Weisgerber, David W.
1984-01-01
Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…
A database application for wilderness character monitoring
Ashley Adams; Peter Landres; Simon Kingston
2012-01-01
The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphrey, Walter R.
CMS is a Windows application for tracking chemical inventories. Partners will use this application to record chemicals that are stored on their site and to perform periodic inventories of those chemicals. The application records information about stored chemicals from user input via the keyboard and barcode readers and stores that information into a single-file database (SQLite). A simple user login mechanism is used to control access to functions in the application. A user interface is provided that allows users to search the database and update data in the database.
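The single-file inventory pattern the abstract describes might look like the following sketch, using Python with SQLite rather than the actual Windows application; the table layout and barcode format are hypothetical.

```python
# Single-file chemical inventory sketch: barcode-keyed records in SQLite,
# with a scan function that serves both registration and periodic
# inventory. Column names and the barcode format are assumptions.
import sqlite3
from datetime import date

con = sqlite3.connect("chemicals.sqlite")
con.execute("""CREATE TABLE IF NOT EXISTS chemical (
    barcode   TEXT PRIMARY KEY,
    name      TEXT NOT NULL,
    location  TEXT,
    last_seen TEXT
)""")

def record_scan(barcode, name=None, location=None):
    """Insert a new chemical or mark an existing one as sighted today."""
    today = date.today().isoformat()
    cur = con.execute(
        "UPDATE chemical SET last_seen = ?, location = COALESCE(?, location) "
        "WHERE barcode = ?", (today, location, barcode))
    if cur.rowcount == 0:   # unknown barcode: register a new chemical
        con.execute("INSERT INTO chemical VALUES (?, ?, ?, ?)",
                    (barcode, name or "unknown", location, today))
    con.commit()

record_scan("BC-000123", name="Acetonitrile", location="Cabinet 4")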
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport model of aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
USDA-ARS's Scientific Manuscript database
Broadleaf weed control in onion is difficult in part due to a lack of postemergence herbicide options at an early growth stage of onions. Onion tolerance to sequential applications of oxyfluorfen (Goal-Tender) alone and with bromoxynil (Buctril) beginning at the 1-lf stage of onions was evaluated n...
26 CFR 1.512(a)-2 - Definition applicable to taxable years beginning before December 13, 1967.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Definition applicable to taxable years beginning before December 13, 1967. 1.512(a)-2 Section 1.512(a)-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Taxation of Business...
26 CFR 1.512(a)-2 - Definition applicable to taxable years beginning before December 13, 1967.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 7 2011-04-01 2009-04-01 true Definition applicable to taxable years beginning before December 13, 1967. 1.512(a)-2 Section 1.512(a)-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES (CONTINUED) Taxation of Business...
Database constraints applied to metabolic pathway reconstruction tools.
Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi
2014-01-01
Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.
2010-01-01
Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029
Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J
2010-08-10
Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.
2013-01-01
Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. Conclusions By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-File import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762
Kiener, Joos
2013-12-11
Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-File import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.
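The framework itself is Java; the Python pseudo-equivalent below only illustrates the idea of hiding cartridge-specific SQL behind method calls with paging and caching. The matches_substructure SQL predicate is a hypothetical placeholder, not the Bingo cartridge's actual syntax.

```python
# Sketch of abstracting structure search into method calls with paging
# and caching. The SQL predicate matches_substructure is hypothetical;
# the real Bingo cartridge syntax differs.
class StructureRepository:
    def __init__(self, conn):
        self.conn = conn      # DB-API connection to PostgreSQL (e.g. psycopg2)
        self._cache = {}      # searches are paged and cached

    def substructure_search(self, smiles, page=0, size=50):
        key = (smiles, page, size)
        if key not in self._cache:
            cur = self.conn.cursor()
            cur.execute(
                "SELECT id, molfile FROM compound "
                "WHERE matches_substructure(structure, %s) "   # placeholder predicate
                "LIMIT %s OFFSET %s",
                (smiles, size, page * size))
            self._cache[key] = cur.fetchall()
        return self._cache[key]
```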
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
Driving experience, crashes and traffic citations of teenage beginning drivers.
McCartt, Anne T; Shabanova, Veronika I; Leaf, William A
2003-05-01
Teenagers were surveyed by telephone every 6 months from their freshman to senior high school years (N=911). Self-reported crash involvements and citations were examined for each teenager's first year of licensure and first 3500 miles driven. Based on survival analysis, the risk of a first crash during the first month of licensure (0.053) was substantially higher than during any of the next 11 months (mean risk per month: 0.025). The likelihood of a first citation during the first month of licensure (0.023) also was higher than during any of the subsequent 11 months (mean risk per month: 0.012). Similarly, when viewed as a function of cumulative miles driven, the risk of a first crash or citation was highest during the first 500 miles driven after licensure. Fewer parental restrictions (e.g. no nighttime curfew) and a lower grade point average (GPA) were associated with a higher crash risk. Male gender, a lower GPA and living in a rural area were associated with a higher citation rate.
Can data-driven benchmarks be used to set the goals of healthy people 2010?
Allison, J; Kiefe, C I; Weissman, N W
1999-01-01
OBJECTIVES: Expert panels determined the public health goals of Healthy People 2000 subjectively. The present study examined whether data-driven benchmarks provide a better alternative. METHODS: We developed the "pared-mean" method to define from data the best achievable health care practices. We calculated the pared-mean benchmark for screening mammography from the 1994 National Health Interview Survey, using the metropolitan statistical area as the "provider" unit. Beginning with the best-performing provider and adding providers in descending sequence, we established the minimum provider subset that included at least 10% of all women surveyed on this question. The pared-mean benchmark is then the proportion of women in this subset who received mammography. RESULTS: The pared-mean benchmark for screening mammography was 71%, compared with the Healthy People 2000 goal of 60%. CONCLUSIONS: For Healthy People 2010, benchmarks derived from data reflecting the best available care provide viable alternatives to consensus-derived targets. We are currently pursuing additional refinements to the data-driven pared-mean benchmark approach. PMID:9987466
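The pared-mean computation follows directly from the description above: rank provider units by performance, take the smallest top-ranked subset covering at least 10% of surveyed women, and report the pooled screening proportion within it. A minimal sketch with made-up provider counts:

```python
# Pared-mean benchmark sketch: best-performing providers are added in
# descending order until they cover the coverage fraction of all women;
# the benchmark is the pooled proportion screened in that subset.
def pared_mean(providers, coverage=0.10):
    """providers: list of (n_screened, n_women) per provider unit."""
    total_women = sum(n for _, n in providers)
    ranked = sorted(providers, key=lambda p: p[0] / p[1], reverse=True)
    screened, women = 0, 0
    for n_screened, n_women in ranked:
        screened += n_screened
        women += n_women
        if women >= coverage * total_women:
            break
    return screened / women

# three hypothetical metropolitan areas: (women screened, women surveyed)
print(pared_mean([(80, 100), (60, 100), (500, 1000)]))   # -> 0.70
```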
Palaeo-sea-level and palaeo-ice-sheet databases: problems, strategies, and perspectives
NASA Astrophysics Data System (ADS)
Düsterhus, André; Rovere, Alessio; Carlson, Anders E.; Horton, Benjamin P.; Klemann, Volker; Tarasov, Lev; Barlow, Natasha L. M.; Bradwell, Tom; Clark, Jorie; Dutton, Andrea; Gehrels, W. Roland; Hibbert, Fiona D.; Hijma, Marc P.; Khan, Nicole; Kopp, Robert E.; Sivan, Dorit; Törnqvist, Torbjörn E.
2016-04-01
Sea-level and ice-sheet databases have driven numerous advances in understanding the Earth system. We describe the challenges and offer best strategies that can be adopted to build self-consistent and standardised databases of geological and geochemical information used to archive palaeo-sea-levels and palaeo-ice-sheets. There are three phases in the development of a database: (i) measurement, (ii) interpretation, and (iii) database creation. Measurement should include the objective description of the position and age of a sample, description of associated geological features, and quantification of uncertainties. Interpretation of the sample may have a subjective component, but it should always include uncertainties and alternative or contrasting interpretations, with any exclusion of existing interpretations requiring a full justification. During the creation of a database, an approach based on accessibility, transparency, trust, availability, continuity, completeness, and communication of content (ATTAC3) must be adopted. It is essential to consider the community that creates and benefits from a database. We conclude that funding agencies should not only consider the creation of original data in specific research-question-oriented projects, but also include the possibility of using part of the funding for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
NASA Astrophysics Data System (ADS)
Klein Goldewijk, K.
2008-12-01
More and more studies of global (climate) change are focusing on the past. Hundreds and thousands of years of land use, driven by population growth, have left their mark on the Earth's surface. We are only beginning to understand the complex relationship between human-induced disturbances of the global environment and the consequences for future climate. It is therefore essential that we get a clear understanding of past relationships between population growth, land use and climate. In order to facilitate climate modelers to examine these relationships, the HYDE database has been updated and extended. The update of HYDE described here (Klein Goldewijk et al. 2006; Klein Goldewijk et al. 2007) includes several improvements compared to its predecessor: (i) the HYDE 2 version used a Boolean approach at a 30-minute resolution, while HYDE 3 uses fractional land use on a 5-minute resolution; (ii) more and better sub-national (population) data (Klein Goldewijk, 2005) to improve the historical (urban and rural) population maps as a basis for allocation of land cover; (iii) implementation of different allocation algorithms with time-dependent weighting maps for cropland and grassland; (iv) the period covered has now been extended from the emergence of agriculture (10,000 B.C.) to present time (2,000 A.D.), with different time intervals. An example of (future) use of the database is to help test the 'Ruddiman hypothesis': Ruddiman proposed that mankind altered the global atmosphere much earlier than the start of the Industrial Revolution in the early 18th century (Ruddiman, 2003), which raises the research question of whether we can detect a pre-Industrial-Revolution anthropogenic signal, and how strong that signal is. References Klein Goldewijk, K., A.F. Bouwman and G. van Drecht, 2007. Mapping current global cropland and grassland distributions on a 5 by 5 minute resolution, Journal of Land Use Science, Vol 2(3): 167-190. Klein Goldewijk, K. and G. van Drecht, 2006. HYDE 3: Current and historical population and land cover. In: A.F. Bouwman, T. Kram and K. Klein Goldewijk (eds.), Integrated modelling of global environmental change. An overview of IMAGE 2.4. Netherlands Environmental Assessment Agency (MNP), Bilthoven, The Netherlands. Klein Goldewijk, K. 2005. Three centuries of global population growth: A spatially referenced population density database for 1700-2000, Population and Environment, 26(5): 343-367. Ruddiman, W.F., 2003. The anthropogenic greenhouse era began thousands of years ago, Climatic Change, 61(3), 261-293.
Access to digital library databases in higher education: design problems and infrastructural gaps.
Oswal, Sushil K
2014-01-01
After defining accessibility and usability, the author offers a broad survey of the research studies on digital content databases which have thus far primarily depended on data drawn from studies conducted by sighted researchers with non-disabled users employing screen readers and low vision devices. This article aims at producing a detailed description of the difficulties confronted by blind screen reader users with online library databases which now hold most of the academic, peer-reviewed journal and periodical content essential for research and teaching in higher education. The approach taken here is borrowed from descriptive ethnography which allows the author to create a complete picture of the accessibility and usability problems faced by an experienced academic user of digital library databases and screen readers. The author provides a detailed analysis of the different aspects of accessibility issues in digital databases under several headers with a special focus on full-text PDF files. The author emphasizes that long-term studies with actual, blind screen reader users employing both qualitative and computerized research tools can yield meaningful data for the designers and developers to improve these databases to a level that they begin to provide an equal access to the blind.
Benson, Dennis A; Karsch-Mizrachi, Ilene; Lipman, David J; Ostell, James; Sayers, Eric W
2010-01-01
GenBank is a comprehensive database that contains publicly available nucleotide sequences for more than 300,000 organisms named at the genus level or lower, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects, including whole genome shotgun (WGS) and environmental sampling projects. Most submissions are made using the web-based BankIt or standalone Sequin programs, and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the European Molecular Biology Laboratory Nucleotide Sequence Database in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through the NCBI Entrez retrieval system, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bi-monthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI homepage: www.ncbi.nlm.nih.gov.
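Beyond the web interfaces, GenBank records can also be fetched programmatically through NCBI's E-utilities service. A minimal sketch follows; the accession number is a placeholder, and production use should identify the calling tool (tool/email parameters) and respect NCBI's rate limits.

```python
# Fetch a GenBank record in flat-file format via NCBI E-utilities (efetch).
# The accession is a placeholder chosen for illustration.
from urllib.parse import urlencode
from urllib.request import urlopen

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

params = urlencode({
    "db": "nucleotide",   # GenBank nucleotide records
    "id": "NM_000546",    # placeholder accession
    "rettype": "gb",      # GenBank flat-file format
    "retmode": "text",
})
with urlopen(f"{EFETCH}?{params}") as response:
    print(response.read().decode()[:500])   # first few hundred characters
```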
Benson, Dennis A; Karsch-Mizrachi, Ilene; Lipman, David J; Ostell, James; Sayers, Eric W
2009-01-01
GenBank is a comprehensive database that contains publicly available nucleotide sequences for more than 300,000 organisms named at the genus level or lower, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs, and accession numbers are assigned by GenBank(R) staff upon receipt. Daily data exchange with the European Molecular Biology Laboratory Nucleotide Sequence Database in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through the National Center for Biotechnology Information (NCBI) Entrez retrieval system, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage: www.ncbi.nlm.nih.gov.
NASA Astrophysics Data System (ADS)
Andrina, G.; Basso, V.; Saitta, L.
2004-08-01
In recent years, the effort to optimise the AIV process has focused mainly on the standardisation of approaches and on the application of new methodologies. But the earlier the intervention, the greater the benefits in terms of cost and schedule. Until now, the early phases of the AIV process have relied on standards that must be tailored through company and personal expertise. A study was therefore conducted to explore the possibility of developing an expert system that helps in making choices in the early, conceptual phase of Assembly, Integration and Verification, namely the model philosophy and the test definition. The work focused on a hybrid approach, allowing interaction between historical data and human expertise. The expert system that has been prototyped exploits both information elicited from domain experts and the results of a data mining activity on existing databases of verification data from completed projects. The data mining algorithms extract past experience resident in the ESA/MATD database, which contains information in the form of statistical summaries, costs, and frequencies of on-ground and in-flight failures. Non-trivial associations found in this way can then be used by the experts to manage new decisions in a controlled, standards-driven way at the beginning of or during the AIV process. Moreover, the expert system could allow compilation of a set of feasible AIV schedules to support further programmatic-driven choices.
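As a toy illustration of the association-finding step, the pure-Python sketch below scores pairwise associations between attributes of past-project records by support and lift. The attribute names and thresholds are invented, and the actual ESA/MATD mining algorithms are not described in this abstract.

```python
from collections import Counter
from itertools import combinations

def association_rules(records, min_support=0.2, min_lift=1.2):
    """records: list of sets of attributes, e.g.
    {'protoflight_model', 'vibration_test_failed'} (hypothetical)."""
    n = len(records)
    item_counts = Counter(i for r in records for i in r)
    pair_counts = Counter(p for r in records
                          for p in combinations(sorted(r), 2))
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        # Lift > 1 means the pair co-occurs more often than chance.
        lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
        if support >= min_support and lift >= min_lift:
            rules.append((a, b, round(support, 2), round(lift, 2)))
    return sorted(rules, key=lambda r: -r[3])
```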
Automated segmentation of murine lung tumors in x-ray micro-CT images
NASA Astrophysics Data System (ADS)
Swee, Joshua K. Y.; Sheridan, Clare; de Bruin, Elza; Downward, Julian; Lassailly, Francois; Pizarro, Luis
2014-03-01
Recent years have seen micro-CT emerge as a means of providing imaging analysis in pre-clinical studies, with in-vivo micro-CT having been shown to be particularly applicable to the examination of murine lung tumors. Despite this, existing studies have involved substantial human intervention during the image analysis process, with the use of fully-automated aids almost non-existent. We present a new approach to automating the segmentation of murine lung tumors, designed specifically for in-vivo micro-CT-based pre-clinical lung cancer studies, that addresses the specific requirements of such studies as well as the limitations that human-centric segmentation approaches experience when applied to such micro-CT data. Our approach consists of three distinct stages. It begins by utilizing edge-enhancing and vessel-enhancing non-linear anisotropic diffusion filters to extract anatomy masks (lung/vessel structure) in a pre-processing stage. Initial candidate detection is then performed through ROI reduction utilizing the obtained masks and a two-step automated segmentation approach that aims to extract all disconnected objects within the ROI, consisting of Otsu thresholding, mathematical morphology and marker-driven watershed. False-positive reduction is finally performed on the initial candidates through random-forest-driven classification using the shape, intensity, and spatial features of candidates. We provide validation of our approach using data from an associated lung cancer study, showing favorable results both in terms of detection (sensitivity = 86%, specificity = 89%) and structural recovery (Dice similarity = 0.88) when compared against manual specialist annotation.
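A minimal sketch of the two-step candidate-extraction stage (Otsu thresholding, mathematical morphology, marker-driven watershed), assuming scikit-image and SciPy; the diffusion pre-filtering and the random-forest stage are omitted, and the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import ball, binary_opening
from skimage.segmentation import watershed

def candidate_objects(ct_volume, lung_mask):
    # Restrict to the lung ROI, then threshold bright candidates.
    roi_values = ct_volume[lung_mask]
    binary = (ct_volume > threshold_otsu(roi_values)) & lung_mask
    binary = binary_opening(binary, ball(1))  # suppress speckle noise
    # Marker-driven watershed splits touching candidates.
    distance = ndi.distance_transform_edt(binary)
    markers = label(distance > 0.5 * distance.max())
    labels = watershed(-distance, markers, mask=binary)
    return regionprops(labels)  # shape/intensity features per candidate
```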
E-Labs - Learning with Authentic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardeen, Marjorie G.; Wayne, Mitchell
the success teachers have had providing an opportunity for students to: • Organize and conduct authentic research. • Experience the environment of scientific collaborations. • Possibly make real contributions to a burgeoning scientific field. We've created projects that are problem-based, student driven and technology dependent. Students reach beyond classroom walls to explore data with other students and experts and share results, publishing original work to a worldwide audience. Students can discover and extend the research of other students, modeling the processes of modern, large-scale research projects. From start to finish e-Labs are student-led, teacher-guided projects. Students need only a Web browser to access computing techniques employed by professional researchers. A Project Map with milestones allows students to set the research plan rather than follow a step-by-step process common in other online projects. Most importantly, e-Labs build the learning experience around the students' own questions and let them use the very tools that scientists use. Students contribute to and access shared data, most derived from professional research databases. They use common analysis tools, store their work and use metadata to discover, replicate and confirm the research of others. This is where real scientific collaboration begins. Using online tools, students correspond with other research groups, post comments and questions, prepare summary reports, and in general participate in the part of scientific research that is often left out of classroom experiments. Teaching tools such as student and teacher logbooks, pre- and post-tests and an assessment rubric aligned with learner outcomes help teachers guide student work. Constraints on interface designs and administrative tools such as registration databases give teachers the "one-stop-shopping" they seek for multiple e-Labs. Teaching and administrative tools also allow us to track usage and assess the impact on student learning.
ERIC Educational Resources Information Center
Costigan, Arthur
2018-01-01
This study presents how beginning teachers create and teach an English Language Arts (ELA) curriculum in urban schools in the context of an educational reform movement driven by mandates such as Common Core State Standards (CCSS), high stakes tests, and prescribed curricula. They serve in schools with each using unique and individual curricula due…
Urban Neighborhood Information Systems: Crime Prevention and Control Applications.
ERIC Educational Resources Information Center
Pattavina, April; Pierce, Glenn; Saiz, Alan
2002-01-01
Chronicles the need for and development of an interdisciplinary, integrated neighborhood-level database for Boston, Massachusetts, discussing database content and potential applications of this database to a range of criminal justice problems and initiatives (e.g., neighborhood crime patterns, needs assessment, and program planning and…
Towards computational improvement of DNA database indexing and short DNA query searching.
Stojanov, Done; Koceski, Sašo; Mileva, Aleksandra; Koceska, Nataša; Bande, Cveta Martinovska
2014-09-03
In order to facilitate and speed up the search of massive DNA databases, the database is indexed in advance using a mapping function. By searching through the indexed data structure, exact query hits can be identified. If the database is searched against an annotated DNA query, such as a known promoter consensus sequence, then the starting locations and the number of potential genes can be determined. This is particularly relevant if unannotated DNA sequences have to be functionally annotated. However, indexing a massive DNA database and searching an indexed data structure with millions of entries is a time-demanding process. In this paper, we propose a fast DNA database indexing and searching approach that identifies all query hits in the database without having to examine all entries in the indexed data structure, at the cost of limiting the maximum length of a query that can be searched against the database. By applying the proposed indexing equation, the whole human genome could be indexed in 10 hours on a personal computer, under the assumption that there is enough RAM to store the indexed data structure. Analysing the methodology proposed by Reneker, we observed that hits at starting positions [Formula: see text] are not reported if the database is searched against a query shorter than [Formula: see text] nucleotides, where [Formula: see text] is the length of the DNA database words being mapped and [Formula: see text] is the length of the query. A solution to this drawback is also presented.
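The general index-then-search idea can be sketched with a plain hash index over fixed-length words. This is generic k-mer indexing, not the paper's specific indexing equation, and the word length k is an arbitrary choice.

```python
from collections import defaultdict

def build_index(db_seq, k=12):
    """Map every length-k word of the database to its start positions."""
    index = defaultdict(list)
    for i in range(len(db_seq) - k + 1):
        index[db_seq[i:i + k]].append(i)
    return index

def search(db_seq, index, query, k=12):
    """Exact hits for a query of length >= k: look up the first k-mer,
    then verify the rest of the query against the database sequence."""
    return [p for p in index.get(query[:k], [])
            if db_seq[p:p + len(query)] == query]
```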
DOT National Transportation Integrated Search
2010-03-01
Web 2.0 is an umbrella term for websites or online applications that are user-driven and emphasize collaboration and user interactivity. The trend away from static web pages to a more user-driven Internet model has also occurred in the public s...
Technology Directions for the 21st Century. Vol. 2
NASA Technical Reports Server (NTRS)
Crimi, Giles F.; Verheggen, Henry; Malinowski, John; Malinowski, Robert; Botta, Robert
1996-01-01
The Office of Space Communications (OSC) is tasked by NASA to conduct a planning process to meet NASA's science mission and other communications and data processing requirements. A set of technology trend studies was undertaken by Science Applications International Corporation (SAIC) for OSC to identify quantitative data that can be used to predict performance of electronic equipment in the future to assist in the planning process. Only commercially available, off-the-shelf technology was included. For each technology area considered, the current state of the technology is discussed, future applications that could benefit from use of the technology are identified, and likely future developments of the technology are described. The impact of each technology area on NASA operations is presented together with a discussion of the feasibility and risk associated with its development. An approximate timeline is given for the next 15 to 25 years to indicate the anticipated evolution of capabilities within each of the technology areas considered. This volume contains four chapters: one each on technology trends for database systems, computer software, neural and fuzzy systems, and artificial intelligence. The principal study results are summarized at the beginning of each chapter.
Goldberg, Brittany; Sichtig, Heike; Geyer, Chelsie; Ledeboer, Nathan
2015-01-01
Next-generation DNA sequencing (NGS) has progressed enormously over the past decade, transforming genomic analysis and opening up many new opportunities for applications in clinical microbiology laboratories. The impact of NGS on microbiology has been revolutionary, with new microbial genomic sequences being generated daily, leading to the development of large databases of genomes and gene sequences. The ability to analyze microbial communities without culturing organisms has created the ever-growing field of metagenomics and microbiome analysis and has generated significant new insights into the relation between host and microbe. The medical literature contains many examples of how this new technology can be used for infectious disease diagnostics and pathogen analysis. The implementation of NGS in medical practice has been a slow process due to various challenges such as clinical trials, lack of applicable regulatory guidelines, and the adaptation of the technology to the clinical environment. In April 2015, the American Academy of Microbiology (AAM) convened a colloquium to begin to define these issues, and in this document, we present some of the concepts that were generated from these discussions. PMID:26646014
Initiation of a Database of CEUS Ground Motions for NGA East
NASA Astrophysics Data System (ADS)
Cramer, C. H.
2007-12-01
The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
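A hypothetical, minimal rendering of the table layout named in the abstract (earthquake, station, component, record, references), sketched with SQLite via Python. The field names are assumptions, not the real CEUS/NGA East schema, and "references" is renamed "reference" because REFERENCES is an SQL keyword.

```python
import sqlite3

schema = """
CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, origin_time TEXT,
                         magnitude REAL, stress_drop_bars REAL);
CREATE TABLE station   (sta_id INTEGER PRIMARY KEY, name TEXT,
                        vs30_mps REAL, kappa_s REAL);
CREATE TABLE component (comp_id INTEGER PRIMARY KEY,
                        sta_id INTEGER REFERENCES station,
                        orientation TEXT);
CREATE TABLE record    (rec_id INTEGER PRIMARY KEY,
                        eq_id INTEGER REFERENCES earthquake,
                        comp_id INTEGER REFERENCES component,
                        pga_g REAL);
CREATE TABLE reference (ref_id INTEGER PRIMARY KEY, citation TEXT);
"""
con = sqlite3.connect(":memory:")
con.executescript(schema)
# A "flat file" extract would join record -> earthquake/component/station.
```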
Broad phonetic class definition driven by phone confusions
NASA Astrophysics Data System (ADS)
Lopes, Carla; Perdigão, Fernando
2012-12-01
Intermediate representations between the speech signal and phones may be used to improve discrimination among phones that are often confused. These representations are usually found according to broad phonetic classes, which are defined by a phonetician. This article proposes an alternative data-driven method to generate these classes. Phone confusion information from the analysis of the output of a phone recognition system is used to find clusters at high risk of mutual confusion. A metric is defined to compute the distance between phones. The results, using TIMIT data, show that the proposed confusion-driven phone clustering method is an attractive alternative to the approaches based on human knowledge. A hierarchical classification structure to improve phone recognition is also proposed using a discriminative weight training method. Experiments show improvements in phone recognition on the TIMIT database compared to a baseline system.
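The confusion-driven clustering idea can be sketched as follows: turn a phone confusion matrix into a distance matrix and cluster hierarchically. The symmetrised confusion rate used as similarity here is an assumed stand-in for the metric defined in the article.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def confusion_driven_classes(confusion, n_classes):
    # Rows: reference phones; columns: recognised phones.
    rates = confusion / confusion.sum(axis=1, keepdims=True)
    sim = 0.5 * (rates + rates.T)   # symmetrised confusion rate
    dist = sim.max() - sim          # more confusion => smaller distance
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    # One broad-class label per phone, e.g. 8 classes for TIMIT-style sets.
    return fcluster(z, t=n_classes, criterion="maxclust")
```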
The Development of Variable MLM Editor and TSQL Translator Based on Arden Syntax in Taiwan
Liang, Yan-Ching; Chang, Polun
2003-01-01
The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade, but it has never been used in nursing in Taiwan. We have developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into TSQL. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Medical decision-support systems are data-driven systems in which TSQL triggers, acting as an inference engine, can be used to facilitate linking to a database. PMID:14728414
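An event-condition-action rule in the spirit of an MLM can be rendered as a database trigger. The sketch below uses SQLite syntax through Python rather than TSQL, and the table, column, and threshold values are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE lab_result (patient_id INTEGER, test TEXT, value REAL);
CREATE TABLE alert (patient_id INTEGER, message TEXT);
-- Trigger as inference engine: new data fires the rule automatically.
CREATE TRIGGER high_potassium AFTER INSERT ON lab_result
WHEN NEW.test = 'K' AND NEW.value > 5.5
BEGIN
    INSERT INTO alert VALUES (NEW.patient_id, 'hyperkalemia: K > 5.5');
END;
""")
con.execute("INSERT INTO lab_result VALUES (1, 'K', 6.1)")
print(con.execute("SELECT * FROM alert").fetchall())
# -> [(1, 'hyperkalemia: K > 5.5')]
```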
Database security and encryption technology research and application
NASA Astrophysics Data System (ADS)
Zhu, Li-juan
2013-03-01
The main purpose of this paper is to discuss the current problem of database information leakage and the important role played by encryption techniques in database security, in particular the principle of the MD5 message-digest algorithm and its use in websites and applications. The article is divided into an introduction, an overview of MD5 technology, the use of MD5 technology, and a final summary. Through requirements and applications, the paper gives readers a detailed and clear understanding of the MD5 principle, its importance in database security, and its use.
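A minimal sketch of the mechanism in Python's standard library follows. The salting shown is an added assumption beyond the abstract, and MD5 is a one-way digest rather than reversible encryption; it is considered too weak for new password systems, so this only illustrates the technique the paper discusses.

```python
import hashlib
import os

def md5_store(password, salt=None):
    # Salted MD5 digest: the salt defeats precomputed lookup tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.md5(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

salt, stored = md5_store("s3cret")
# Verification recomputes the digest with the stored salt.
assert md5_store("s3cret", salt)[1] == stored
```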