Effective Engineering Outreach through an Undergraduate Mentoring Team and Module Database
ERIC Educational Resources Information Center
Young, Colin; Butterfield, Anthony E.
2014-01-01
The rising need for engineers has led to increased interest in community outreach in engineering departments nationwide. We present a sustainable outreach model involving trained undergraduate mentors to build ties with K-12 teachers and students. An associated online module database of chemical engineering demonstrations, available to educators…
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications run on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we present the current status of the service after almost three years of operations, some insight into our redesigned software engineering, and its near-future evolution.
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN, a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, the open community version of MySQL and a single-instance Oracle database server are offered. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.
BioCarian: search engine for exploratory searches in heterogeneous biological databases.
Zaki, Nazar; Tennakoon, Chandana
2017-10-02
A large number of biological databases are publicly available to scientists on the web, and many private databases are generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Integration and querying of biological databases can therefore be facilitated by semantic web techniques: heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact matches in these databases is trivial; exploratory searches, however, need customized solutions, especially when multiple databases are involved. This process is cumbersome and time-consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form; we first convert these tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets: it allows complex queries to be constructed, and has additional features such as ranking facet values by several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. Advanced users can run SPARQL queries directly on the databases and, using this feature, incorporate federated searches of SPARQL endpoints.
We used the search engine to perform an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com. We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
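The facet-value ranking described above can be illustrated with a small sketch (plain Python with hypothetical data; BioCarian itself operates over SPARQL/RDF, and the field names below are illustrative only): facet values are counted across records so that the most frequent values can be surfaced first when the list of choices is large.

```python
from collections import Counter

# Hypothetical records, as if flattened from a tabular database that was
# converted to RDF; the facet names "virus" and "gene" are assumptions.
records = [
    {"virus": "HPV16", "gene": "TP53"},
    {"virus": "HPV16", "gene": "MYC"},
    {"virus": "HBV",   "gene": "TERT"},
    {"virus": "HPV16", "gene": "TERT"},
    {"virus": "HBV",   "gene": "MYC"},
]

def ranked_facet_values(records, facet, top_n=3):
    """Rank the values of one facet by frequency, most relevant first."""
    counts = Counter(r[facet] for r in records if facet in r)
    return counts.most_common(top_n)

print(ranked_facet_values(records, "virus"))
# the most frequent virus value is listed first
```

A real faceted interface would combine several such rankings (frequency, relevance to the current query, and so on), but the core operation is this frequency count per facet.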
Whetzel, Patricia L.; Grethe, Jeffrey S.; Banks, Davis E.; Martone, Maryann E.
2015-01-01
The NIDDK Information Network (dkNET; http://dknet.org) was launched to serve the needs of basic and clinical investigators in metabolic, digestive and kidney disease by facilitating access to research resources that advance the mission of the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). By research resources, we mean the multitude of data, software tools, materials, services, projects and organizations available to researchers in the public domain. Most of these are accessed via web-accessible databases or web portals, each developed, designed and maintained by numerous different projects, organizations and individuals. While many of the large government-funded databases, maintained by agencies such as the European Bioinformatics Institute and the National Center for Biotechnology Information, are well known to researchers, many more that have been developed by and for the biomedical research community are unknown or underutilized. At least part of the problem is the nature of dynamic databases, which are considered part of the “hidden” web, that is, content that is not easily accessed by search engines. dkNET was created specifically to address the challenge of connecting researchers to research resources via these types of community databases and web portals. dkNET functions as a “search engine for data”, searching across millions of database records contained in hundreds of biomedical databases developed and maintained by independent projects around the world. A primary focus of dkNET is the set of centers and projects specifically created to provide high-quality data and resources to NIDDK researchers. Through the novel data ingest process used in dkNET, additional data sources can easily be incorporated, allowing it to scale with the growth of digital data and the needs of the dkNET community. Here, we provide an overview of the dkNET portal and its functions.
We show how dkNET can be used to address a variety of use cases that involve searching for research resources. PMID:26393351
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
This is a manual for users of the Software Engineering and Ada Database (SEAD). SEAD was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities that are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. The manual describes the organization of the data in SEAD and the user interface from logging in to logging out, and concludes with a ten-chapter tutorial on how to use the information in SEAD. Two appendices provide quick reference for logging into SEAD and using the keyboard of an IBM 3270 or VT100 computer terminal.
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
Multi-source and ontology-based retrieval engine for maize mutant phenotypes
Green, Jason M.; Harnsomburana, Jaturon; Schaeffer, Mary L.; Lawrence, Carolyn J.; Shyu, Chi-Ren
2011-01-01
Model Organism Databases, including the various plant genome databases, collect and enable access to massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While a variety of basic browsing and search capabilities are available to allow researchers to query and peruse the names and attributes of phenotypic data, next-generation search mechanisms that allow querying and ranking of text descriptions are much less common. In addition, the plant community needs an innovative way to leverage the existing links in these databases to search groups of text descriptions simultaneously. Furthermore, though much time and effort have been afforded to the development of plant-related ontologies, the knowledge embedded in these ontologies remains largely unused in available plant search mechanisms. Addressing these issues, we have developed a unique search engine for mutant phenotypes from MaizeGDB. This advanced search mechanism integrates various text description sources in MaizeGDB to aid a user in retrieving desired mutant phenotype information. Currently, descriptions of mutant phenotypes, loci and gene products are utilized collectively for each search, though expansion of the search mechanism to include other sources is straightforward. The retrieval engine, to our knowledge, is the first engine to exploit the content and structure of available domain ontologies, currently the Plant and Gene Ontologies, to expand and enrich retrieval results in major plant genomic databases. Database URL: http://www.PhenomicsWorld.org/QBTA.php PMID:21558151
Gamberini, R; Del Buono, D; Lolli, F; Rimini, B
2013-11-01
The definition and utilisation of engineering indexes in the field of Municipal Solid Waste Management (MSWM) is an issue of interest for technicians and scientists, and is widely discussed in the literature. Specifically, the availability of consolidated engineering indexes is useful when new waste collection services are designed, and when their performance is evaluated after a warm-up period. However, most published works in the field of MSWM complete their study with an analysis of isolated case studies. Conversely, decision makers require tools for information collection and exchange in order to trace the trends of these engineering indexes across large experiments. In this paper, common engineering indexes are presented and their values analysed in virtuous Italian communities, with the aim of contributing to the creation of a useful database whose data could be used during experiments, by indicating examples of MSWM demand profiles and the costs required to manage them. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Moro, Teresa T.; Savage, Teresa A.; Gehlert, Sarah
2017-01-01
Background: The nature and quality of end-of-life care received by adults with intellectual disabilities in out-of-home, non-institutional community agency residences in Western nations is not well understood. Method: A range of databases and search engines were used to locate conceptual, clinical and research articles from relevant peer-reviewed…
An Hour with the Internet Curmudgeon.
ERIC Educational Resources Information Center
Morgovsky, Joel
While the Internet undeniably contains an enormous amount of information, community colleges should consider some key issues before joining the headlong rush toward virtual classrooms. First, information can be very difficult to find on the Internet. Although search engines, web databases, and subject directories have been developed to help users…
IntegromeDB: an integrated system and biological search engine.
Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia
2012-01-19
With the growth of biological data in volume and heterogeneity, web search engines have become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning of data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.
ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.
McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F
2007-07-27
We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy-to-use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org, and the data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
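As a sketch of the kind of query such a relational enzyme database supports (SQLite stands in for MySQL here, and the table and column names are hypothetical, not ExplorEnz's actual schema), an EC-number lookup might look like:

```python
import sqlite3

# In-memory SQLite database; the schema below is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE entry (
        ec_num        TEXT PRIMARY KEY,  -- e.g. '1.1.1.1'
        accepted_name TEXT
    )
""")
conn.executemany(
    "INSERT INTO entry VALUES (?, ?)",
    [("1.1.1.1", "alcohol dehydrogenase"),
     ("2.7.1.1", "hexokinase")],
)

# Simple query: find an enzyme's accepted name by EC number.
row = conn.execute(
    "SELECT accepted_name FROM entry WHERE ec_num = ?", ("1.1.1.1",)
).fetchone()
print(row[0])  # alcohol dehydrogenase
```

The advanced search engine mentioned in the abstract would layer more complex predicates (name fragments, reaction text, history) over queries of this basic shape.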
ASGARD: an open-access database of annotated transcriptomes for emerging model arthropod species.
Zeng, Victor; Extavour, Cassandra G
2012-01-01
The increased throughput and decreased cost of next-generation sequencing (NGS) have shifted the bottleneck in genomic research from sequencing to annotation, analysis and accessibility. This is particularly challenging for research communities working on organisms that lack the basic infrastructure of a sequenced genome, or an efficient way to utilize whatever sequence data may be available. Here we present a new database, the Assembled Searchable Giant Arthropod Read Database (ASGARD). This database is a repository and search engine for transcriptomic data from arthropods that are of high interest to multiple research communities but currently lack sequenced genomes. We demonstrate the functionality and utility of ASGARD using de novo assembled transcriptomes from the milkweed bug Oncopeltus fasciatus, the cricket Gryllus bimaculatus and the amphipod crustacean Parhyale hawaiensis. We have annotated these transcriptomes to assign putative orthology, determine coding regions, identify protein domains and add Gene Ontology (GO) term annotations to all possible assembly products. ASGARD allows users to search all assemblies by orthology annotation, GO term annotation or Basic Local Alignment Search Tool. User-friendly features of ASGARD include search-term auto-completion suggestions based on database content, the ability to download assembly product sequences in FASTA format, direct links to NCBI data for predicted orthologs and graphical representation of the location of protein domains and matches to similar sequences from the NCBI non-redundant database. ASGARD will be a useful repository for transcriptome data from future NGS studies on these and other emerging model arthropods, regardless of sequencing platform, assembly or annotation status. This database thus provides easy, one-stop access to multi-species annotated transcriptome information.
We anticipate that this database will be useful for members of multiple research communities, including developmental biology, physiology, evolutionary biology, ecology, comparative genomics and phylogenomics. Database URL: asgard.rc.fas.harvard.edu.
Online World Conference and Expo: A Zillion Things at Once.
ERIC Educational Resources Information Center
Chuck, Lysbeth B.
1997-01-01
Presents the keynote speakers of the Online World 1997 conference, as well as HotBot and other search engines, the CyberClinic tracks (Practical Searching, Resource Management, Trends and Technology, Corporate Electronic Publishing, Content Reviews, and Roundtable Discussions), Web-based communities, and an exhibited database of over 12,000…
Cachat, Jonathan; Bandrowski, Anita; Grethe, Jeffery S; Gupta, Amarnath; Astakhov, Vadim; Imam, Fahim; Larson, Stephen D; Martone, Maryann E
2012-01-01
The number of neuroscience resources (databases, tools, materials, and networks) available via the Web continues to expand, particularly in light of newly implemented data-sharing policies required by funding agencies and journals. However, the nature of dense, multifaceted neuroscience data and the design of classic search engine systems make efficient, reliable, and relevant discovery of such resources a significant challenge. This challenge is especially pertinent for online databases, whose dynamic content is largely opaque to contemporary search engines. The Neuroscience Information Framework (NIF) was initiated to address this problem of finding and utilizing neuroscience-relevant resources. Since its first production release in 2008, NIF has been surveying the resource landscape for the neurosciences, identifying relevant resources and working to make them easily discoverable by the neuroscience community. In this chapter, we provide a survey of the resource landscape for neuroscience: what types of resources are available, how many there are, what they contain, and most importantly, ways in which these resources can be utilized by the research community to advance neuroscience research. Copyright © 2012 Elsevier Inc. All rights reserved.
ProtaBank: A repository for protein design and engineering data.
Wang, Connie Y; Chang, Paul M; Ary, Marie L; Allen, Benjamin D; Chica, Roberto A; Mayo, Stephen L; Olafson, Barry D
2018-03-25
We present ProtaBank, a repository for storing, querying, analyzing, and sharing protein design and engineering data in an actively maintained and updated database. ProtaBank provides a format to describe and compare all types of protein mutational data, spanning a wide range of properties and techniques. It features a user-friendly web interface and programming layer that streamlines data deposition and allows for batch input and queries. The database schema design incorporates a standard format for reporting protein sequences and experimental data that facilitates comparison of results across different data sets. A suite of analysis and visualization tools is provided to facilitate discovery, to guide future designs, and to benchmark and train new predictive tools and algorithms. ProtaBank will provide a valuable resource to the protein engineering community by storing and safeguarding newly generated data, allowing for fast searching and identification of relevant data from the existing literature, and exploring correlations between disparate data sets. ProtaBank invites researchers to contribute data to the database to make it accessible for search and analysis. ProtaBank is available at https://protabank.org. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
The development of variable MLM editor and TSQL translator based on Arden Syntax in Taiwan.
Liang, Yan Ching; Chang, Polun
2003-01-01
The Arden Syntax standard has been utilized in the medical informatics community in several countries during the past decade, but it has never been used in nursing in Taiwan. We have developed a system that acquires medical expert knowledge in Chinese and translates the data and logic slots into the TSQL language. The system implements a TSQL translator that interprets the database queries referred to in the knowledge modules. Decision-support systems in medicine are data-driven systems in which TSQL triggers, acting as an inference engine, can be used to facilitate linking to a database.
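A minimal sketch of the data-driven idea (SQLite standing in for Transact-SQL, with hypothetical table names and a hypothetical clinical rule): a trigger fires when new data arrive and, if its logic condition holds, writes an alert, much as an MLM's evoke, logic, and action slots would.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lab_result (patient_id INTEGER, test TEXT, value REAL);
    CREATE TABLE alert (patient_id INTEGER, message TEXT);

    -- The trigger plays the role of the inference engine: data arrival
    -- evokes it, and the WHEN clause encodes the logic slot. The
    -- threshold below is an illustrative placeholder, not clinical advice.
    CREATE TRIGGER high_glucose_alert
    AFTER INSERT ON lab_result
    WHEN NEW.test = 'glucose' AND NEW.value > 180
    BEGIN
        INSERT INTO alert
        VALUES (NEW.patient_id, 'hyperglycemia: review insulin');
    END;
""")

conn.execute("INSERT INTO lab_result VALUES (1, 'glucose', 250)")
conn.execute("INSERT INTO lab_result VALUES (2, 'glucose', 95)")
print(conn.execute("SELECT patient_id, message FROM alert").fetchall())
# only patient 1's result satisfies the trigger condition
```

In the described system, such triggers would be generated by translating an MLM's data and logic slots rather than written by hand.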
NASA Astrophysics Data System (ADS)
Pathak, S. K.; Deshpande, N. J.
2007-10-01
The present scenario of the INDEST Consortium among engineering, science and technology (including astronomy and astrophysics) libraries in India is discussed. The Indian National Digital Library in Engineering Sciences & Technology (INDEST) Consortium is a major initiative of the Ministry of Human Resource Development, Government of India. The INDEST Consortium provides access to 16 full-text e-resources and 7 bibliographic databases for 166 member institutions, which are taking advantage of cost-effective access to premier resources in engineering, science and technology, including astronomy and astrophysics. Member institutions can access over 6500 e-journals from 1092 publishers. Of these, over 150 e-journals are exclusively for the astronomy and physics community. The current study also presents a comparative analysis of the key features of nine major services, viz. ACM Digital Library, ASCE Journals, ASME Journals, EBSCO Databases (Business Source Premier), Elsevier's Science Direct, Emerald Full Text, IEEE/IEE Electronic Library Online (IEL), ProQuest ABI/INFORM and Springer Verlag's Link. In this paper, the limitations of this consortium are also discussed.
AphidBase: A centralized bioinformatic resource for annotation of the pea aphid genome
Legeai, Fabrice; Shigenobu, Shuji; Gauthier, Jean-Pierre; Colbourne, John; Rispe, Claude; Collin, Olivier; Richards, Stephen; Wilson, Alex C. C.; Tagu, Denis
2015-01-01
AphidBase is a centralized bioinformatic resource that was developed to facilitate community annotation of the pea aphid genome by the International Aphid Genomics Consortium (IAGC). The AphidBase Information System, designed to organize and distribute genomic data and annotations for a large international community, was constructed using open-source software tools from the Generic Model Organism Database (GMOD). The system includes Apollo and GBrowse utilities as well as a wiki, BLAST search capabilities and a full-text search engine. AphidBase strongly supported community cooperation and coordination in the curation of gene models during community annotation of the pea aphid genome. AphidBase can be accessed at http://www.aphidbase.com. PMID:20482635
GenderMedDB: an interactive database of sex and gender-specific medical literature.
Oertelt-Prigione, Sabine; Gohlke, Björn-Oliver; Dunkel, Mathias; Preissner, Robert; Regitz-Zagrosek, Vera
2014-01-01
Searches for sex- and gender-specific publications are complicated by the absence of a specific algorithm within search engines and by the lack of adequate archives to collect the retrieved results. We previously addressed this issue by initiating the first systematic archive of medical literature containing sex- and/or gender-specific analyses. This initial collection has now been greatly enlarged and reorganized as a free, user-friendly database with multiple functions: GenderMedDB (http://gendermeddb.charite.de). GenderMedDB retrieves the included publications from the PubMed database. Manuscripts containing sex- and/or gender-specific analyses are continuously screened and the relevant findings organized systematically into disciplines and diseases. Publications are furthermore classified by research type, subject and participant numbers. More than 11,000 abstracts are currently included in the database, after screening of more than 40,000 publications. The main functions of the database include searches by publication data or content analysis based on pre-defined classifications. In addition, registered users can upload relevant publications, access descriptive publication statistics and interact in an open user forum. Overall, GenderMedDB offers the advantages of a discipline-specific search engine as well as the functions of a participative tool for the gender medicine community.
Database for propagation models
NASA Astrophysics Data System (ADS)
Kantak, Anil V.
1991-07-01
A propagation researcher, or a systems engineer who intends to use the results of a propagation experiment, generally faces various database tasks such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted, or the same experiment is carried out at a different location generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to apply the computer hardware and software towards the desired end. This situation would be eased considerably by an easily accessible propagation database containing all the accepted (standardized) propagation phenomena models approved by the propagation research community; handling of the data would also become easier for the user. Such a database can stimulate the growth of propagation research only if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. The database may also help researchers in that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
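The core idea above, one shared registry of accepted models that any researcher's data can be passed through, can be sketched as follows. The registry key and the power-law coefficients here are illustrative placeholders (the standardized frequency-dependent values would come from the community-approved tables, not from this sketch).

```python
# Sketch: standardized propagation models kept in one registry so every
# researcher passes measured data through the same accepted code.

def rain_specific_attenuation(rain_rate_mm_per_h, k=0.0001, alpha=1.0):
    """Specific attenuation (dB/km) in the common power-law form
    gamma = k * R**alpha. k and alpha are frequency-dependent and would
    be taken from standardized tables; the defaults here are placeholders.
    """
    return k * rain_rate_mm_per_h ** alpha

# The "database" of accepted models, keyed by a standardized name.
MODEL_DATABASE = {
    "rain_specific_attenuation": rain_specific_attenuation,
}

# A user selects an accepted model by name and runs their data through it,
# so a second researcher can reproduce the result without new software.
model = MODEL_DATABASE["rain_specific_attenuation"]
print(model(50.0))  # attenuation for a 50 mm/h rain rate, placeholder k, alpha
```

A real implementation would also version the models and record which coefficient tables were used, so results remain independently checkable.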
CERT tribal internship program. Final intern report: Maria Perez, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-09-01
Historically, American Indian Tribes have lacked sufficient numbers of trained technical personnel from their communities to serve their communities; tribal expertise in the fields of science, business and engineering is extremely rare, and programs to encourage these disciplines are almost non-existent. Consequently, Tribes have made crucial decisions about their land and other facets of Tribal existence based upon outside technical expertise, such as that provided by the United States government and/or private industries. These outside expert opinions rarely took into account the traditional and cultural values of the Tribes being advised. The purpose of this internship was twofold: create and maintain a working relationship between CERT and Colorado State University (CSU) to plan for the Summit on Tribal human resource development; and evaluate and engage in current efforts to strengthen the Tribal Resource Institute in Business, Engineering and Science (TRIBES) program. The intern lists the following as the project results: positive interactions and productive meetings between CERT and CSU; information gathered from Tribes; CERT database structure modification; experience as a facilitator in participatory methods; preliminary job descriptions for staff of future TRIBES programs; and additions to the intern's personal database of professional contacts and resources.
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.
Upgrades to the TPSX Material Properties Database
NASA Technical Reports Server (NTRS)
Squire, T. H.; Milos, F. S.; Partridge, Harry (Technical Monitor)
2001-01-01
The TPSX Material Properties Database is a web-based tool that serves as a database for properties of advanced thermal protection materials. TPSX provides an easy user interface for retrieving material property information in a variety of forms, both graphical and textual. The primary purpose and advantage of TPSX is to maintain a high-quality source of often-used thermal protection material properties in a convenient, easily accessible form, for distribution to government and aerospace industry communities. Last year a major upgrade to the TPSX web site was completed. This year, through the efforts of researchers at several NASA centers, the Office of the Chief Engineer awarded funds to update and expand the databases in TPSX. The FY01 effort focuses on updating and correcting the Ames and Johnson thermal protection materials databases. In this session we will summarize the improvements made to the web site last year, report on the status of the on-going database updates, describe the planned upgrades for FY02 and FY03, and provide a demonstration of TPSX.
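To make the idea of retrieving temperature-dependent property data concrete, here is a minimal sketch (not the real TPSX interface) of a material record with a thermal-conductivity table and linear interpolation between tabulated points; the material name and all values are invented for illustration.

```python
# Hypothetical material-property lookup with linear interpolation.
from bisect import bisect_left

materials = {
    "tile-A": {  # invented TPS material record
        "density_kg_m3": 192.0,
        # (temperature K, thermal conductivity W/m-K) pairs, sorted by T
        "k_vs_T": [(300, 0.05), (600, 0.09), (1200, 0.20)],
    }
}

def conductivity(name, T):
    """Thermal conductivity at temperature T, clamped at table ends."""
    pts = materials[name]["k_vs_T"]
    Ts = [t for t, _ in pts]
    if T <= Ts[0]:
        return pts[0][1]
    if T >= Ts[-1]:
        return pts[-1][1]
    i = bisect_left(Ts, T)
    (t0, k0), (t1, k1) = pts[i - 1], pts[i]
    return k0 + (k1 - k0) * (T - t0) / (t1 - t0)

print(conductivity("tile-A", 450))  # halfway between the 300 K and 600 K points
```

A database like TPSX additionally tracks units, data provenance, and graphical output, but tabulate-and-interpolate is the basic retrieval pattern.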
Adjacency and Proximity Searching in the Science Citation Index and Google
2005-01-01
major database search engines, including commercial S&T database search engines (e.g., Science Citation Index (SCI), Engineering Compendex (EC...PubMed, OVID), Federal agency award database search engines (e.g., NSF, NIH, DOE, EPA, as accessed in Federal R&D Project Summaries), Web search engines (e.g...searching. Some database search engines allow strict constrained co-occurrence searching as a user option (e.g., OVID, EC), while others do not (e.g., SCI
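The proximity-searching capability discussed above can be sketched in a few lines: find documents in which two query terms co-occur within a window of N words. The toy corpus and tokenization are invented for illustration.

```python
# Minimal proximity (adjacency/NEAR) search over a tiny corpus.

def within_proximity(text, term_a, term_b, n):
    """True if term_a and term_b occur within n words of each other."""
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == term_a]
    pos_b = [i for i, w in enumerate(words) if w == term_b]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

corpus = {
    "d1": "the database search engine indexes citation records",
    "d2": "search results from the engine were ranked by citation count",
}

# Adjacency search: "search" within 1 word of "engine".
hits = [d for d, t in corpus.items() if within_proximity(t, "search", "engine", 1)]
print(hits)  # ['d1']
```

Engines that support only strict co-occurrence effectively set n to the document length; constrained proximity, as in OVID-style NEAR operators, is the n-bounded case.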
DFACS - DATABASE, FORMS AND APPLICATIONS FOR CABLING AND SYSTEMS, VERSION 3.30
NASA Technical Reports Server (NTRS)
Billitti, J. W.
1994-01-01
DFACS is an interactive multi-user computer-aided engineering tool for system level electrical integration and cabling engineering. The purpose of the program is to provide the engineering community with a centralized database for entering and accessing system functional definitions, subsystem and instrument-end circuit pinout details, and harnessing data. The primary objective is to provide an instantaneous single point of information interchange, thus avoiding error-prone, time-consuming, and costly multiple-path data shuttling. The DFACS program, which is centered around a single database, has built-in menus that provide easy data input and access for all involved system, subsystem, and cabling personnel. The DFACS program allows parallel design of circuit data sheets and harness drawings. It also recombines raw information to automatically generate various project documents and drawings including the Circuit Data Sheet Index, the Electrical Interface Circuits List, Assembly and Equipment Lists, Electrical Ground Tree, Connector List, Cable Tree, Cabling Electrical Interface and Harness Drawings, Circuit Data Sheets, and ECR List of Affected Interfaces/Assemblies. Real time automatic production of harness drawings and circuit data sheets from the same data reservoir ensures instant system and cabling engineering design harmony. DFACS also contains automatic wire routing procedures and extensive error checking routines designed to minimize the possibility of engineering error. DFACS is designed to run on DEC VAX series computers under VMS using Version 6.3/01 of INGRES QUEL/OSL, a relational database system which is available through Relational Technology, Inc. The program is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. DFACS was developed in 1987 and last updated in 1990. DFACS is a copyrighted work with all copyright vested in NASA. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. 
INGRES QUEL/OSL is a trademark of Relational Technology, Inc.
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
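The contrast between the vertical (entity-attribute-value) layout and a sparse column-store can be sketched directly; the attribute names and values below are invented for illustration, not taken from the paper's benchmarks.

```python
# EAV rows versus a sparse, column-oriented layout of the same data.

eav = [  # vertical schema: one row per (entity, attribute, value)
    (1, "diagnosis", "asthma"),
    (1, "fev1", 2.1),
    (2, "diagnosis", "copd"),
]

# Equivalent sparse column-store: one mapping per attribute, keyed by
# entity id; entities lacking an attribute cost no storage at all.
columns = {}
for ent, attr, val in eav:
    columns.setdefault(attr, {})[ent] = val

# Attribute-centric query ("all FEV1 values") touches a single column.
print(columns["fev1"])

# Entity-centric query reassembles a row from whichever columns hold it.
row1 = {attr: col[1] for attr, col in columns.items() if 1 in col}
print(row1)
```

Schema evolution is also cheap in this layout: adding a new attribute is just a new column mapping, with no table-wide ALTER needed, which is the property that motivates such designs for sparse biomedical data.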
Protein Information Resource: a community resource for expert annotation of protein data
Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy
2001-01-01
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) have recently become standard tools in e-science. At the same time, the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows is still a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: a database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: a web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: a desktop environment that provides access capabilities similar to those of the SHIWA Portal; however, it runs on the users' desktops/laptops instead of a portal server. 4.
Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third-party workflow engines, provides support for the most widely used academic workflow engines, and it can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
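At its core, every workflow engine listed above does the same thing: execute the steps of a directed acyclic graph in dependency order, passing outputs forward. The following is a toy sketch of that pattern (not any SHIWA component); the step names and functions are invented.

```python
# Minimal DAG workflow executor using the standard library's graphlib.
from graphlib import TopologicalSorter

# Each step: (function taking a dict of dependency results, [dependencies]).
steps = {
    "fetch":  (lambda deps: [3, 1, 2],                  []),
    "sort":   (lambda deps: sorted(deps["fetch"]),      ["fetch"]),
    "report": (lambda deps: f"min={deps['sort'][0]}",   ["sort"]),
}

def run(steps):
    """Execute steps in topological order, wiring outputs to inputs."""
    order = TopologicalSorter({k: v[1] for k, v in steps.items()}).static_order()
    results = {}
    for name in order:
        fn, dep_names = steps[name]
        results[name] = fn({d: results[d] for d in dep_names})
    return results

print(run(steps)["report"])  # min=1
```

Real engines add scheduling onto remote DCIs, data staging, and fault tolerance, and an interoperability layer like SHIWA's additionally translates such graphs between the engines' workflow languages.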
Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.
Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas
2017-07-24
Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them will eventually be available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help to understand the driving factors of microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed nearest-neighbor search algorithms capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result for a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state-of-the-art tool.
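The Earth Mover's Distance underlying weighted UniFrac has a well-known closed form in one dimension: for two histograms on the same equally spaced bins with equal total mass, the EMD equals the L1 distance between their cumulative sums. The sketch below shows that special case only (real UniFrac works over a phylogenetic tree, not a line); the abundance profiles are invented.

```python
# 1-D Earth Mover's Distance via cumulative sums.
from itertools import accumulate

def emd_1d(p, q):
    """EMD between two histograms on equally spaced bins; p and q
    must have the same length and the same total mass."""
    return sum(abs(cp - cq) for cp, cq in zip(accumulate(p), accumulate(q)))

sample_a = [0.5, 0.5, 0.0]   # invented abundance profile
sample_b = [0.0, 0.5, 0.5]   # same mass shifted one bin to the right
print(emd_1d(sample_a, sample_b))  # 0.5 mass moved 2 bins -> 1.0
```

Because the metric reduces to simple cumulative differences here, it also illustrates why careful implementation matters: the general (tree- or graph-structured) EMD requires solving a transportation problem and is far more expensive, which is what motivates the optimizations and sublinear nearest-neighbor indexing described above.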
Ju, Feng; Zhang, Tong
2015-11-03
Recent advances in DNA sequencing technologies have prompted the widespread application of metagenomics for the investigation of novel bioresources (e.g., industrial enzymes and bioactive molecules) and unknown biohazards (e.g., pathogens and antibiotic resistance genes) in natural and engineered microbial systems across multiple disciplines. This review discusses rigorous experimental design and sample preparation in the context of applying metagenomics in environmental sciences and biotechnology. Moreover, this review summarizes the principles, methodologies, and state-of-the-art bioinformatics procedures, tools and database resources for metagenomics applications and discusses two popular strategies (analysis of unassembled reads versus assembled contigs/draft genomes) for gaining quantitative or qualitative insights into microbial community structure and functions. Overall, this review aims to facilitate more extensive application of metagenomics in the investigation of uncultured microorganisms, novel enzymes, microbe-environment interactions, and biohazards in biotechnological applications where microbial communities are engineered for bioenergy production, wastewater treatment, and bioremediation.
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
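The innermost operation of any database search engine is matching an observed precursor mass against theoretical peptide masses computed from the sequence database. The sketch below shows only that mass-lookup step (scoring of fragment spectra, the other half of a real engine, is omitted); the residue masses are standard monoisotopic values, while the peptide list is invented.

```python
# Naive precursor-mass lookup against a peptide database.

RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "L": 113.08406, "K": 128.09496}
WATER = 18.01056  # mass of H2O added to the summed residue masses

def peptide_mass(seq):
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def candidates(database, observed, tol=0.01):
    """Peptides whose theoretical mass lies within tol Da of observed."""
    return [p for p in database if abs(peptide_mass(p) - observed) <= tol]

db = ["GASP", "VLKA", "GGGG"]
print(candidates(db, peptide_mass("VLKA")))  # ['VLKA']
```

Tools such as those run through SearchGUI then score each candidate's predicted fragment ions against the measured tandem spectrum and estimate false discovery rates, which is where the engines differ most.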
Powder Injection Molding of Ceramic Engine Components for Transportation
NASA Astrophysics Data System (ADS)
Lenz, Juergen; Enneti, Ravi K.; Onbattuvelli, Valmikanathan; Kate, Kunal; Martin, Renee; Atre, Sundar
2012-03-01
Silicon nitride has been the favored material for manufacturing high-efficiency engine components for transportation due to its high temperature stability, good wear resistance, excellent corrosion resistance, thermal shock resistance, and low density. The use of silicon nitride in engine components greatly depends on the ability to fabricate near net-shape components economically. The absence of a material database for design and simulation has further restricted the engineering community in developing parts from silicon nitride. In this paper, the design and manufacturability of silicon nitride engine rotors for unmanned aerial vehicles by the injection molding process are discussed. The feedstock material property data obtained from experiments were used to simulate the flow of the material during injection molding. The areas susceptible to the formation of defects during the injection molding process of the engine component were identified from the simulations. A test sample was successfully injection molded using the feedstock and sintered to 99% density without formation of significant observable defects.
Hanson, Andrew D; Pribat, Anne; Waller, Jeffrey C; de Crécy-Lagard, Valérie
2009-12-14
Like other forms of engineering, metabolic engineering requires knowledge of the components (the 'parts list') of the target system. Lack of such knowledge impairs both rational engineering design and diagnosis of the reasons for failures; it also poses problems for the related field of metabolic reconstruction, which uses a cell's parts list to recreate its metabolic activities in silico. Despite spectacular progress in genome sequencing, the parts lists for most organisms that we seek to manipulate remain highly incomplete, due to the dual problem of 'unknown' proteins and 'orphan' enzymes. The former are all the proteins deduced from genome sequence that have no known function, and the latter are all the enzymes described in the literature (and often catalogued in the EC database) for which no corresponding gene has been reported. Unknown proteins constitute up to about half of the proteins in prokaryotic genomes, and much more than this in higher plants and animals. Orphan enzymes make up more than a third of the EC database. Attacking the 'missing parts list' problem is accordingly one of the great challenges for post-genomic biology, and a tremendous opportunity to discover new facets of life's machinery. Success will require a co-ordinated community-wide attack, sustained over years. In this attack, comparative genomics is probably the single most effective strategy, for it can reliably predict functions for unknown proteins and genes for orphan enzymes. Furthermore, it is cost-efficient and increasingly straightforward to deploy owing to a proliferation of databases and associated tools.
National Institute of Standards and Technology Data Gateway
SRD 103a NIST ThermoData Engine Database (PC database for purchase) ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.
biochem4j: Integrated and extensible biochemical knowledge through graph databases.
Swainston, Neil; Batista-Navarro, Riza; Carbonell, Pablo; Dobson, Paul D; Dunstan, Mark; Jervis, Adrian J; Vinaixa, Maria; Williams, Alan R; Ananiadou, Sophia; Faulon, Jean-Loup; Mendes, Pedro; Kell, Douglas B; Scrutton, Nigel S; Breitling, Rainer
2017-01-01
Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and-crucially-the relationships between them. Such a resource should be extensible, such that newly discovered relationships-for example, those between novel, synthetic enzymes and non-natural products-can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists.
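The integrated-graph idea can be illustrated with a toy in-memory graph: nodes for a chemical, a reaction, an enzyme and a taxon, typed edges between them, and a traversal that answers a cross-resource query in one pass. All identifiers below are invented for the example, not biochem4j data.

```python
# Tiny typed-edge graph with a relationship-chain traversal.
from collections import defaultdict

edges = defaultdict(list)   # node -> [(relationship, neighbour)]

def link(a, rel, b):
    edges[a].append((rel, b))

link("chem:glucose", "substrate_of", "reac:R001")
link("reac:R001", "catalysed_by", "enz:E2.7.1.1")
link("enz:E2.7.1.1", "found_in", "tax:E.coli")

def follow(start, *rels):
    """Follow a chain of relationship types from a start node."""
    frontier = [start]
    for rel in rels:
        frontier = [b for n in frontier for r, b in edges[n] if r == rel]
    return frontier

# "Which organisms contain an enzyme catalysing a reaction of glucose?"
print(follow("chem:glucose", "substrate_of", "catalysed_by", "found_in"))
```

In a graph database such as Neo4j the same query is a single pattern match, whereas in a relational warehouse it would require a join per hop; that is the access-pattern advantage the paragraph above describes.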
Reactome graph database: Efficient access to complex pathway data
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
ThermoData Engine Database - Pure Compounds and Binary Mixtures
National Institute of Standards and Technology Data Gateway
SRD 103b NIST ThermoData Engine Version 6.0 - Pure Compounds and Binary Mixtures (PC database for purchase) This database contains property data for more than 21,000 pure compounds, 37,500 binary mixtures, 10,000 ternary mixtures, and 6,000 chemical reactions.
On Establishing Big Data Wave Breakwaters with Analytics (Invited)
NASA Astrophysics Data System (ADS)
Riedel, M.
2013-12-01
The Research Data Alliance Big Data Analytics (RDA-BDA) Interest Group seeks to develop community-based recommendations on feasible data analytics approaches to address scientific community needs of utilizing large quantities of data. RDA-BDA seeks to analyze different scientific domain applications and their potential use of various big data analytics techniques. A systematic classification of feasible combinations of analysis algorithms, analytical tools, data and resource characteristics and scientific queries will be covered in these recommendations. These combinations are complex, since a wide variety of different data analysis algorithms exist (e.g. specific algorithms using GPUs for analyzing brain images) that need to work together with multiple analytical tools, ranging from simple (iterative) map-reduce methods (e.g. with Apache Hadoop or Twister) to sophisticated higher-level frameworks that leverage machine learning algorithms (e.g. Apache Mahout). These computational analysis techniques are often augmented with visual analytics techniques (e.g. computational steering on large-scale high performance computing platforms) to put human judgement into the analysis loop, or with new approaches to databases that are designed to support new forms of unstructured or semi-structured data, as opposed to the rather traditional structured databases (e.g. relational databases). More recently, data analysis and the underpinning analytics frameworks also have to consider the energy footprints of underlying resources. To sum up, the aim of this talk is to provide pieces of information to understand big data analytics in the context of science and engineering, using the aforementioned classification as the lighthouse and as the frame of reference for a systematic approach.
This talk will provide insights about big data analytics methods in the context of science within various communities, and offers different views of how approaches based on correlation and causality provide complementary methods to advance science and engineering today. The RDA Big Data Analytics Group seeks to understand which approaches are not only technically feasible, but also scientifically feasible. The lighthouse goal of the RDA Big Data Analytics Group is a classification of clever combinations of various technologies and scientific applications, in order to provide clear recommendations to the scientific community on which approaches are technically and scientifically feasible.
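The "(iterative) map-reduce" methods mentioned above reduce to a simple three-phase pattern, shown here in plain Python rather than Hadoop or Twister; the records are invented.

```python
# Map-shuffle-reduce word count, the canonical map-reduce example.
from collections import defaultdict

records = ["big data analytics", "big data", "data"]

def map_phase(record):
    """Map each record to (key, value) pairs."""
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    """Group values by key, as the framework's shuffle stage would."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    """Reduce each key's values to a single result."""
    return {k: sum(vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(p for r in records for p in map_phase(r)))
print(counts)  # {'big': 2, 'data': 3, 'analytics': 1}
```

Frameworks like Hadoop run the map and reduce phases in parallel across machines and make the shuffle fault-tolerant; iterative variants such as Twister simply rerun the cycle until convergence.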
Municipal GIS incorporates database from pipe lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-05-01
League City, a coastal community of about 35,000 people in Galveston County, Texas, has developed an impressive municipal GIS program. The system represents a textbook example of what a municipal GIS can represent and produce. In 1987, the city engineer was authorized to begin developing the area information system. City survey personnel used state-of-the-art Global Positioning System (GPS) technology to establish a first-order monumentation program with a grid of 78 monuments set over 54 sq mi. Street, subdivision, survey, utilities, taxing criteria, hydrology, topography, environmental and other concerns were layered into the municipal GIS database program. Today, area developers submit all layout, design, and land use plan data to the city in digital format without hard copy. Multi-color maps with high-resolution graphics can be quickly generated for cross-referenced queries sensitive to political, environmental, engineering, taxing, and/or utility capacity jurisdictions. The designs of both the GIS and the database system are described.
NASA Technical Reports Server (NTRS)
McQuillen, John; Green, Robert D.; Henrie, Ben; Miller, Teresa; Chiaramonte, Fran
2014-01-01
The Physical Science Informatics (PSI) system is the next step in an effort to make NASA-sponsored flight data available to the scientific and engineering community, along with the general public. The experimental data, from six overall disciplines including Combustion Science, Fluid Physics, Complex Fluids, Fundamental Physics, and Materials Science, will present some unique challenges. Besides data in textual or numerical format, large portions of both the raw and analyzed data for many of these experiments are digital images and video, imposing large data storage requirements. In addition, the accessible data will include experiment design and engineering data (including applicable drawings), any analytical or numerical models, publications, reports, and patents, and any commercial products developed as a result of the research. The objectives of this paper are to: present the preliminary layout (Figure 2) of MABE data within the PSI database; obtain feedback on the layout; and present the procedure for obtaining access to this database.
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available to the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics today. Currently, many protein inference algorithms and implementations are available to the proteomics community. Protein assembly impacts the final results of the research, the quantitation values, and the final claims of the research manuscript.
Even though protein inference is a crucial step in proteomics data analysis, a comprehensive evaluation of the many different inference methods has never been performed. The Journal of Proteomics has previously published benchmarks of bioinformatics algorithms in proteomics studies (PMID: 26585461; PMID: 22728601), making clear the importance of such studies for the proteomics community and the journal's audience. This manuscript presents a new bioinformatics solution based on the KNIME/OpenMS platform that aims to provide a fair comparison of protein inference algorithms (https://github.com/KNIME-OMICS). Five different algorithms (ProteinProphet, MSBayesPro, ProteinLP, Fido, and PIA) were evaluated using the highly customizable workflow on four public datasets of varying complexity. Three popular database search engines (Mascot, X!Tandem, and MS-GF+) and combinations thereof were evaluated for every protein inference tool. In total, more than 186 protein lists were analyzed and carefully compared using three metrics for assessing the quality of the protein inference results: 1) the number of reported proteins, 2) the number of peptides per protein, and 3) the number of uniquely reported proteins per inference method. We also examined how many proteins were reported for each combination of search engines, protein inference algorithms, and parameters on each dataset. The results show that: 1) PIA or Fido seem to be good choices for the analyzed workflow, regarding not only the reported proteins and the high-quality identifications but also the required runtime; 2) merging the identifications of multiple search engines almost always gives more confident results and increases the number of peptides per protein group; 3) the use of databases containing not only the canonical sequences but also known isoforms of proteins has a small impact on the number of reported proteins.
Depending on the question behind the study, the detection of specific isoforms can compensate for the slightly shorter protein lists produced by parsimonious reporting. 4) The current workflow can easily be extended to support new algorithms and search engine combinations. Copyright © 2016. Published by Elsevier B.V.
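The parsimonious reporting mentioned above (reporting the smallest set of proteins that explains all identified peptides) can be sketched as a greedy set cover. This is an illustrative simplification, not the algorithm used by any of the benchmarked tools, and the protein/peptide names are invented:

```python
def infer_proteins(peptide_map):
    """Greedy minimal set cover: peptide_map maps protein -> set of its peptides."""
    uncovered = set().union(*peptide_map.values())
    reported = []
    while uncovered:
        # Pick the protein explaining the most still-unexplained peptides.
        best = max(peptide_map, key=lambda p: len(peptide_map[p] & uncovered))
        gained = peptide_map[best] & uncovered
        if not gained:
            break
        reported.append(best)
        uncovered -= gained
    return reported

peptides = {
    "P1": {"pepA", "pepB", "pepC"},
    "P2": {"pepB"},            # subset of P1, so parsimony drops it
    "P3": {"pepD"},
}
print(infer_proteins(peptides))  # ['P1', 'P3']
```

Note how P2 disappears from the report even though one of its peptides was identified; this is exactly the group-composition effect that makes different inference algorithms disagree on complex samples.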
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for and identifying chemical compounds is an important process in drug design and chemistry research. An efficient search engine requires close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches for representing, storing, and retrieving structures in a database system. In this paper, a general database framework that serves as a chemical compound search engine in an Oracle database is described. The framework is designed to eliminate data-type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation underscores the efficiency and simplicity of the framework.
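The idea of a domain-specific query layer on top of SQL can be sketched with a user-defined function registered in the database engine. The sketch below uses SQLite rather than Oracle, the schema is invented, and the "substructure" matcher is a substring placeholder (a real engine would parse the chemical structures), so only the query plumbing is illustrative:

```python
import sqlite3

# Hypothetical minimal schema: compounds stored as SMILES-like strings.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT, smiles TEXT)")
con.executemany("INSERT INTO compound (name, smiles) VALUES (?, ?)",
                [("ethanol", "CCO"), ("acetic acid", "CC(=O)O"), ("benzene", "c1ccccc1")])

def substructure_match(pattern, smiles):
    # Placeholder matcher: substring test standing in for structure matching.
    return 1 if pattern in smiles else 0

# Registering the matcher makes it callable from SQL, giving a tiny
# domain-specific query language on top of plain SQL.
con.create_function("substructure", 2, substructure_match)
rows = con.execute(
    "SELECT name FROM compound WHERE substructure('C(=O)O', smiles) = 1").fetchall()
print(rows)  # [('acetic acid',)]
```

Oracle offers the same pattern through PL/SQL functions and extensible indexing, which is the kind of mechanism a framework like the one described would build on.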
TRENDS: A flight test relational database user's guide and reference manual
NASA Technical Reports Server (NTRS)
Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.
1994-01-01
This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist both novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for new and intermediate-skilled users. Examples of the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but it does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a citable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
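Capability (i), combining search terms with ANDs and ORs, can be sketched as a conjunction of OR-groups evaluated over flat records. The record fields and query shape here are invented for illustration, not taken from dbEngine itself:

```python
# Minimal sketch of dbEngine-style term search: each record is a dict, and a
# query is a list of OR-groups that must all match (an AND of ORs).
def matches(record, query):
    text = " ".join(str(v).lower() for v in record.values())
    return all(any(term.lower() in text for term in or_group)
               for or_group in query)

records = [
    {"gene": "MT-CO1", "desc": "cytochrome c oxidase subunit"},
    {"gene": "MT-ND1", "desc": "NADH dehydrogenase subunit"},
]
# (oxidase OR dehydrogenase) AND subunit
query = [["oxidase", "dehydrogenase"], ["subunit"]]
hits = [r["gene"] for r in records if matches(r, query)]
print(hits)  # ['MT-CO1', 'MT-ND1']
```

A CGI program of this era would build such a query from HTML form fields and emit the matching records as hypertext links, as the abstract describes.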
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.
Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
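The object-based storage model described above (JSON metadata objects plus a query API) can be sketched as JSON documents in a relational table with a filter function over their fields. The schema, field names, and `search` helper below are invented for illustration; SnoVault's actual API and schemas differ:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE object (uuid TEXT PRIMARY KEY, body TEXT)")
experiments = [
    {"uuid": "e1", "assay": "ChIP-seq", "organism": "H. sapiens"},
    {"uuid": "e2", "assay": "RNA-seq", "organism": "M. musculus"},
]
for exp in experiments:
    con.execute("INSERT INTO object VALUES (?, ?)", (exp["uuid"], json.dumps(exp)))

def search(**filters):
    """Return uuids of stored objects whose JSON fields match all filters."""
    hits = []
    for uuid, body in con.execute("SELECT uuid, body FROM object"):
        doc = json.loads(body)
        if all(doc.get(k) == v for k, v in filters.items()):
            hits.append(uuid)
    return hits

print(search(assay="ChIP-seq"))  # ['e1']
```

Storing each object as a self-describing JSON document is what keeps the core engine independent of ENCODE or any genomic schema, as the abstract notes.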
SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata
Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael
2017-01-01
The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ (to store genomic data in the manner of ENCODE). The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package. PMID:28403240
Martone, Maryann E.; Tran, Joshua; Wong, Willy W.; Sargis, Joy; Fong, Lisa; Larson, Stephen; Lamont, Stephan P.; Gupta, Amarnath; Ellisman, Mark H.
2008-01-01
Databases have become integral parts of data management, dissemination and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D and 4D structural and protein distribution information from confocal, multiphoton and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the five years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data. PMID:18054501
Towards an integrated European strong motion data distribution
NASA Astrophysics Data System (ADS)
Luzi, Lucia; Clinton, John; Cauzzi, Carlo; Puglia, Rodolfo; Michelini, Alberto; Van Eck, Torild; Sleeman, Reinhoud; Akkar, Sinan
2013-04-01
Recent decades have seen a significant increase in the quality and quantity of strong motion data collected in Europe, as dense, often real-time, continuously monitored broadband strong motion networks have been constructed in many nations. There has been a concurrent increase in demand for access to strong motion data, not only from researchers for engineering and seismological studies, but also from civil authorities and seismic networks for the rapid assessment of ground motion and shaking intensity following significant earthquakes (e.g. ShakeMaps). Aside from a few notable exceptions on the national scale, databases providing access to strong motion data have not kept pace with these developments. In the framework of the EC infrastructure project NERA (2010-2014), which integrates key research infrastructures in Europe for monitoring earthquakes and assessing their hazard and risk, the network activity NA3 deals with the networking of acceleration networks and strong motion (SM) data. Within the NA3 activity two infrastructures are being constructed: i) a Rapid Response Strong Motion (RRSM) database that, following a strong event, automatically parameterises all available on-scale waveform data within the European Integrated waveform Data Archives (EIDA) and makes the waveforms easily available to the seismological community within minutes of an event; and ii) a European Strong Motion (ESM) database of accelerometric records, with associated metadata relevant to the earthquake engineering and seismology research communities, using standard manual processing that reflects the state of the art and research needs in these fields. These two separate repositories form the core infrastructures being built to distribute strong motion data in Europe, in order to guarantee rapid and long-term availability of high-quality waveform data to both the international scientific community and the hazard mitigation communities.
These infrastructures will provide access to strong motion data in an eventual EPOS seismological service. A working group on strong motion data is being created at ORFEUS in 2013. This body, consisting of experts in strong motion data collection, processing, and research from across Europe, will provide the umbrella organisation that will 1) have the political clout to negotiate data sharing agreements with strong motion data providers and 2) manage the software during the transition from the end of NERA to the EPOS community. We expect the community providing data to the RRSM and ESM to grow gradually under the supervision of ORFEUS, eventually including strong motion data from networks in all European countries that can adopt an open data policy.
Database assessment of CMIP5 and hydrological models to determine flood risk areas
NASA Astrophysics Data System (ADS)
Limlahapun, Ponthip; Fukui, Hiromichi
2016-11-01
Water-related disasters may not be addressed by a single scientific method. Based on this premise, we combined logical design, sequential coupling of model results, and database applications to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are flexible, manageable, and well suited to transferring data between systems, which makes them appropriate for flood monitoring. The resulting flood maps, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.
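The database-mediated model chain above (precipitation to discharge to inundation) can be sketched as staged writes to a shared table, with each model reading the previous stage's output. The transfer functions below are toy placeholders standing in for CMIP5, IFAS, and HEC, and all numbers are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE run (stage TEXT, day INTEGER, value REAL)")

# Stage 1: daily precipitation (mm/day), e.g. extracted from a CMIP5 scenario.
precip = [(1, 12.0), (2, 85.0), (3, 40.0)]
con.executemany("INSERT INTO run VALUES ('precip', ?, ?)", precip)

# Stage 2: a toy linear rainfall-runoff relation standing in for IFAS.
rows = con.execute("SELECT day, value FROM run WHERE stage='precip'").fetchall()
for day, p in rows:
    con.execute("INSERT INTO run VALUES ('discharge', ?, ?)", (day, 5.0 * p))

# Stage 3: flag days whose discharge exceeds a bankfull threshold (HEC stand-in).
flood_days = [day for day, q in
              con.execute("SELECT day, value FROM run WHERE stage='discharge'")
              if q > 300.0]
print(flood_days)  # [2]
```

Because every stage reads from and writes to the same database, a new model can be slotted into the chain without the upstream or downstream systems knowing about it, which is the integration benefit the abstract emphasises.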
The Permafrost Young Researchers Network (PYRN): Contribution to IPY's "Thermal State of Permafrost"
NASA Astrophysics Data System (ADS)
Johansson, M.; Lantuit, H.; Frauenfeld, O. W.
2007-12-01
The Permafrost Young Researchers Network (PYRN, www.pyrn.org) is a unique resource for students, young scientists, and engineers studying permafrost. It is an international organization fostering innovative collaboration, seeking to recruit, retain, and promote future generations of permafrost scientists and engineers. Initiated for and during IPY, PYRN directs the multi-disciplinary talents of its membership toward global awareness, knowledge, and response to permafrost-related challenges in a changing climate. Created as an education and outreach component of the International Permafrost Association (IPA), PYRN is a central database of permafrost information and science for more than 350 young researchers from 33 countries. PYRN distributes a newsletter, recognizes outstanding permafrost research by its members through an annual awards program, organizes training workshops (2007 in Abisko, Sweden and St. Petersburg, Russia), and contributes to the growth and future of the permafrost community. While networking forms the basis of PYRN's activities, the organization also seeks to establish itself as a driver of permafrost research for the IPY and beyond. We recently launched a series of initiatives on several continents aimed at providing young scientists and engineers with the means to conduct ground temperature monitoring in under-investigated permafrost regions. Focusing on sites not currently covered by the IPA's "Thermal State of Permafrost" project, the young investigators of PYRN will provide and use lightweight drills and temperature sensors to instrument shallow boreholes in those regions. The data and results will be incorporated in the global database on permafrost temperatures and made freely available to the scientific community, thereby contributing to the advance of permafrost science and the strengthening of the next generation of permafrost researchers.
On the Compliance of Women Engineers with a Gendered Scientific System.
Ghiasi, Gita; Larivière, Vincent; Sugimoto, Cassidy R
2015-01-01
There has been considerable effort in the last decade to increase the participation of women in engineering through various policies. However, there has been little empirical research on gender disparities in engineering to underpin the effective preparation, co-ordination, and implementation of science and technology (S&T) policies. This article aims to present a comprehensive gendered analysis of engineering publications across different specialties and to provide a cross-gender analysis of the research output and scientific impact of engineering researchers in academic, governmental, and industrial sectors. For this purpose, 679,338 engineering articles published from 2008 to 2013 were extracted from the Web of Science database and 974,837 authorships were analyzed. The structures of co-authorship collaboration networks in different engineering disciplines are examined, highlighting the role of female scientists in the diffusion of knowledge. The findings reveal that men dominate 80% of all scientific production in engineering. Women engineers publish their papers in journals with higher Impact Factors than their male peers, but their work receives lower recognition (fewer citations) from the scientific community. Engineers, regardless of their gender, contribute to the reproduction of male-dominated scientific structures by forming and repeating their collaborations predominantly with men. The results of this study call for the integration of data-driven gender-related policies into the existing S&T discourse.
Petaminer: Using ROOT for efficient data storage in MySQL database
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.
2010-04-01
High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project addresses the problem of efficient navigation through petabytes of HENP experimental data described with event-level TAG metadata, which is required by data-intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
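The column-oriented advantage cited above can be illustrated in a few lines: a selection that touches one TAG attribute only needs to read that attribute's array, not every full event record. The attribute names and values below are invented, and Python lists stand in for ROOT TTree branches:

```python
# Row-oriented layout: each event is a full record, so any query scans them all.
events_row_oriented = [
    {"run": 1, "n_tracks": 12, "energy": 5.2},
    {"run": 1, "n_tracks": 40, "energy": 9.8},
    {"run": 2, "n_tracks": 7,  "energy": 1.1},
]

# Column-oriented layout (one array per attribute, like a TTree branch).
columns = {
    "run":      [1, 1, 2],
    "n_tracks": [12, 40, 7],
    "energy":   [5.2, 9.8, 1.1],
}

# "SELECT count(*) WHERE n_tracks > 10" touches only the n_tracks column.
hits = sum(1 for n in columns["n_tracks"] if n > 10)
print(hits)  # 2
```

A storage engine that exposes such branches to the MySQL query processor lets SQL predicates on a single TAG attribute avoid deserialising whole events, which is the performance argument the abstract makes.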
Problems of information support in scientific research
NASA Astrophysics Data System (ADS)
Shamaev, V. G.; Gorshkov, A. B.
2015-11-01
This paper reports on the creation of the open-access Akustika portal (AKDATA.RU), designed to provide easy-to-read, searchable Russian-language information on acoustics and related topics. The absence of a Russian-language publication from foreign databases means that it is effectively lost to much of the scientific community. The portal has three interrelated sections: the Akustika information search system (ISS) (Acoustics), the full-text archive of the Akusticheskii Zhurnal (Acoustic Journal), and 'Signal'naya informatsiya' ('Signaling information') on acoustics. The paper presents a description of the Akustika ISS, including its structure, content, interface, and information search capabilities for basic and applied research in diverse areas of science, engineering, biology, medicine, etc. The intended users of the portal are physicists, engineers, and engineering technologists interested in expanding their research activities and seeking to increase their knowledge base. Those studying current trends in the Russian-language contribution to international science may also find the portal useful.
A growing opportunity: Community gardens affiliated with US hospitals and academic health centers
George, Daniel R.; Rovniak, Liza S.; Kraschnewski, Jennifer L.; Hanson, Ryan; Sciamanna, Christopher N.
2014-01-01
Background Community gardens can reduce public health disparities through promoting physical activity and healthy eating, growing food for underserved populations, and accelerating healing from injury or disease. Despite their potential to contribute to comprehensive patient care, no prior studies have investigated the prevalence of community gardens affiliated with US healthcare institutions, and the demographic characteristics of communities served by these gardens. Methods In 2013, national community garden databases, scientific abstracts, and public search engines (e.g., Google Scholar) were used to identify gardens. Outcomes included the prevalence of hospital-based community gardens by US regions, and demographic characteristics (age, race/ethnicity, education, household income, and obesity rates) of communities served by gardens. Results There were 110 healthcare-based gardens, with 39 in the Midwest, 25 in the South, 24 in the Northeast, and 22 in the West. Compared to US population averages, communities served by healthcare-based gardens had similar demographic characteristics, but significantly lower rates of obesity (27% versus 34%, P < .001). Conclusions Healthcare-based gardens are located in regions that are demographically representative of the US population, and are associated with lower rates of obesity in communities they serve. PMID:25599017
A Growing Opportunity: Community Gardens Affiliated with US Hospitals and Academic Health Centers.
George, Daniel R; Rovniak, Liza S; Kraschnewski, Jennifer L; Hanson, Ryan; Sciamanna, Christopher N
Community gardens can reduce public health disparities through promoting physical activity and healthy eating, growing food for underserved populations, and accelerating healing from injury or disease. Despite their potential to contribute to comprehensive patient care, no prior studies have investigated the prevalence of community gardens affiliated with US healthcare institutions, and the demographic characteristics of communities served by these gardens. In 2013, national community garden databases, scientific abstracts, and public search engines (e.g., Google Scholar) were used to identify gardens. Outcomes included the prevalence of hospital-based community gardens by US regions, and demographic characteristics (age, race/ethnicity, education, household income, and obesity rates) of communities served by gardens. There were 110 healthcare-based gardens, with 39 in the Midwest, 25 in the South, 24 in the Northeast, and 22 in the West. Compared to US population averages, communities served by healthcare-based gardens had similar demographic characteristics, but significantly lower rates of obesity (27% versus 34%, p < .001). Healthcare-based gardens are located in regions that are demographically representative of the US population, and are associated with lower rates of obesity in communities they serve.
BGD: a database of bat genomes.
Fang, Jianfei; Wang, Xuan; Mu, Shuo; Zhang, Shuyi; Dong, Dong
2015-01-01
Bats account for ~20% of mammalian species and are the only mammals capable of true powered flight. Owing to these specialized phenotypic traits, much research has been devoted to examining the evolution of bats. To date, several whole-genome sequences of bats have been assembled and annotated; however, a unified resource for the annotated bat genomes has been unavailable. To make the extensive data associated with bat genomes accessible to the general biological community, we established the Bat Genome Database (BGD). BGD is an open-access, web-available portal that integrates available data on bat genomes and genes. It hosts data from six bat species, including two megabats and four microbats. Users can query the gene annotations using an efficient search engine, and the portal offers browsable tracks of the bat genomes. Furthermore, an easy-to-use phylogenetic analysis tool is provided to facilitate online phylogenetic study of genes. To the best of our knowledge, BGD is the first database of bat genomes. It will extend our understanding of bat evolution and aid the analysis of bat sequences. BGD is freely available at: http://donglab.ecnu.edu.cn/databases/BatGenome/.
A database of charged cosmic rays
NASA Astrophysics Data System (ADS)
Maurin, D.; Melot, F.; Taillet, R.
2014-09-01
Aims: This paper gives a description of a new online database and associated online tools (data selection, data export, plots, etc.) for charged cosmic-ray measurements. The experimental setups (type, flight dates, techniques) from which the data originate are included in the database, along with the references to all relevant publications. Methods: The database relies on the MySQL5 engine. The web pages and queries are based on PHP, AJAX and the jquery, jquery.cluetip, jquery-ui, and table-sorter third-party libraries. Results: In this first release, we restrict ourselves to Galactic cosmic rays with Z ≤ 30 and a kinetic energy per nucleon up to a few tens of TeV/n. This corresponds to more than 200 different sub-experiments (i.e., different experiments, or data from the same experiment flying at different times) in as many publications. Conclusions: We set up a cosmic-ray database (CRDB) and provide tools to sort and visualise the data. New data can be submitted, providing the community with a collaborative tool to archive past and future cosmic-ray measurements. http://lpsc.in2p3.fr/crdb; Contact: crdatabase@lpsc.in2p3.fr
Seismic Search Engine: A distributed database for mining large scale seismic data
NASA Astrophysics Data System (ADS)
Liu, Y.; Vaidya, S.; Kuzma, H. A.
2009-12-01
The International Monitoring System (IMS) of the CTBTO collects terabytes worth of seismic measurements from many receiver stations situated around the earth with the goal of detecting underground nuclear testing events and distinguishing them from other benign, but more common events such as earthquakes and mine blasts. The International Data Center (IDC) processes and analyzes these measurements, as they are collected by the IMS, to summarize event detections in daily bulletins. Thereafter, the data measurements are archived into a large format database. Our proposed Seismic Search Engine (SSE) will facilitate a framework for data exploration of the seismic database as well as the development of seismic data mining algorithms. Analogous to GenBank, the annotated genetic sequence database maintained by NIH, through SSE, we intend to provide public access to seismic data and a set of processing and analysis tools, along with community-generated annotations and statistical models to help interpret the data. SSE will implement queries as user-defined functions composed from standard tools and models. Each query is compiled and executed over the database internally before reporting results back to the user. Since queries are expressed with standard tools and models, users can easily reproduce published results within this framework for peer-review and making metric comparisons. As an illustration, an example query is “what are the best receiver stations in East Asia for detecting events in the Middle East?” Evaluating this query involves listing all receiver stations in East Asia, characterizing known seismic events in that region, and constructing a profile for each receiver station to determine how effective its measurements are at predicting each event. The results of this query can be used to help prioritize how data is collected, identify defective instruments, and guide future sensor placements.
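The example query above ("best receiver stations in East Asia for detecting events in the Middle East") illustrates the proposed style of queries as user-defined functions executed over the database. A toy version, with invented station and event data, might look like this:

```python
# Sketch of an SSE-style query: a user-defined function composed from
# standard operations, executed over a (toy) table of detection records.
detections = [
    {"station": "STA1", "region": "East Asia", "event": "ev1", "detected": True},
    {"station": "STA1", "region": "East Asia", "event": "ev2", "detected": False},
    {"station": "STA2", "region": "East Asia", "event": "ev1", "detected": True},
    {"station": "STA2", "region": "East Asia", "event": "ev2", "detected": True},
]

def best_stations(db, region):
    """Rank stations in a region by the fraction of events they detected."""
    stats = {}
    for d in db:
        if d["region"] != region:
            continue
        hit, total = stats.get(d["station"], (0, 0))
        stats[d["station"]] = (hit + d["detected"], total + 1)
    return sorted(stats, key=lambda s: stats[s][0] / stats[s][1], reverse=True)

print(best_stations(detections, "East Asia"))  # ['STA2', 'STA1']
```

Because the query is an ordinary function over shared data, another user can rerun it unchanged, which is the reproducibility property the abstract highlights for peer review and metric comparisons.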
Access to Space Interactive Design Web Site
NASA Technical Reports Server (NTRS)
Leon, John; Cutlip, William; Hametz, Mark
2000-01-01
The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
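The common paradigm behind these search engines, in silico digestion of database proteins followed by matching against observed spectra, can be reduced to a precursor-mass sketch. Real engines also score fragment ion spectra and control false discovery rates; the sequences and tolerance below are invented:

```python
# Monoisotopic residue masses for the amino acids used in this toy example.
AA_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "K": 128.09496, "R": 156.10111, "L": 113.08406}
WATER = 18.01056

def tryptic_peptides(protein):
    """Cleave after K or R (ignoring the proline rule for brevity)."""
    peptides, current = [], ""
    for aa in protein:
        current += aa
        if aa in "KR":
            peptides.append(current)
            current = ""
    if current:
        peptides.append(current)
    return peptides

def peptide_mass(peptide):
    return sum(AA_MASS[aa] for aa in peptide) + WATER

def search(observed_mass, database, tol=0.02):
    """Return database peptides whose mass matches the observed precursor."""
    return [pep for prot in database for pep in tryptic_peptides(prot)
            if abs(peptide_mass(pep) - observed_mass) <= tol]

db = ["GASPK", "LLKAR"]
target = peptide_mass("GASPK")
print(search(target, db))  # ['GASPK']
```

The limitation the review discusses follows directly from this structure: only peptides present in the reference database can ever be matched, which is why alternative and complementary identification approaches exist.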
Reynolds, Julie A; Thaiss, Christopher; Katkin, Wendy; Thompson, Robert J
2012-01-01
Despite substantial evidence that writing can be an effective tool to promote student learning and engagement, writing-to-learn (WTL) practices are still not widely implemented in science, technology, engineering, and mathematics (STEM) disciplines, particularly at research universities. Two major deterrents to progress are the lack of a community of science faculty committed to undertaking and applying the necessary pedagogical research, and the absence of a conceptual framework to systematically guide study designs and integrate findings. To address these issues, we undertook an initiative, supported by the National Science Foundation and sponsored by the Reinvention Center, to build a community of WTL/STEM educators who would undertake a heuristic review of the literature and formulate a conceptual framework. In addition to generating a searchable database of empirically validated and promising WTL practices, our work lays the foundation for multi-university empirical studies of the effectiveness of WTL practices in advancing student learning and engagement.
Changes in Exercise Data Management
NASA Technical Reports Server (NTRS)
Buxton, R. E.; Kalogera, K. L.; Hanson, A. M.
2018-01-01
The suite of exercise hardware aboard the International Space Station (ISS) generates an immense amount of data. The data collected from the treadmill, cycle ergometer, and resistance strength training hardware are basic exercise parameters (time, heart rate, speed, load, etc.). The raw data are post-processed in the laboratory, and more detailed parameters are calculated from each exercise data file. Updates have recently been made to how these valuable data are stored, adding an additional level of data security, increasing data accessibility, and resulting in overall increased efficiency of medical report delivery. Questions regarding exercise performance, or how exercise may influence other variables of crew health, frequently arise within the crew health care community. Inquiries about the health of the exercise hardware often need quick analysis and response to ensure the exercise system is operable on a continuous basis. Consolidating all of the exercise system data in a single repository enables a quick response to both the medical and engineering communities. A SQL Server database is currently in use and provides a secure location for all of the exercise data from ISS Expedition 1 to the current day. The database has been structured to update derived metrics automatically, making analysis and reporting available within minutes of loading the in-flight data into the database. Commercial tools were evaluated to help aggregate and visualize data from the SQL database. The Tableau software provides a manageable interface, which has improved the laboratory's output time of crew reports by 67%. Expansion of the SQL database to include additional medical requirement metrics, the addition of 'app-like' tools for mobile visualization, and collaborative use of the data system (e.g., by operational support teams, research groups, and International Partners) are currently being explored.
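The automatic update of derived metrics on ingest that the abstract describes can be sketched with a database trigger. This is a minimal illustration using SQLite; the schema, column names, and the derived metric are assumptions for the example, not the actual ISS exercise database:

```python
import sqlite3

# Hypothetical sketch: raw exercise sessions are ingested, and derived
# metrics are computed automatically at insert time via a trigger.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE exercise_session (
    session_id INTEGER PRIMARY KEY,
    crew_id TEXT, device TEXT,        -- e.g. treadmill, cycle ergometer
    duration_s REAL, mean_hr REAL, load_kg REAL
);
CREATE TABLE derived_metric (
    session_id INTEGER REFERENCES exercise_session,
    name TEXT, value REAL
);
-- Trigger emulates the automatic update of derived metrics on ingest.
CREATE TRIGGER compute_metrics AFTER INSERT ON exercise_session
BEGIN
    INSERT INTO derived_metric VALUES
        (NEW.session_id, 'hr_minutes', NEW.mean_hr * NEW.duration_s / 60.0);
END;
""")
conn.execute("INSERT INTO exercise_session VALUES (1, 'crew-A', 'treadmill', 1800, 140, 0)")
row = conn.execute("SELECT value FROM derived_metric WHERE session_id = 1").fetchone()
print(row[0])  # 140 bpm over a 30-minute session -> 4200.0
```

Because the metric is materialized at insert time, a reporting layer (such as the Tableau front end mentioned above) can query `derived_metric` directly without re-running the post-processing.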
OntoMate: a text-mining tool aiding curation at the Rat Genome Database
Liu, Weisong; Laulederkind, Stanley J. F.; Hayman, G. Thomas; Wang, Shur-Jen; Nigam, Rajni; Smith, Jennifer R.; De Pons, Jeff; Dwinell, Melinda R.; Shimoyama, Mary
2015-01-01
The Rat Genome Database (RGD) is the premier repository of rat genomic, genetic and physiologic data. Converting data from free text in the scientific literature to a structured format is one of the main tasks of all model organism databases. RGD spends considerable effort manually curating gene, Quantitative Trait Locus (QTL) and strain information. The rapidly growing volume of biomedical literature and the active research in the biological natural language processing (bioNLP) community have given RGD the impetus to adopt text-mining tools to improve curation efficiency. Recently, RGD has initiated a project to use OntoMate, an ontology-driven, concept-based literature search engine developed at RGD, as a replacement for the PubMed (http://www.ncbi.nlm.nih.gov/pubmed) search engine in the gene curation workflow. OntoMate tags abstracts with gene names, gene mutations, organism name and most of the 16 ontologies/vocabularies used at RGD. All terms/entities tagged to an abstract are listed with the abstract in the search results. All listed terms are linked both to data entry boxes and a term browser in the curation tool. OntoMate also provides user-activated filters for species, date and other parameters relevant to the literature search. Using the system for literature search and import has streamlined the process compared to using PubMed. The system was built with a scalable and open architecture, including features specifically designed to accelerate the RGD gene curation process. With the use of bioNLP tools, RGD has added more automation to its curation workflow. Database URL: http://rgd.mcw.edu PMID:25619558
Software Engineering Laboratory (SEL) database organization and user's guide, revision 2
NASA Technical Reports Server (NTRS)
Morusiewicz, Linda; Bristow, John
1992-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base table is described. In addition, techniques for accessing the database through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL) are discussed.
Software Engineering Laboratory (SEL) database organization and user's guide
NASA Technical Reports Server (NTRS)
So, Maria; Heller, Gerard; Steinberg, Sandra; Spiegel, Douglas
1989-01-01
The organization of the Software Engineering Laboratory (SEL) database is presented. Included are definitions and detailed descriptions of the database tables and views, the SEL data, and system support data. The mapping from the SEL and system support data to the base tables is described. In addition, techniques for accessing the database, through the Database Access Manager for the SEL (DAMSEL) system and via the ORACLE structured query language (SQL), are discussed.
TRENDS: The aeronautical post-test database management system
NASA Technical Reports Server (NTRS)
Bjorkman, W. S.; Bondi, M. J.
1990-01-01
TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed, and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.
Mielczarek, A T; Saunders, A M; Larsen, P; Albertsen, M; Stevenson, M; Nielsen, J L; Nielsen, P H
2013-01-01
Since 2006 more than 50 Danish full-scale wastewater treatment plants with nutrient removal have been investigated in a project called 'The Microbial Database for Danish Activated Sludge Wastewater Treatment Plants with Nutrient Removal (MiDas-DK)'. Comprehensive sets of samples have been collected, analyzed and associated with extensive operational data from the plants. The community composition was analyzed by quantitative fluorescence in situ hybridization (FISH) supported by 16S rRNA amplicon sequencing and deep metagenomics. MiDas-DK has been a powerful tool to study the complex activated sludge ecosystems, and, in addition to many scientific articles on fundamental issues in mixed communities encompassing nitrifiers, denitrifiers, and bacteria involved in P-removal, hydrolysis, fermentation, and foaming, the project has provided results that can be used to optimize the operation of full-scale plants and to carry out troubleshooting. A core microbial community has been defined comprising the majority of microorganisms present in the plants. Time series have been established, providing an overview of temporal variations in the different plants. Interestingly, although most microorganisms were present in all plants, there seemed to be plant-specific factors that controlled the population composition, thereby keeping it unique in each plant over time. Statistical analyses of FISH and operational data revealed some correlations, but fewer than expected. MiDas-DK (www.midasdk.dk) will continue over the next years, and we hope the approach can inspire similar projects in other parts of the world to achieve a more comprehensive understanding of microbial communities in wastewater engineering.
On the Compliance of Women Engineers with a Gendered Scientific System
Ghiasi, Gita; Larivière, Vincent; Sugimoto, Cassidy R.
2015-01-01
There has been considerable effort in the last decade to increase the participation of women in engineering through various policies. However, there has been little empirical research on gender disparities in engineering to underpin the effective preparation, coordination, and implementation of science and technology (S&T) policies. This article aims to present a comprehensive gendered analysis of engineering publications across different specialties and to provide a cross-gender analysis of the research output and scientific impact of engineering researchers in academic, governmental, and industrial sectors. For this purpose, 679,338 engineering articles published from 2008 to 2013 are extracted from the Web of Science database and 974,837 authorships are analyzed. The structures of co-authorship collaboration networks in different engineering disciplines are examined, highlighting the role of female scientists in the diffusion of knowledge. The findings reveal that men dominate 80% of all the scientific production in engineering. Women engineers publish their papers in journals with higher Impact Factors than their male peers, but their work receives lower recognition (fewer citations) from the scientific community. Engineers—regardless of their gender—contribute to the reproduction of the male-dominated scientific structures through forming and repeating their collaborations predominantly with men. The results of this study call for the integration of data-driven gender-related policies into existing S&T discourse. PMID:26716831
Porter, K.A.; Jaiswal, K.S.; Wald, D.J.; Greene, M.; Comartin, Craig
2008-01-01
The U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) Project and the Earthquake Engineering Research Institute's World Housing Encyclopedia (WHE) are creating a global database of building stocks and their earthquake vulnerability. The WHE already represents a growing, community-developed public database of global housing and its detailed structural characteristics. It currently contains more than 135 reports on particular housing types in 40 countries. The WHE-PAGER effort extends the WHE in several ways: (1) by addressing non-residential construction; (2) by quantifying the prevalence of each building type in both rural and urban areas; (3) by addressing day and night occupancy patterns; (4) by adding quantitative vulnerability estimates from judgment or statistical observation; and (5) by analytically deriving alternative vulnerability estimates using, in part, laboratory testing.
SAADA: Astronomical Databases Made Easier
NASA Astrophysics Data System (ADS)
Michel, L.; Nguyen, H. N.; Motch, C.
2005-12-01
Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC, covered by a Java layer that includes a large amount of generated code. Such databases can simultaneously host spectra, images, source lists, and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with one another using qualified links. These links help, for example, to characterize the nature of a cross-identification (e.g., by a distance or a likelihood) or to describe its scientific content (e.g., by associating a spectrum with a catalog entry). The SAADA query engine is based on a language well suited to the data model, which can handle constraints on linked data in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class, and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
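The idea of qualified links, and of queries that constrain the link qualifiers rather than only the linked objects, can be sketched in a few lines. This is an illustrative model only, not SAADA's actual Java API; the record fields and qualifier names are assumptions:

```python
from dataclasses import dataclass, field

# Typed collections whose members are connected by *qualified* links, so a
# query can constrain both the linked object and the link's qualifiers.
@dataclass
class Record:
    collection: str                 # e.g. "sources", "spectra"
    attrs: dict
    links: list = field(default_factory=list)   # (target, qualifiers)

def link(src, dst, **qualifiers):
    src.links.append((dst, qualifiers))

# A catalog entry cross-identified with a spectrum; the qualifier records
# the likelihood of the cross-identification, as described in the text.
source = Record("sources", {"name": "XMM J1234"})
spectrum = Record("spectra", {"exposure_s": 2500.0})
link(source, spectrum, likelihood=0.93)

def query(records, collection, link_pred):
    """Return records in a collection having at least one link whose
    qualifiers satisfy link_pred -- a constraint on the link, not the node."""
    return [r for r in records if r.collection == collection
            and any(link_pred(q) for _, q in r.links)]

hits = query([source, spectrum], "sources", lambda q: q.get("likelihood", 0) > 0.9)
print(len(hits))  # the one source with a high-likelihood cross-identification
```

The design point is that the qualifier lives on the link, not on either record, so the same spectrum can be cross-identified with several sources at different likelihoods.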
The LAILAPS search engine: a feature model for relevance ranking in life science databases.
Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe
2010-03-25
Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The incredible growth of life science databases into a vast network of interconnected information systems is to the same extent a big challenge and a great chance for life science research. The knowledge found on the Web, in particular in life-science databases, is a major and valuable resource. In order to bring it to the scientist's desktop, well-performing search engines are essential. Here, neither the response time nor the number of results is the decisive factor; for millions of query results, the most crucial factor is the relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observing user behavior during the inspection of search engine results, we condensed a set of nine relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. The derivation of a relevance prediction function that computes the relevance from these features constitutes a regression problem. To solve this problem, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
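The core of such feature-based relevance ranking can be sketched as follows: each database entry is reduced to a small feature vector, and a learned function maps it to a relevance score used for sorting. The nine features and the trained network of the actual LAILAPS engine are not reproduced here; the feature names and weights below are illustrative assumptions:

```python
import math

FEATURES = ["term_frequency", "field_weight", "entry_length_norm"]

def relevance(entry, weights, bias):
    # Single sigmoid unit standing in for the trained neural network;
    # output in (0, 1) is interpreted as predicted relevance.
    z = sum(w * entry[f] for w, f in zip(weights, FEATURES)) + bias
    return 1.0 / (1.0 + math.exp(-z))

entries = [
    {"id": "P1", "term_frequency": 0.9, "field_weight": 1.0, "entry_length_norm": 0.4},
    {"id": "P2", "term_frequency": 0.2, "field_weight": 0.3, "entry_length_norm": 0.9},
]
# Weights would normally be fitted by regression on reference judgments.
weights, bias = [2.0, 1.5, -0.5], -1.0
ranked = sorted(entries, key=lambda e: relevance(e, weights, bias), reverse=True)
print([e["id"] for e in ranked])  # ['P1', 'P2']
```

In the paper's framing, fitting `weights` and `bias` against a reference set of relevant entries is exactly the regression problem the neural network solves.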
NSWC Crane Aerospace Cell Test History Database
NASA Technical Reports Server (NTRS)
Brown, Harry; Moore, Bruce
1994-01-01
The Aerospace Cell Test History Database was developed to provide project engineers and scientists ready access to the data obtained from testing of aerospace cell designs at Naval Surface Warfare Center, Crane Division. The database is intended for use by all aerospace engineers and scientists involved in the design of power systems for satellites. Specifically, the database will provide a tool for project engineers to review the progress of their test at Crane and to have ready access to data for evaluation. Additionally, the database will provide a history of test results that designers can draw upon to answer questions about cell performance under certain test conditions and aid in selection of a cell for a satellite battery. Viewgraphs are included.
Comet: an open-source MS/MS sequence database search tool.
Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R
2013-01-01
Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Solar Sail Propulsion Technology Readiness Level Database
NASA Technical Reports Server (NTRS)
Adams, Charles L.
2004-01-01
The NASA In-Space Propulsion Technology (ISPT) Projects Office has been sponsoring two solar sail system design and development hardware demonstration activities over the past 20 months. Able Engineering Company (AEC) of Goleta, CA is leading one team and L'Garde, Inc. of Tustin, CA is leading the other. Component, subsystem, and system fabrication and testing have been completed successfully. The goal of these activities is to advance the technology readiness level (TRL) of solar sail propulsion from 3 towards 6 by 2006. These activities will culminate in the deployment and testing of 20-meter solar sail system ground demonstration hardware in the 30-meter-diameter thermal-vacuum chamber at NASA Glenn's Plum Brook Station in 2005. This paper describes the features of a computer database system that documents the results of the solar sail development activities to date. Illustrations of the hardware components and systems, test results, analytical models, the relevant space environment definition, and the current TRL assessment, as stored and manipulated within the database, are presented. This database could serve as a central repository for all data related to the advancement of solar sail technology sponsored by the ISPT, providing an up-to-date assessment of the TRL of this technology. Current plans are to eventually make the database available to the solar sail community through the Space Transportation Information Network (STIN).
Permafrost Young Researchers Get Their Hands Dirty: The PYRN-Thermal State of Permafrost IPY Project
NASA Astrophysics Data System (ADS)
Johansson, M.; Lantuit, H.
2009-04-01
The Permafrost Young Researchers Network (PYRN) (www.pyrn.org) is a unique resource for students and young scientists and engineers studying permafrost. It is an international organization fostering innovative collaboration, seeking to recruit, retain, and promote future generations of permafrost scientists and engineers. Initiated for and during IPY, PYRN directs the multi-disciplinary talents of its membership toward global awareness, knowledge, and response to permafrost-related challenges in a changing climate. Created as an education and outreach component of the International Permafrost Association (IPA), PYRN is a central database of permafrost information and science for more than 500 young researchers from over 40 countries. PYRN distributes a newsletter, recognizes outstanding permafrost research by its members through an annual awards program, organizes training workshops (2007 in Abisko, Sweden, and St. Petersburg, Russia; 2008 in Fairbanks, Alaska, and St. Petersburg, Russia), and contributes to the growth and future of the permafrost community. While networking forms the basis of PYRN's activities, the organization also seeks to establish itself as a driver of permafrost research for the IPY and beyond. We recently launched a series of initiatives on several continents aimed at providing young scientists and engineers with the means to conduct ground temperature monitoring in under-investigated permafrost regions. Focusing on sites not currently covered by the IPA's "Thermal State of Permafrost" project, the young investigators of PYRN successfully launched and funded the PYRN-TSP project. They use lightweight drills and temperature sensors to instrument shallow boreholes in those regions. The first phase of the project was started in the spring of 2008 at Scandinavian sites.
The data and results will be incorporated in the global database on permafrost temperatures and made freely available to the scientific community, thereby contributing to the advance of permafrost science and the strengthening of the next generation of permafrost researchers.
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
ERIC Educational Resources Information Center
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan
2016-01-01
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…
ERIC Educational Resources Information Center
Turner, Laura
2001-01-01
Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…
Astronaut Demographic Database: Everything You Want to Know About Astronauts and More
NASA Technical Reports Server (NTRS)
Keeton, Kathryn; Patterson, Holly
2011-01-01
A wealth of information regarding the astronaut population is available that could be especially useful to researchers. However, until now, it has been difficult to obtain that information in a systematic way. Therefore, this "astronaut database" began as a way for researchers within the Behavioral Health and Performance Group to keep track of the ever-growing astronaut corps population. Before our effort, such data could be found, but not in a way that was easily acquired or accessed. One would have to use internet search engines, read through lengthy and potentially inaccurate informational sites, or read through astronaut biographies compiled by NASA. Astronauts are a unique class of individuals and, by examining such information, which we dubbed "Demographics," we hoped to find commonalities that may be useful for other research areas and future research topics. By organizing the information pertaining to astronauts in a formal, unified catalog, we believe we have made the information more easily accessible, readily useable, and user friendly. Our end goal is to provide this database to others as a highly functional resource within the research community. Perhaps the database can eventually become an official, published document for researchers to gain full access.
Drinking water quality in Indigenous communities in Canada and health outcomes: a scoping review.
Bradford, Lori E A; Okpalauwaekwe, Udoka; Waldner, Cheryl L; Bharadwaj, Lalita A
2016-01-01
Many Indigenous communities in Canada live with high-risk drinking water systems and drinking water advisories and experience health status and water quality below those of the general population. A scoping review of research examining drinking water quality and its relationship to Indigenous health was conducted. The study was undertaken to identify the extent of the literature, summarize current reports and identify research needs. A scoping review was designed to identify peer-reviewed literature that examined challenges related to drinking water and health in Indigenous communities in Canada. Key search terms were developed and mapped on five bibliographic databases (MEDLINE/PubMed, Web of Knowledge, SciVerse Scopus, Taylor and Francis online journals and Google Scholar). Online searches for grey literature using relevant government websites were completed. Sixteen articles (of 518: 156 from bibliographic search engines, 362 from grey literature) met the criteria for inclusion (contained keywords; publication year 2000-2015; peer-reviewed and from Canada). Studies were quantitative (8), qualitative (5) or mixed (3) and included case, cohort, cross-sectional and participatory designs. In most articles, no definition of "health" was given (14/16), and the primary health issue described was gastrointestinal illness (12/16). Challenges to the study of health and well-being with respect to drinking water in Indigenous communities included irregular funding, remote locations, ethical approval processes, small sample sizes and missing data. Research on drinking water and health outcomes in Indigenous communities in Canada is limited and occurs on an opportunistic basis. There is a need for more research funding and inquiry to inform policy decisions for improvements of water quality and health-related outcomes in Indigenous communities.
A coordinated network looking at First Nations water and health outcomes, a database to store and create access to research findings, increased funding and time frames for funding, and more decolonizing and community-based participatory research aimed at understanding the relationship between drinking water quality and health outcomes in First Nations communities in Canada are needed.
Development of a Data Citations Database for an Interdisciplinary Data Center
NASA Astrophysics Data System (ADS)
Chen, R. S.; Downs, R. R.; Schumacher, J.; Gerard, A.
2017-12-01
The scientific community has long depended on consistent citation of the scientific literature to enable traceability, support replication, and facilitate analysis and debate about scientific hypotheses, theories, assumptions, and conclusions. However, only in the past few years has the community focused on consistent citation of scientific data, e.g., through the application of Digital Object Identifiers (DOIs) to data, the development of peer-reviewed data publications, community principles and guidelines, and other mechanisms. This means that, moving ahead, it should be easier to identify and track data citations and conduct systematic bibliometric studies. However, this still leaves the problem that many legacy datasets and past citations lack DOIs, making it difficult to develop a historical baseline or assess trends. With this in mind, the NASA Socioeconomic Data and Applications Center (SEDAC) has developed a searchable citations database, containing more than 3,400 citations of SEDAC data and information products over the past 20 years. These citations were collected through various indices and search tools and in some cases through direct contacts with authors. The citations come from a range of natural, social, health, and engineering science journals, books, reports, and other media. The database can be used to find and extract citations filtered by a range of criteria, enabling quantitative analysis of trends, intercomparisons between data collections, and categorization of citations by type. We present a preliminary analysis of citations for selected SEDAC data collections, in order to establish a baseline and assess options for ongoing metrics to track the impact of SEDAC data on interdisciplinary science. We also present an analysis of the uptake of DOIs within data citations reported in published studies that used SEDAC data.
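The filtered extraction and trend analysis that the citations database supports can be sketched with a few lines over a small record set. The fields and values below are made-up examples, not SEDAC's actual schema or collection names:

```python
from collections import Counter

# Toy citation records: year of publication, cited data collection, media type.
citations = [
    {"year": 2015, "collection": "gpw", "type": "journal"},
    {"year": 2016, "collection": "gpw", "type": "report"},
    {"year": 2016, "collection": "ndh", "type": "journal"},
]

def filter_citations(records, **criteria):
    # Keep records matching every supplied field=value criterion.
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# Trend: citations per year for one data collection, the kind of
# quantitative analysis and intercomparison described above.
per_year = Counter(r["year"] for r in filter_citations(citations, collection="gpw"))
print(dict(per_year))
```

Counting per filtered slice (by year, collection, or citation type) is what enables the baseline and trend metrics discussed in the abstract.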
Biomedical Engineering curriculum at UAM-I: a critical review.
Martinez Licona, Fabiola; Azpiroz-Leehan, Joaquin; Urbina Medal, E Gerardo; Cadena Mendez, Miguel
2014-01-01
The Biomedical Engineering (BME) curriculum at Universidad Autónoma Metropolitana (UAM) has undergone at least four major transformations since the founding of the BME undergraduate program in 1974. This work is a critical assessment of the curriculum from the point of view of its results as derived from an analysis of, among other resources, institutional databases on students, graduates and their academic performance. The results of the evaluation can help us define admission policies as well as reasonable limits on the maximum duration of undergraduate studies. Other results linked to the faculty composition and the social environment can be used to define a methodology for the evaluation of teaching and the implementation of mentoring and tutoring programs. Changes resulting from this evaluation may be the only way to assure and maintain leadership and recognition from the BME community.
NASA Interactive Forms Type Interface - NIFTI
NASA Technical Reports Server (NTRS)
Jain, Bobby; Morris, Bill
2005-01-01
A flexible database query, update, modify, and delete tool was developed that provides an easy interface to Oracle forms. This tool - the NASA interactive forms type interface, or NIFTI - features on-the-fly forms creation, forms sharing among users, the capability to query the database from user-entered criteria on forms, traversal of query results, an ability to generate tab-delimited reports, viewing and downloading of reports to the user's workstation, and a hypertext-based help system. NIFTI is a very powerful ad hoc query tool that was developed using C++ and X-Windows with a Motif application framework. A unique tool, NIFTI's capabilities appear in no other known commercial-off-the-shelf (COTS) tool, because NIFTI, which can be launched from the user's desktop, is a simple yet very powerful tool with a highly intuitive, easy-to-use graphical user interface (GUI) that will expedite the creation of database query/update forms. NIFTI, therefore, can be used in NASA's International Space Station (ISS) as well as within government and industry - indeed by all users of the widely disseminated Oracle base. And it will provide significant cost savings in the areas of user training and scalability while advancing the art over current COTS browsers. No COTS browser performs all the functions NIFTI does, and NIFTI is easier to use. NIFTI's cost savings are very significant considering the very large database with which it is used and the large user community with varying data requirements it will support. Its ease of use means that personnel unfamiliar with databases (e.g., managers, supervisors, clerks, and others) can develop their own personal reports. For NASA, a tool such as NIFTI was needed to query, update, modify, and make deletions within the ISS vehicle master database (VMDB), a repository of engineering data that includes an indentured parts list and associated resource data (power, thermal, volume, weight, and the like).
Since the VMDB is used both as a collection point for data and as a common repository for engineering, integration, and operations teams, a tool such as NIFTI had to be designed that could expedite the creation of database query/update forms which could then be shared among users.
Beach nourishment in the USA, the history, the impacts, and the future
NASA Astrophysics Data System (ADS)
Young, Robert; Coburn, Andrew
2017-04-01
Currently, the primary tool being used at the local, state, and federal level in the USA to adapt to rising sea level, and to reduce potential storm damage is the addition of sand to the coastal system in the form of engineered beaches and dunes (commonly referred to as beach nourishment or beach replenishment). At the Program for the Study of Developed Shorelines, we have built a comprehensive database of all beach dredge and fill projects in the USA. The database tracks a history of beach projects that date back to 1923 with continual updates as new projects are implemented today. The projects in the database represent the movement of over 950 million cubic meters of sand covering over 3700 km of shoreline. This massive program of shoreline stabilization is being carried out with little long-term vision or planning, and no consideration for the cumulative environmental impacts of mining and placing so much sand. It is no exaggeration to say that a significant portion of the US East and Gulf Coasts are now completely artificial constructs, with engineering replacing natural processes. Along many shorelines, beach nourishment has become unsustainable as sand sources diminish. In addition, the cost of moving the sand has increased dramatically as communities scramble to build beaches and dunes. This program is not sustainable into the future, but there has been no widespread recognition of this reality, nor any move towards sensible retreat from the coast.
Common Database Interface for Heterogeneous Software Engineering Tools.
1987-12-01
Descriptors: Database Management Systems; Programming (Computers); Computer Files; Information Transfer; Interfaces. Thesis presented to the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Information Systems. Contents include the System 690 configuration, database functions, software engineering environments, and the data manager.
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identifying proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take MS/MS spectra and a protein sequence database as input are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of them have excellent user documentation. The aim here is therefore to outline the algorithmic strategies behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow a fair comparison. In other words, an algorithmic idea can still be worth considering even if its software implementation has been shown to be suboptimal. The aim of this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps, whereas the final step, protein inference, is much less developed in most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses SIR, a stand-alone program for protein inference that can import a Mascot search result.
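At the heart of the peptide-scoring step described above is matching observed peaks against theoretical fragment ions. The toy scheme below simply counts matches within a mass tolerance; it is an illustration of the idea, not the scoring function of VEMS, Mascot, or any particular engine, and the m/z values are made up:

```python
def match_score(spectrum_mz, theoretical_mz, tol=0.5):
    """Count theoretical fragment ions matched by an observed peak
    within a mass tolerance (a toy peptide scoring scheme)."""
    matched = 0
    for t in theoretical_mz:
        if any(abs(obs - t) <= tol for obs in spectrum_mz):
            matched += 1
    return matched

observed = [147.1, 276.2, 405.1, 533.9]       # hypothetical observed peaks
theoretical = [147.11, 276.15, 404.2, 534.3, 662.4]  # hypothetical fragment ions
print(match_score(observed, theoretical))     # three ions fall within 0.5 Da
```

Real engines refine this raw count into a probability- or likelihood-based score, which is exactly where their algorithmic strategies diverge.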
Community Organizing for Database Trial Buy-In by Patrons
ERIC Educational Resources Information Center
Pionke, J. J.
2015-01-01
Database trials do not often garner a lot of feedback. Using community-organizing techniques can not only potentially increase the amount of feedback received but also deepen the relationship between the librarian and his or her constituent group. This is a case study of the use of community-organizing techniques in a series of database trials for…
Lowe, H. J.
1993-01-01
This paper describes Image Engine, an object-oriented, microcomputer-based, multimedia database designed to facilitate the storage and retrieval of digitized biomedical still images, video, and text using inexpensive desktop computers. The current prototype runs on Apple Macintosh computers and allows network database access via peer-to-peer file-sharing protocols. Image Engine supports both free-text and controlled vocabulary indexing of multimedia objects. The latter is implemented using the TView thesaurus model developed by the author. The current prototype of Image Engine uses the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary (with UMLS Meta-1 extensions) as its indexing thesaurus. PMID:8130596
Writing-to-Learn in Undergraduate Science Education: A Community-Based, Conceptually Driven Approach
Reynolds, Julie A.; Thaiss, Christopher; Katkin, Wendy; Thompson, Robert J.
2012-01-01
Despite substantial evidence that writing can be an effective tool to promote student learning and engagement, writing-to-learn (WTL) practices are still not widely implemented in science, technology, engineering, and mathematics (STEM) disciplines, particularly at research universities. Two major deterrents to progress are the lack of a community of science faculty committed to undertaking and applying the necessary pedagogical research, and the absence of a conceptual framework to systematically guide study designs and integrate findings. To address these issues, we undertook an initiative, supported by the National Science Foundation and sponsored by the Reinvention Center, to build a community of WTL/STEM educators who would undertake a heuristic review of the literature and formulate a conceptual framework. In addition to generating a searchable database of empirically validated and promising WTL practices, our work lays the foundation for multi-university empirical studies of the effectiveness of WTL practices in advancing student learning and engagement. PMID:22383613
Preparing Engineers for the Challenges of Community Engagement
ERIC Educational Resources Information Center
Harsh, Matthew; Bernstein, Michael J.; Wetmore, Jameson; Cozzens, Susan; Woodson, Thomas; Castillo, Rafael
2017-01-01
Despite calls to address global challenges through community engagement, engineers are not formally prepared to engage with communities. Little research has been done on means to address this "engagement gap" in engineering education. We examine the efficacy of an intensive, two-day Community Engagement Workshop for engineers, designed…
Foster, Joseph M; Moreno, Pablo; Fabregat, Antonio; Hermjakob, Henning; Steinbeck, Christoph; Apweiler, Rolf; Wakelam, Michael J O; Vizcaíno, Juan Antonio
2013-01-01
Protein sequence databases are the pillar upon which modern proteomics is supported, representing a stable reference space of predicted and validated proteins. One example of such resources is UniProt, enriched with both expertly curated and automatic annotations. Taken largely for granted, similar mature resources such as UniProt are not available yet in some other "omics" fields, lipidomics being one of them. While having a seasoned community of wet lab scientists, lipidomics lies significantly behind proteomics in the adoption of data standards and other core bioinformatics concepts. This work aims to reduce the gap by developing an equivalent resource to UniProt called 'LipidHome', providing theoretically generated lipid molecules and useful metadata. Using the 'FASTLipid' Java library, a database was populated with theoretical lipids, generated from a set of community agreed upon chemical bounds. In parallel, a web application was developed to present the information and provide computational access via a web service. Designed specifically to accommodate high throughput mass spectrometry based approaches, lipids are organised into a hierarchy that reflects the variety in the structural resolution of lipid identifications. Additionally, cross-references to other lipid related resources and papers that cite specific lipids were used to annotate lipid records. The web application encompasses a browser for viewing lipid records and a 'tools' section where an MS1 search engine is currently implemented. LipidHome can be accessed at http://www.ebi.ac.uk/apweiler-srv/lipidhome.
Salemi, Jason L; Salinas-Miranda, Abraham A; Wilson, Roneé E; Salihu, Hamisu M
2015-01-01
Objective To describe the use of a clinically enhanced maternal and child health (MCH) database to strengthen community-engaged research activities, and to support the sustainability of data infrastructure initiatives. Data Sources/Study Setting Population-based, longitudinal database covering over 2.3 million mother–infant dyads during a 12-year period (1998–2009) in Florida. Setting: A community-based participatory research (CBPR) project in a socioeconomically disadvantaged community in central Tampa, Florida. Study Design Case study of the use of an enhanced state database for supporting CBPR activities. Principal Findings A federal data infrastructure award resulted in the creation of an MCH database in which over 92 percent of all birth certificate records for infants born between 1998 and 2009 were linked to maternal and infant hospital encounter-level data. The population-based, longitudinal database was used to supplement data collected from focus groups and community surveys with epidemiological and health care cost data on important MCH disparity issues in the target community. Data were used to facilitate a community-driven, decision-making process in which the most important priorities for intervention were identified. Conclusions Integrating statewide all-payer, hospital-based databases into CBPR can empower underserved communities with a reliable source of health data, and it can promote the sustainability of newly developed data systems. PMID:25879276
NASA Astrophysics Data System (ADS)
Piasecki, M.; Beran, B.
2007-12-01
Search engines have changed the way we use the Internet; the ability to find information by simply typing in keywords was a major contribution to the overall web experience. While conventional search engine methodology works well for textual documents, locating scientific data remains a problem because the data are stored in databases that are not readily accessible to search engine bots. Given the different temporal, spatial, and thematic coverage of different databases, it is typically necessary, especially for interdisciplinary research, to work with multiple data sources. These sources can be federal agencies, which generally offer national coverage, or regional sources, which cover a smaller area in greater detail. For a given geographic area of interest there often exists more than one database with relevant data, so being able to query multiple databases simultaneously is a desirable feature that would be tremendously useful for scientists. Developing such a search engine requires dealing with various heterogeneity issues. Scientific database systems often impose controlled vocabularies, which ensure that they are generally homogeneous internally but semantically heterogeneous with respect to one another. This bounds the possible semantics-related problems, making them easier to solve than in conventional search engines that deal with free text. We have developed a search engine that queries multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity between the underlying systems. The application relies mainly on metadata catalogs or indexing databases, ontologies, and web services, with virtual-globe and AJAX technologies for the graphical user interface. Users can trigger a search of dozens of different parameters over hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent (i.e., a bounding box), and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging themselves, i.e., to link a variable name (which can be anything they desire) to defined concepts in the ontology structure, which in turn provides the backbone of the search engine.
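The core pattern the abstract describes, querying heterogeneous sources in parallel and mapping each source's native records onto one common schema, can be sketched as follows. The fetchers here are stubs and all field names are invented; a real implementation would call each agency's web service:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub fetchers standing in for per-agency web-service clients;
# each returns records in its own native field names.
def fetch_usgs(bbox, variable):
    return [{"site": "01463500", "value": 3.2, "units": "m"}]

def fetch_epa(bbox, variable):
    return [{"station_id": "NJ-22", "obs": 3.4, "unit": "m"}]

def standardize(source, rec):
    """Map heterogeneous native records onto one common schema."""
    if source == "usgs":
        return {"source": source, "station": rec["site"], "value": rec["value"]}
    return {"source": source, "station": rec["station_id"], "value": rec["obs"]}

def federated_search(bbox, variable):
    """Query all sources concurrently and combine standardized results."""
    fetchers = {"usgs": fetch_usgs, "epa": fetch_epa}
    results = []
    with ThreadPoolExecutor() as pool:
        futures = {src: pool.submit(fn, bbox, variable)
                   for src, fn in fetchers.items()}
        for src, fut in futures.items():
            results += [standardize(src, r) for r in fut.result()]
    return results

hits = federated_search(bbox=(-75.3, 40.1, -74.6, 40.5), variable="gage height")
print(sorted(h["station"] for h in hits))
```

Resolving the semantic side (e.g., that "gage height" and "stage" name the same concept) is what the ontology layer in the abstract handles; the sketch only shows the structural mediation.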
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellie, C.L.
This plan establishes the integrated management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford Site Technical Baseline.
Correlated Attack Modeling (CAM)
2003-10-01
describing attack models to a scenario recognition engine, a prototype of such an engine was developed, using components of the EMERALD intrusion...content. Results – The attacker gains information enabling remote access to database (i.e., privileged login information, database layout to allow...engine that uses attack specifications written in CAML. The implementation integrates two advanced technologies devel- oped in the EMERALD program [27, 31
Re-thinking organisms: The impact of databases on model organism biology.
Leonelli, Sabina; Ankeny, Rachel A
2012-03-01
Community databases have become crucial to the collection, ordering and retrieval of data gathered on model organisms, as well as to the ways in which these data are interpreted and used across a range of research contexts. This paper analyses the impact of community databases on research practices in model organism biology by focusing on the history and current use of four community databases: FlyBase, Mouse Genome Informatics, WormBase and The Arabidopsis Information Resource. We discuss the standards used by the curators of these databases for what counts as reliable evidence, acceptable terminology, appropriate experimental set-ups and adequate materials (e.g., specimens). On the one hand, these choices are informed by the collaborative research ethos characterising most model organism communities. On the other hand, the deployment of these standards in databases reinforces this ethos and gives it concrete and precise instantiations by shaping the skills, practices, values and background knowledge required of the database users. We conclude that the increasing reliance on community databases as vehicles to circulate data is having a major impact on how researchers conduct and communicate their research, which affects how they understand the biology of model organisms and its relation to the biology of other species. Copyright © 2011 Elsevier Ltd. All rights reserved.
State analysis requirements database for engineering complex embedded systems
NASA Technical Reports Server (NTRS)
Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.
2004-01-01
It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.
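One way to picture "requirements captured as explicit models" is a state variable whose allowed values are part of the model, so that a command violating the model is caught directly instead of being lost in prose requirements. This is a loose illustrative sketch, not the actual State Analysis formalism or database schema:

```python
# A state variable carrying its own model of allowed values; requirements
# live with the model rather than in free-text documents.
class StateVariable:
    def __init__(self, name, allowed, initial):
        assert initial in allowed
        self.name, self.allowed, self.value = name, set(allowed), initial

    def command(self, target):
        """Reject commands that would drive the state outside its model."""
        if target not in self.allowed:
            raise ValueError(f"{self.name}: {target!r} violates the state model")
        self.value = target

# Hypothetical spacecraft heater mode.
heater = StateVariable("heater_mode", {"off", "standby", "on"}, "off")
heater.command("standby")
print(heater.value)
```

The point of the sketch is the gap-closing move the paper describes: because the systems engineer's intent is encoded in the model itself, software cannot silently diverge from it.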
Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad
2011-03-01
Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.
Experimental Characterization of Gas Turbine Emissions at Simulated Flight Altitude Conditions
NASA Technical Reports Server (NTRS)
Howard, R. P.; Wormhoudt, J. C.; Whitefield, P. D.
1996-01-01
NASA's Atmospheric Effects of Aviation Project (AEAP) is developing a scientific basis for assessing the atmospheric impact of subsonic and supersonic aviation. A primary goal is to assist the assessments of United Nations scientific organizations and, hence, the consideration of emissions standards by the International Civil Aviation Organization (ICAO). Engine tests have been conducted at AEDC to fulfill the needs of AEAP. The purpose of these tests is to obtain a comprehensive database for supplying critical information to the atmospheric research community. It includes: (1) simulated sea-level-static test data as well as simulated altitude data; and (2) intrusive (extractive probe) data as well as non-intrusive (optical) data. A commercial-type bypass engine burning aviation fuel was used in this test series. The test matrix was set by parametrically selecting the temperature, pressure, and flow rate at sea-level-static and different altitude conditions to obtain a parametric set of data.
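A parametric test matrix of the kind described, every combination of a few selected condition levels, is just a Cartesian product. The sketch below uses placeholder set points, not the actual AEDC test levels:

```python
from itertools import product

# Illustrative set points only; not the levels used in the AEDC tests.
altitudes_km = [0, 9, 12]          # sea-level-static plus two altitudes
power_settings = [0.3, 0.65, 1.0]  # fraction of rated thrust

test_matrix = [
    {"point": i + 1, "altitude_km": alt, "power": pwr}
    for i, (alt, pwr) in enumerate(product(altitudes_km, power_settings))
]
print(len(test_matrix), test_matrix[0])
```

Sweeping each condition independently is what makes the resulting emissions database "parametric": the effect of altitude can be separated from the effect of power setting.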
Uhlirova, Hana; Tian, Peifang; Kılıç, Kıvılcım; Thunemann, Martin; Sridhar, Vishnu B; Chmelik, Radim; Bartsch, Hauke; Dale, Anders M; Devor, Anna; Saisan, Payam A
2018-05-04
The importance of sharing experimental data in neuroscience grows with the amount and complexity of the data acquired and the variety of techniques used to obtain and process them. However, the majority of experimental data, especially from individual studies in regular-sized laboratories, never reach the wider research community. A graphical user interface (GUI) engine called Neurovascular Network Explorer 2.0 (NNE 2.0) has been created as a tool for simple, low-cost sharing and exploration of vascular imaging data. NNE 2.0 interacts with a database containing optogenetically evoked dilation/constriction time-courses of individual vessels measured in mouse somatosensory cortex in vivo by 2-photon microscopy. NNE 2.0 enables selection and display of the time-courses based on different criteria (subject, branching order, cortical depth, vessel diameter, arteriolar tree) as well as simple mathematical manipulation (e.g., averaging and peak normalization) and data export. It supports visualization of the vascular network in 3D and enables localization of the individual functional vessel-diameter measurements within vascular trees. NNE 2.0, its source code, and the corresponding database are freely downloadable from the UCSD Neurovascular Imaging Laboratory website. The source code can be used to explore the associated database or as a template for databasing and sharing one's own experimental results, provided they are in the appropriate format.
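The "simple mathematical manipulation" mentioned, peak normalization followed by averaging of time-courses, can be sketched in a few lines. The data here are arbitrary made-up responses, not values from the NNE 2.0 database:

```python
def peak_normalize(tc):
    """Scale a dilation time-course so its largest-magnitude sample is 1."""
    peak = max(abs(v) for v in tc)
    return [v / peak for v in tc]

def average(timecourses):
    """Pointwise mean across equal-length time-courses."""
    n = len(timecourses)
    return [sum(vals) / n for vals in zip(*timecourses)]

# Two hypothetical vessel-diameter responses (arbitrary units).
a = [0.0, 2.0, 4.0, 2.0]
b = [0.0, 1.0, 2.0, 1.0]
mean_tc = average([peak_normalize(a), peak_normalize(b)])
print(mean_tc)
```

Normalizing before averaging keeps a few large-amplitude vessels from dominating the mean response shape, which is presumably why both operations are offered together.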
A case study for a digital seabed database: Bohai Sea engineering geology database
NASA Astrophysics Data System (ADS)
Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang
2006-07-01
This paper discusses the design of an Oracle-based Bohai Sea engineering geology database, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis, and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database and used the powerful data management functions provided by the object-relational Oracle database to organize and manage the storage space and improve security. By these means, the database provides rapid and highly effective data storage, maintenance, and query performance, satisfying the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
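Translating a conceptual model into a physical relational schema, the logical/physical step the abstract describes, looks roughly like the fragment below. The two tables and their columns are a hypothetical simplification, and sqlite3 stands in for the Oracle back end:

```python
import sqlite3

# Hypothetical, heavily simplified schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE survey_station (
    station_id INTEGER PRIMARY KEY,
    lon REAL NOT NULL,
    lat REAL NOT NULL
);
CREATE TABLE sediment_sample (
    sample_id INTEGER PRIMARY KEY,
    station_id INTEGER NOT NULL REFERENCES survey_station(station_id),
    depth_m REAL,
    shear_strength_kpa REAL
);
""")
conn.execute("INSERT INTO survey_station VALUES (1, 120.5, 38.9)")
conn.execute("INSERT INTO sediment_sample VALUES (10, 1, 2.5, 18.0)")
row = conn.execute("""
    SELECT s.lon, s.lat, m.shear_strength_kpa
    FROM sediment_sample m JOIN survey_station s USING (station_id)
""").fetchone()
print(row)
```

The foreign-key relationship is the relational counterpart of an association in the UML conceptual model, which is the mapping the paper's design process formalizes.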
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.
2002-01-01
Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict, with reasonable engineering certainty, the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench-rig-type specimens under uniaxial loading. Because of the small and very narrow database, acceptance of these codes within the aerospace community has been limited. An alternative approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog, and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between NASA GRC, United Airlines, the USAF Wright Laboratory, the U.S. Army Research Laboratory, and the Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks, and combustors. These airline data will be used to benchmark and compare existing life prediction codes.
2001-09-01
replication) -- all from Visual Basic and VBA. In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of... This thesis describes our use of the Spiral Development Model to... versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET.
Configuration management program plan for Hanford site systems engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, A.G.
This plan establishes the integrated configuration management program for the evolving technical baseline developed through the systems engineering process. This configuration management program aligns with the criteria identified in the DOE Standard, DOE-STD-1073-93. Included are specific requirements for control of the systems engineering RDD-100 database, and electronic data incorporated in the database that establishes the Hanford site technical baseline.
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract defined seven tasks that are reported under this contract deliverable: the F-1A Restart Study, the J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and the Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, newly available processes, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format that is easy to use and modify while also being comprehensive in the level of detail available. The database structure includes extensive engine information and allows parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major-subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point-design conceptual drawings and component design. The task concentrated on bipropellant engines but also examined tripropellant engines.
The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines for the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed final task reports are available on each individual task.
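Parametric cost estimating relationships of the kind the CER task produced are conventionally power laws in a driver such as mass. The sketch below fits and evaluates one from two hypothetical data points; the coefficients and costs are invented for illustration and are not the study's actual CERs:

```python
import math

def cer_cost(weight_kg, a, b):
    """Power-law cost estimating relationship: cost = a * weight^b.
    Coefficients a, b here are illustrative, not from the study."""
    return a * weight_kg ** b

# Fit a and b from two hypothetical (weight, cost-in-$M) data points.
(w1, c1), (w2, c2) = (1000.0, 50.0), (4000.0, 120.0)
b = math.log(c2 / c1) / math.log(w2 / w1)
a = c1 / w1 ** b
print(round(cer_cost(2000.0, a, b), 1))
```

An exponent below 1, as here, encodes the economy of scale typically assumed for engine development cost versus engine size.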
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele
2017-01-01
Sigma (σ) receptors are accepted as a distinct receptor class consisting of two subtypes: sigma-1 (σ1) and sigma-2 (σ2). The two receptor subtypes have specific drug actions, pharmacological profiles, and molecular characteristics. The σ2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ2 receptor structure has not been disclosed, and researchers rely on σ2 receptor radioligand binding assays to understand the receptor's pharmacological behavior and to design new lead compounds. Here we present the Sigma-2 Receptor Selective Ligands Database (S2RSLDB), a manually curated database of σ2 receptor selective ligands containing more than 650 compounds. The database is built from chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without login, and with its powerful search engine the user may build complex queries, sort tabulated results, generate color-coded 2D and 3D graphs, and download the data for additional screening. The collection reported here is extremely useful for the development of new ligands endowed with σ2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly, and in the near future an online submission form will be made available to help keep the database widely used in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.
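A query of the kind the abstract describes, filtering ligands by σ2 affinity and σ1/σ2 selectivity, amounts to a filter-and-sort over binding records. The records and field names below are invented for illustration and do not reflect the actual S2RSLDB schema:

```python
# Hypothetical ligand records; field names are illustrative only.
ligands = [
    {"name": "cmpd-A", "ki_s2_nm": 8.0, "ki_s1_nm": 900.0},
    {"name": "cmpd-B", "ki_s2_nm": 35.0, "ki_s1_nm": 70.0},
    {"name": "cmpd-C", "ki_s2_nm": 4.0, "ki_s1_nm": 1200.0},
]

def select_ligands(records, max_ki_s2=10.0, min_selectivity=50.0):
    """Keep high-affinity sigma-2 ligands whose sigma-1/sigma-2 Ki ratio
    exceeds the selectivity threshold, sorted by affinity."""
    hits = [r for r in records
            if r["ki_s2_nm"] <= max_ki_s2
            and r["ki_s1_nm"] / r["ki_s2_nm"] >= min_selectivity]
    return sorted(hits, key=lambda r: r["ki_s2_nm"])

print([r["name"] for r in select_ligands(ligands)])
```

Because a lower Ki means tighter binding, selectivity for σ2 over σ1 is the ratio Ki(σ1)/Ki(σ2), which is why the filter divides in that order.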
Federated web-accessible clinical data management within an extensible neuroimaging database.
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
2010-12-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site.
U.S. Army Research Laboratory (ARL) multimodal signatures database
NASA Astrophysics Data System (ADS)
Bennett, Kelly
2008-04-01
The U.S. Army Research Laboratory (ARL) Multimodal Signatures Database (MMSDB) is a centralized collection of sensor data of various modalities that are co-located and co-registered. The signatures include ground and air vehicles, personnel, mortar, artillery, small arms gunfire from potential sniper weapons, explosives, and many other high-value targets. This data is made available to the Department of Defense (DoD) and DoD contractors, intelligence agencies, other government agencies (OGA), and academia for use in developing target detection, tracking, and classification algorithms and systems to protect our Soldiers. A platform-independent Web interface disseminates the signatures to researchers and engineers within the scientific community. Hierarchical Data Format 5 (HDF5) signature models provide an excellent solution for the sharing of complex multimodal signature data for algorithmic development and database requirements. Many open source tools for viewing and plotting HDF5 signatures are available over the Web. Seamless integration of HDF5 signatures is possible in both proprietary computational environments, such as MATLAB, and Free and Open Source Software (FOSS) computational environments, such as Octave and Python, for performing signal processing, analysis, and algorithm development. Future developments include extending the Web interface into a portal system for accessing ARL algorithms and signatures, High Performance Computing (HPC) resources, and integrating existing database and signature architectures into sensor networking environments.
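Co-registration of multimodal sensor data, as described above, means pairing samples from different channels on a common timeline. The MMSDB itself stores HDF5 files; the pure-Python sketch below only illustrates the nearest-timestamp pairing idea, with made-up acoustic and seismic samples:

```python
import bisect

def coregister(acoustic, seismic):
    """Pair each acoustic sample with the nearest-in-time seismic sample.
    Each channel is a list of (timestamp_s, value) tuples sorted by time."""
    times = [t for t, _ in seismic]
    pairs = []
    for t, v in acoustic:
        i = bisect.bisect_left(times, t)
        # choose the closer of the two neighbouring seismic samples
        best = min([j for j in (i - 1, i) if 0 <= j < len(times)],
                   key=lambda j: abs(times[j] - t))
        pairs.append((t, v, seismic[best][1]))
    return pairs

acoustic = [(0.00, 0.1), (0.10, 0.8), (0.20, 0.3)]  # hypothetical samples
seismic = [(0.02, 1.0), (0.11, 5.0), (0.19, 2.0)]
print(coregister(acoustic, seismic))
```

Once channels are aligned this way, a classifier can consume one fused feature vector per time step instead of reconciling raw streams itself.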
NCAD, a database integrating the intrinsic conformational preferences of non-coded amino acids
Revilla-López, Guillem; Torras, Juan; Curcó, David; Casanovas, Jordi; Calaza, M. Isabel; Zanuy, David; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Grodzinski, Piotr; Alemán, Carlos
2010-01-01
Peptides and proteins find an ever-increasing number of applications in the biomedical and materials engineering fields. The use of non-proteinogenic amino acids endowed with diverse physicochemical and structural features opens the possibility to design proteins and peptides with novel properties and functions. Moreover, non-proteinogenic residues are particularly useful to control the three-dimensional arrangement of peptidic chains, which is a crucial issue for most applications. However, information regarding such amino acids (also called non-coded, non-canonical or non-standard) is usually scattered among publications specialized in quite diverse fields as well as in patents. Making all these data useful to the scientific community requires new tools and a framework for their assembly and coherent organization. We have successfully compiled, organized and built a database (NCAD, Non-Coded Amino acids Database) containing information about the intrinsic conformational preferences of non-proteinogenic residues determined by quantum mechanical calculations, as well as bibliographic information about their synthesis, physical and spectroscopic characterization, conformational propensities established experimentally, and applications. The architecture of the database is presented in this work together with the first family of non-coded residues included, namely, α-tetrasubstituted α-amino acids. Furthermore, the usefulness of NCAD is demonstrated through a test-case application example. PMID:20455555
Metabolic pathways for the whole community.
Hanson, Niels W; Konwar, Kishori M; Hawley, Alyse K; Altman, Tomer; Karp, Peter D; Hallam, Steven J
2014-07-22
A convergence of high-throughput sequencing and computational power is transforming biology into information science. Despite these technological advances, converting bits and bytes of sequence information into meaningful insights remains a challenging enterprise. Biological systems operate on multiple hierarchical levels from genomes to biomes. Holistic understanding of biological systems requires agile software tools that permit comparative analyses across multiple information levels (DNA, RNA, protein, and metabolites) to identify emergent properties, diagnose system states, or predict responses to environmental change. Here we adopt the MetaPathways annotation and analysis pipeline and Pathway Tools to construct environmental pathway/genome databases (ePGDBs) that describe microbial community metabolism using MetaCyc, a highly curated database of metabolic pathways and components covering all domains of life. We evaluate Pathway Tools' performance on three datasets of differing complexity and coding potential, including simulated metagenomes, a symbiotic system, and the Hawaii Ocean Time-series. We define accuracy and sensitivity relationships between read length, coverage, and pathway recovery, and evaluate the impact of taxonomic pruning on ePGDB construction and interpretation. The resulting ePGDBs provide interactive metabolic maps, predict emergent metabolic pathways associated with biosynthesis and energy production, and differentiate between genomic potential and phenotypic expression across defined environmental gradients. This multi-tiered analysis provides the user community with specific operating guidelines, performance metrics and prediction hazards for more reliable ePGDB construction and interpretation. Moreover, it demonstrates the power of Pathway Tools in predicting metabolic interactions in natural and engineered ecosystems.
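The accuracy and sensitivity relationships mentioned above reduce, at their core, to scoring predicted pathways against a known reference set. A minimal sketch of such scoring follows; the pathway names are invented placeholders, not actual MetaCyc identifiers, and this is our own simplification rather than the paper's evaluation code.

```python
# Score predicted pathways against a known reference set, as one might
# when evaluating ePGDB pathway recovery on a simulated metagenome.
def pathway_recovery(predicted, reference):
    predicted, reference = set(predicted), set(reference)
    tp = len(predicted & reference)            # correctly recovered pathways
    sensitivity = tp / len(reference)          # fraction of true pathways found
    precision = tp / len(predicted) if predicted else 0.0
    return sensitivity, precision

reference = {"denitrification", "glycolysis", "TCA-cycle", "N-fixation"}
predicted = {"denitrification", "glycolysis", "TCA-cycle", "methanogenesis"}
sens, prec = pathway_recovery(predicted, reference)
print(f"sensitivity={sens:.2f} precision={prec:.2f}")  # → sensitivity=0.75 precision=0.75
```

Sweeping read length or coverage and re-running this scoring yields exactly the kind of recovery curves the abstract describes.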
Mohanty, Anee; Wu, Yichao; Cao, Bin
2014-10-01
In natural and engineered environments, microorganisms often exist as complex communities, which are key to the health of ecosystems and the success of bioprocesses in various engineering applications. With the rapid development of nanotechnology in recent years, engineered nanomaterials (ENMs) have come to be considered one type of emerging contaminant that poses great potential risks to the proper function of microbial communities in natural and engineered ecosystems. The impacts of ENMs on microorganisms have attracted increasing research attention; however, most studies have focused on the antimicrobial activities of ENMs at the single-cell and population levels. Elucidating the influence of ENMs on microbial communities represents a critical step toward a comprehensive understanding of the ecotoxicity of ENMs. In this mini-review, we summarize and discuss recent research on the impacts of ENMs on microbial communities in natural and engineered ecosystems, with an emphasis on their influences on community structure and function. We also highlight several important research topics which may be of great interest to the research community.
Identification and Classification of Common Risks in Space Science Missions
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Chattopadhyay, Debarati; Hanna, Robert A.; Port, Daniel; Eggleston, Sabrina
2010-01-01
Due to the highly constrained schedules and budgets that NASA missions must contend with, the identification and management of cost, schedule and risks in the earliest stages of the lifecycle is critical. At the Jet Propulsion Laboratory (JPL) it is the concurrent engineering teams that first address these items in a systematic manner. Foremost of these concurrent engineering teams is Team X. Started in 1995, Team X has carried out over 1000 studies, dramatically reducing the time and cost involved, and has been the model for other concurrent engineering teams both within NASA and throughout the larger aerospace community. The ability to do integrated risk identification and assessment was first introduced into Team X in 2001. Since that time the mission risks identified in each study have been kept in a database. In this paper we describe how the Team X risk process is evolving, highlighting the strengths and weaknesses of the different approaches. The paper especially focuses on the identification and classification of common risks that have arisen during Team X studies of space-based science missions.
Thermal Performance Data Services (TPDS)
NASA Technical Reports Server (NTRS)
French, Richard T.; Wright, Michael J.
2013-01-01
Initiated as a NASA Engineering and Safety Center (NESC) assessment in 2009, the Thermal Performance Database (TPDB) was a response to the need for a centralized thermal performance data archive. The assessment was renamed Thermal Performance Data Services (TPDS) in 2012; the undertaking has had two fronts of activity: the development of a repository software application and the collection of historical thermal performance data sets from dispersed sources within the thermal performance community. This assessment has delivered a foundational tool on which additional features should be built to increase efficiency, expand the protection of critical Agency investments, and provide new discipline-advancing work opportunities. This report contains the information from the assessment.
CropEx Web-Based Agricultural Monitoring and Decision Support
NASA Technical Reports Server (NTRS)
Harvey, Craig; Lawhead, Joel
2011-01-01
CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. One database and data management system automatically retrieves and ingests data for the area of interest; another stores results of the processing and supports the DSS. The processing engine allows server-side analysis of imagery with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, data ingestion system, server-side processing engine, and a database processing engine. It contains a Web-based interface that has multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available to the processing engine immediately upon the data's availability to the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest.
This process happens each time new data is ingested and processed for the area of interest, and upon subsequent database entries, the database processing engine qualifies each value for each area of interest and conducts a logical processing of results indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to easily be modified for varied use in the real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
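The index-then-threshold step the database processing engine performs can be sketched as follows. The NDVI formula is the standard vegetation index; the reflectance values, the 0.3 threshold, and the function names are illustrative assumptions, not CropEx defaults.

```python
# Compute NDVI per pixel from red and near-infrared reflectance, then
# flag an area of interest (AOI) whose mean index falls below an
# operator-set threshold, mimicking the threshold-exceedance reports.
def ndvi(red, nir):
    return [(n - r) / (n + r) for r, n in zip(red, nir)]

def check_threshold(index_values, threshold=0.3):
    mean_index = sum(index_values) / len(index_values)
    return mean_index, mean_index < threshold  # (stored index, alert flag)

red = [0.10, 0.12, 0.30, 0.35]   # red-band reflectance for one AOI (toy data)
nir = [0.50, 0.55, 0.33, 0.36]   # near-infrared reflectance (toy data)
values = ndvi(red, nir)
mean_index, alert = check_threshold(values)
print(round(mean_index, 3), alert)
```

Each new ingest would append one such index value per AOI, and the stored series is what the reporting step compares against its thresholds.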
NASA Astrophysics Data System (ADS)
Stepanov, Sergey
2013-03-01
X-Ray Server (x-server.gmca.aps.anl.gov) is a WWW-based computational server for modeling of X-ray diffraction, reflection and scattering data. The modeling software operates directly on the server and can be accessed remotely either from web browsers or from user software. In the latter case the server can be employed as a software library or a data-fitting engine. As the server recently surpassed the milestones of 15 years online and 1.5 million calculations, it has accumulated a number of technical solutions that are discussed in this paper. The developed approaches to detecting physical model limits and user calculation failures, solutions to spam and firewall problems, ways to involve the community in replenishing databases, and methods to teach users automated access to the server programs may be helpful for X-ray researchers interested in using the server or sharing their own software online.
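Automated access of the kind described above typically means constructing an HTTP request to a calculation endpoint from user code. The sketch below only builds such a request with the standard library; the CGI path and parameter names are hypothetical placeholders, not the server's real API, and no network call is made.

```python
from urllib.parse import urlencode

# Hypothetical calculation endpoint; the real server's program paths
# and parameter names differ and should be taken from its documentation.
BASE = "https://x-server.gmca.aps.anl.gov/cgi/example_form.exe"

def build_request(params):
    # Sort keys so the generated URL is deterministic and cache-friendly.
    return BASE + "?" + urlencode(sorted(params.items()))

url = build_request({"xway": 2, "wave": 12.398, "code": "Silicon"})
print(url)
# A client would then fetch `url` (e.g. with urllib.request) and parse
# the returned page or data file, looping over parameter values to use
# the server as a fitting engine.
```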
SSME environment database development
NASA Technical Reports Server (NTRS)
Reardon, John
1987-01-01
The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.
DRUMS: a human disease related unique gene mutation search engine.
Li, Zuofeng; Liu, Xingnan; Wen, Jingran; Xu, Ye; Zhao, Xin; Li, Xuan; Liu, Lei; Zhang, Xiaoyan
2011-10-01
With the completion of the human genome project and the development of new methods for gene variant detection, the integration of mutation data and its phenotypic consequences has become more important than ever. Among all available resources, locus-specific databases (LSDBs) curate one or more specific genes' mutation data along with high-quality phenotypes. Although some genotype-phenotype data from LSDBs have been integrated into central databases, little effort has been made to integrate all these data through a search engine approach. In this work, we have developed the disease-related unique gene mutation search engine (DRUMS), a convenient tool for biologists or physicians to retrieve gene variant and related phenotype information. Gene variant and phenotype information is stored in a gene-centred relational database. Moreover, the relationships between mutations and diseases are indexed by uniform resource identifiers from LSDBs or other central databases. By querying DRUMS, users can access the most popular mutation databases under one interface. DRUMS can be treated as a domain-specific search engine. By using web crawling, indexing, and searching technologies, it provides an efficient interface for searching and retrieving mutation data and their relationships to diseases. The system is freely accessible at http://www.scbit.org/glif/new/drums/index.html. © 2011 Wiley-Liss, Inc.
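The indexing-and-searching approach a mutation search engine builds on can be illustrated with a minimal inverted index. The records, field names, and tokenization below are invented for illustration; DRUMS's actual index structure is not described at this level in the abstract.

```python
from collections import defaultdict

# Toy variant records; the text mixes gene symbols, HGVS-style variant
# names, and phenotype words, as a crawled LSDB entry might.
records = [
    {"id": 1, "text": "BRCA1 c.68_69delAG hereditary breast cancer"},
    {"id": 2, "text": "CFTR p.Phe508del cystic fibrosis"},
    {"id": 3, "text": "BRCA1 c.5266dupC breast ovarian cancer"},
]

# Build the inverted index: token -> set of record ids containing it.
index = defaultdict(set)
for rec in records:
    for token in rec["text"].lower().split():
        index[token].add(rec["id"])

def search(query):
    # AND semantics: return ids of records matching every query token.
    sets = [index.get(tok, set()) for tok in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

print(search("brca1 cancer"))  # → [1, 3]
```

A production engine adds crawling to populate `records`, ranking over the matched ids, and links from each hit back to its source LSDB via the stored identifiers.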
A Taxonomic Search Engine: Federating taxonomic databases using web services
Page, Roderic DM
2005-01-01
Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names. PMID:15757517
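The federated pattern the TSE demonstrates (query several sources in parallel, normalize the answers into one record shape) can be sketched as follows. The real TSE is written in PHP and talks to live services; here the sources are mock functions, and the record fields are our own simplification. The NCBI and ITIS identifiers shown are real values for Homo sapiens, used only as plausible sample data.

```python
from concurrent.futures import ThreadPoolExecutor

# Mock "databases": each takes a name and returns a record in a uniform
# shape, with id=None meaning the source had no hit.
def itis(name):  return {"source": "ITIS", "name": name, "id": "itis:180092"}
def ncbi(name):  return {"source": "NCBI", "name": name, "id": "ncbi:9606"}
def ubio(name):  return {"source": "uBio", "name": name, "id": None}

def federated_search(name, sources):
    # Query all sources concurrently, as a federated web service would.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda src: src(name), sources))
    # Summarize in a consistent format, keeping only sources with a hit.
    return [r for r in results if r["id"] is not None]

hits = federated_search("Homo sapiens", [itis, ncbi, ubio])
print([h["source"] for h in hits])  # → ['ITIS', 'NCBI']
```

Swapping a mock for a real HTTP client turns this into the federation the paper describes; the merge step is unchanged.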
Preparing engineers for the challenges of community engagement
NASA Astrophysics Data System (ADS)
Harsh, Matthew; Bernstein, Michael J.; Wetmore, Jameson; Cozzens, Susan; Woodson, Thomas; Castillo, Rafael
2017-11-01
Despite calls to address global challenges through community engagement, engineers are not formally prepared to engage with communities. Little research has been done on means to address this 'engagement gap' in engineering education. We examine the efficacy of an intensive, two-day Community Engagement Workshop for engineers, designed to help engineers better look beyond technology, listen to and learn from people, and empower communities. We assessed the efficacy of the workshop in a non-experimental pre-post design using a questionnaire and a concept map. Questionnaire results indicate participants came away better able to ask questions more broadly inclusive of non-technological dimensions of engineering projects. Concept map results indicate participants have a greater understanding of ways social factors shape complex material systems after completing the programme. Based on the workshop's strengths and weaknesses, we discuss the potential of expanding and supplementing the programme to help engineers account for social aspects central to engineered systems.
Optics Toolbox: An Intelligent Relational Database System For Optical Designers
NASA Astrophysics Data System (ADS)
Weller, Scott W.; Hopkins, Robert E.
1986-12-01
Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two and a half field of view near 28 degrees, you might enter the following:

FNO < 2.0 and FOV of 28 degrees ± 5%

Again for the purpose of this discussion, we will define an expert system as a program which contains expert knowledge, can ask intelligent questions, and can form conclusions based on the answers given and the knowledge which it contains. Most expert systems store this knowledge in the form of rules-of-thumb, which are written in an English-like language, and which are easily modified by the user. An example rule is:

IF require microscope objective in air and require NA > 0.9 THEN suggest the use of an oil immersion objective

The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first-order layout, which is sufficiently diagnostic in nature to permit useful rules to be written.
This first-order expert would emulate an expert designer as he interacted with a customer for the first time: asking the right questions, forming conclusions, and making suggestions. With these objectives in mind, we have developed the Optics Toolbox. Optics Toolbox is actually two programs in one: it is a powerful relational database system with twenty-one search parameters, four search modes, and multi-database support, as well as a first-order optical design expert system with a rule interpreter which has full access to the relational database. The system schematic is shown in Figure 1.
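The rule-interpreter idea above can be sketched as a tiny forward-chaining engine. The first rule is the paper's own example; the second rule, the fact strings, and the (conditions, conclusion) representation are our own simplification, not the Optics Toolbox rule language.

```python
# Rules as (set of condition facts, conclusion fact) pairs.
rules = [
    ({"require microscope objective in air", "NA > 0.9"},
     "suggest oil immersion objective"),
    ({"suggest oil immersion objective"},          # hypothetical follow-on rule
     "ask about cover-glass thickness"),
]

def infer(facts, rules):
    """Forward chaining: fire rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, conclusion becomes a fact
                changed = True
    return facts

result = infer({"require microscope objective in air", "NA > 0.9"}, rules)
print("suggest oil immersion objective" in result)  # → True
```

Note how the second rule fires off the first rule's conclusion; that chaining is what lets a first-order expert drive a question-and-suggestion dialogue from a small rule base.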
A Summary of the Naval Postgraduate School Research Program
1989-08-30
Topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; A Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database.
Drinking water quality in Indigenous communities in Canada and health outcomes: a scoping review
Bradford, Lori E. A.; Bharadwaj, Lalita A.; Okpalauwaekwe, Udoka; Waldner, Cheryl L.
2016-01-01
Background Many Indigenous communities in Canada live with high-risk drinking water systems and drinking water advisories and experience health status and water quality below those of the general population. A scoping review of research examining drinking water quality and its relationship to Indigenous health was conducted. Objective The study was undertaken to identify the extent of the literature, summarize current reports and identify research needs. Design A scoping review was designed to identify peer-reviewed literature that examined challenges related to drinking water and health in Indigenous communities in Canada. Key search terms were developed and mapped onto five bibliographic databases (MEDLINE/PubMed, Web of Knowledge, SciVerse Scopus, Taylor and Francis online journals and Google Scholar). Online searches for grey literature using relevant government websites were also completed. Results Sixteen articles (of 518 identified: 156 from bibliographic databases, 362 from grey literature) met the criteria for inclusion (contained keywords; publication year 2000–2015; peer-reviewed and from Canada). Studies were quantitative (8), qualitative (5) or mixed (3) and included case, cohort, cross-sectional and participatory designs. In most articles, no definition of "health" was given (14/16), and the primary health issue described was gastrointestinal illness (12/16). Challenges to the study of health and well-being with respect to drinking water in Indigenous communities included irregular funding, remote locations, ethical approval processes, small sample sizes and missing data. Conclusions Research on drinking water and health outcomes in Indigenous communities in Canada is limited and occurs on an opportunistic basis. There is a need for more research funding and inquiry to inform policy decisions for improvements of water quality and health-related outcomes in Indigenous communities.
A coordinated network looking at First Nations water and health outcomes, a database to store and create access to research findings, increased funding and time frames for funding, and more decolonizing and community-based participatory research aimed at understanding the relationship between drinking water quality and health outcomes in First Nations communities in Canada are needed. PMID:27478143
Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France
2013-10-01
To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec City, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines was determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine.
The mean (SD) time of the initial search for each question was 23.5 (7.6) minutes with InfoClinique and 22.3 (7.8) minutes with the Trip database (P = .30). Participants' perceptions of each engine's effect on the decision-making process were very positive and similar for both search engines. Family medicine residents' ability to provide correct answers to clinical questions increased dramatically and similarly with the use of both InfoClinique and the Trip database. These tools have strong potential to increase the quality of medical care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minjing; Qian, Wei-jun; Gao, Yuqian
The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as a surrogate for the functional enzymes in a microbial community that catalyze biogeochemical reactions. A major challenge in applying such models is the difficulty of quantitatively measuring functional biomass to constrain and validate them. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we propose an enzyme-based model that can incorporate omics data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes as time-variable catalysts for biogeochemical reactions and applies a biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkage among functional genes, enzymes, and metabolites that needs to be incorporated in the model. The application of the model was demonstrated using denitrification as an example by comparing model-simulated with measured functional enzymes, genes, and denitrification substrates and intermediates.
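An enzyme-based variant of Monod kinetics, where a time-variable enzyme pool rather than bulk biomass scales the reaction rate, can be sketched with simple Euler integration. The rate law's form follows the standard Monod term; all parameter values, and the linear enzyme synthesis/decay terms, are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal enzyme-based kinetic sketch: substrate S (e.g. nitrate) is
# consumed at a Monod-type rate scaled by enzyme pool E, while E is
# produced in proportion to the reaction and decays first-order.
def simulate(S0=10.0, E0=0.1, k=2.0, Ks=1.5, alpha=0.05, gamma=0.02,
             dt=0.01, steps=5000):
    S, E = S0, E0
    for _ in range(steps):
        rate = k * E * S / (Ks + S)           # enzyme-catalyzed Monod term
        S = max(S - rate * dt, 0.0)           # substrate consumption
        E += (alpha * rate - gamma * E) * dt  # enzyme synthesis minus decay
    return S, E

S_end, E_end = simulate()
print(f"substrate remaining: {S_end:.3f}, enzyme pool: {E_end:.3f}")
```

In the omics-informed setting, measured enzyme abundances would replace or constrain the simulated `E` trajectory instead of being predicted from biomass.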
Déjà vu: a database of highly similar citations in the scientific literature
Errami, Mounir; Sun, Zhaohui; Long, Tara C.; George, Angela C.; Garner, Harold R.
2009-01-01
In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine the public confidence in the scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as ‘the’ biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org. PMID:18757888
Jet aircraft engine exhaust emissions database development: Year 1990 and 2015 scenarios
NASA Technical Reports Server (NTRS)
Landau, Z. Harry; Metwally, Munir; Vanalstyne, Richard; Ward, Clay A.
1994-01-01
Studies relating to environmental emissions associated with High Speed Civil Transport (HSCT), military jet, and charter jet aircraft were conducted by McDonnell Douglas Aerospace Transport Aircraft. The report includes engine emission results for the baseline 1990 charter and military scenarios and projected jet engine emission results for a 2015 scenario with a Mach 1.6 HSCT charter and military fleet. Discussions of the methodology used in formulating these databases are provided.
[Application of atomic absorption spectrometry in engine knock detection].
Chen, Li-Dan
2013-02-01
Existing diagnostic methods based on human experience and on auxiliary diagnostic apparatus make it difficult to diagnose engine knock quickly. Atomic absorption spectrometry was therefore applied in an innovative way to detect automobile engine knock. The Fe, Al, Cu, Cr and Pb content was determined by atomic absorption spectrometry in 35 groups of Audi A6 engine oil samples covering 2,000 to 70,000 kilometers of travel at sampling intervals of 2,000 kilometers, and a database of the primary metal content of the same automobile engine at different mileages was established. The research shows that each main metal content fluctuates within a certain range. In practical engineering applications, determining the main metal content of engine oil and comparing it with the database values can help diagnose the type and location of engine knock without disassembling the engine, reduce vehicle maintenance costs, and improve the accuracy of engine knock fault diagnosis.
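The comparison step described above amounts to checking a measured oil analysis against the database's normal fluctuation band for the same engine model and mileage. A minimal sketch follows; the reference ranges and the sample values (in ppm) are invented for illustration, not values from the study.

```python
# Reference fluctuation ranges (ppm) for one mileage band; in the real
# workflow these would come from the mileage-indexed oil database.
REFERENCE = {
    "Fe": (5.0, 25.0), "Al": (1.0, 8.0), "Cu": (2.0, 12.0),
    "Cr": (0.5, 4.0), "Pb": (1.0, 10.0),
}

def flag_anomalies(measured, reference=REFERENCE):
    # Return every metal whose measured content falls outside its band.
    return {m: v for m, v in measured.items()
            if not (reference[m][0] <= v <= reference[m][1])}

sample = {"Fe": 31.0, "Al": 4.2, "Cu": 7.5, "Cr": 1.1, "Pb": 3.0}
print(flag_anomalies(sample))  # → {'Fe': 31.0}
```

An elevated iron reading like this would point toward ferrous-component wear, which is the kind of evidence the method uses to localize knock damage without disassembly.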
Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.
1995-01-01
Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414
The BioMart community portal: an innovative alternative to large, centralized data repositories
USDA-ARS?s Scientific Manuscript database
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and the transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high-quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multipurpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, a GDRS ladar, stereo CCD, several color cameras, a Global Positioning System (GPS), an Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built, from which ground truth for each sensor can then be extracted. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
International Soil Carbon Network (ISCN) Database v3-1
Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service; van Ingen, Catharine [Microsoft Research; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia; Beekwilder, Norman [University of Virginia
2016-01-01
The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.
Database Deposit Service through JOIS : JAFIC File on Food Industry and Osaka Urban Engineering File
NASA Astrophysics Data System (ADS)
Kataoka, Akihiro
JICST has launched a database deposit service for high-quality, small- to medium-sized databases that lack their own dissemination networks. The JAFIC File on Food Industry, produced by the Japan Food Industry Center, and the Osaka Urban Engineering File, produced by Osaka City, have been available through JOIS since March 2, 1987. This paper outlines these databases, focusing on the items they cover and how they are retrieved through JOIS.
Data collection procedures for the Software Engineering Laboratory (SEL) database
NASA Technical Reports Server (NTRS)
Heller, Gerard; Valett, Jon; Wild, Mary
1992-01-01
This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.
Revilla-López, Guillem; Rodríguez-Ropero, Francisco; Curcó, David; Torras, Juan; Calaza, M. Isabel; Zanuy, David; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Alemán, Carlos
2011-01-01
Recently, we reported a database (NCAD, Non-Coded Amino acids Database; http://recerca.upc.edu/imem/index.htm) that was built to compile information about the intrinsic conformational preferences of non-proteinogenic residues determined by quantum mechanical calculations, as well as bibliographic information about their synthesis, physical and spectroscopic characterization, the experimentally-established conformational propensities, and applications (J. Phys. Chem. B 2010, 114, 7413). The database initially contained the information available for α-tetrasubstituted α-amino acids. In this work, we extend NCAD to three families of compounds, which can be used to engineer peptides and proteins incorporating modifications at the –NHCO– peptide bond. Such families are: N-substituted α-amino acids, thio-α-amino acids, and diamines and diacids used to build retropeptides. The conformational preferences of these compounds have been analyzed and described based on the information captured in the database. In addition, we provide an example of the utility of the database and of the compounds it compiles in protein and peptide engineering. Specifically, the symmetry of a sequence engineered to stabilize the 310-helix with respect to the α-helix has been broken without perturbing significantly the secondary structure through targeted replacements using the information contained in the database. PMID:21491493
Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R; Hubbard, Simon J
2011-04-01
Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the postprocessing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web-server and a downloadable application that makes this routinely available to the proteomics community. The web server offers a range of outputs including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline also provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download.
Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R.; Hubbard, Simon J.
2013-01-01
Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the post-processing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web-server, and a downloadable application, which makes this routinely available to the proteomics community. The web server offers a range of outputs including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download. PMID:21222473
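The decoy-database strategy both abstracts above rely on estimates FDR by searching spectra against reversed or shuffled ("decoy") sequences: at a given score cutoff, FDR is approximately the number of decoy hits divided by the number of target hits among accepted PSMs. A minimal sketch of that calculation (the scores below are illustrative, not from the authors' pipeline):

```python
# Target-decoy FDR estimation for peptide-spectrum matches (PSMs).
# Each PSM carries a search-engine score and a flag saying whether it
# matched the real (target) or the decoy database.

def fdr_at_threshold(psms, threshold):
    """Estimate FDR as (#decoy hits) / (#target hits) above a cutoff."""
    targets = sum(1 for s, is_decoy in psms if s >= threshold and not is_decoy)
    decoys = sum(1 for s, is_decoy in psms if s >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

def threshold_for_fdr(psms, max_fdr):
    """Lowest score cutoff whose estimated FDR stays within max_fdr."""
    for score in sorted({s for s, _ in psms}):
        if fdr_at_threshold(psms, score) <= max_fdr:
            return score
    return None

# Illustrative PSMs as (score, is_decoy) pairs.
psms = [(9.1, False), (8.7, False), (8.2, False), (7.9, True),
        (7.5, False), (7.1, False), (6.8, True), (6.2, False)]
```

Combining multiple search engines, as the papers describe, then amounts to computing such FDRs per engine combination and integrating them over the PSM sets.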
NASA Astrophysics Data System (ADS)
Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.
2014-12-01
EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically sound, efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB, and validated for compliance with the ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with the help of information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using W3C PROV notation, which are stored in a Neo4j database. The curated metadata, along with the provenance information, are then re-published and can be accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges.
Key project components are described at the project's website (http://workspace.earthcube.org/cinergi), which also provides access to the initial resource inventory, the inventory metadata model, metadata entry forms and a collection of the community resource viewers.
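The harvest-validate-curate pipeline described above can be pictured as a pass over metadata records that flags missing required fields, auto-fills what an extractor can recover, and emits a provenance statement for every change. A schematic sketch (the field names and the toy "extractor" are invented stand-ins for ISO 19139 validation and W3C PROV statements):

```python
# Schematic metadata-validation pass: check required fields, apply an
# automatic fix where possible, and record a provenance entry per change.

REQUIRED = ("title", "abstract", "keywords")

def infer_keywords(record):
    """Toy 'information extractor': derive keywords from the title."""
    return [w.lower() for w in record.get("title", "").split() if len(w) > 4]

def validate(record):
    """Return (curated_record, provenance) for one harvested record."""
    curated = dict(record)
    prov = []
    for field in REQUIRED:
        if not curated.get(field):
            if field == "keywords":
                curated["keywords"] = infer_keywords(curated)
                prov.append(("wasGeneratedBy", "keyword-extractor", field))
            else:
                prov.append(("flaggedForManualCuration", "validator", field))
    return curated, prov

record = {"title": "Hydrography Dataset Catalog", "abstract": ""}
curated, prov = validate(record)
```

In the real pipeline the provenance tuples would be PROV statements persisted to the Neo4j store alongside the curated records.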
ERIC Educational Resources Information Center
Allesch, Jurgen; Preiss-Allesch, Dagmar
This report describes a study that identified major databases in operation in the 12 European Community countries that provide small- and medium-sized enterprises with information on opportunities for obtaining training and continuing education. Thirty-five databases were identified through information obtained from telephone interviews or…
MaizeGDB update: New tools, data, and interface for the maize model organism database
USDA-ARS?s Scientific Manuscript database
MaizeGDB is a highly curated, community-oriented database and informatics service to researchers focused on the crop plant and model organism Zea mays ssp. mays. Although some form of the maize community database has existed over the last 25 years, there have only been two major releases. In 1991, ...
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database that is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.
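A parametric propulsion database of the kind described stores closed-form sizing relations rather than point data; thrust, for instance, follows directly from propellant mass flow and specific impulse. A toy sketch of one such relation (the numbers are generic textbook values, not taken from the report's LOX/H2 or LOX/RP models):

```python
# Minimal parametric engine relation: vacuum thrust from mass flow
# and specific impulse, F = mdot * Isp * g0.

G0 = 9.80665  # standard gravity, m/s^2

def thrust_n(mdot_kg_s, isp_s):
    """Vacuum thrust in newtons for a given mass flow and Isp."""
    return mdot_kg_s * isp_s * G0

def mass_flow_kg_s(thrust_req_n, isp_s):
    """Propellant mass flow needed to deliver a required thrust."""
    return thrust_req_n / (isp_s * G0)

# Generic LOX/H2-class upper-stage numbers (illustrative only).
f = thrust_n(mdot_kg_s=25.0, isp_s=450.0)  # roughly 110 kN
```

A parametric database chains many such relations (chamber pressure, expansion ratio, engine mass, and so on) so a user can vary inputs and regenerate a consistent point design.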
NASA Astrophysics Data System (ADS)
Ruddell, B. L.; Merwade, V.
2010-12-01
Hydrology and geoscience education at the undergraduate and graduate levels may benefit greatly from a structured approach to pedagogy that utilizes modeling, authentic data, and simulation exercises to engage students in practice-like activities. Extensive evidence in the educational literature suggests that students retain more of their instruction, and attain higher levels of mastery over content, when interactive and practice-like activities are used to contextualize traditional lecture-based and theory-based instruction. However, it is also important that these activities carefully link the use of data and modeling to abstract theory, to promote transfer of knowledge to other contexts. While this type of data-based activity has been practiced in the hydrology classroom for decades, the hydrology community still lacks a set of standards and a mechanism for community-based development, publication, and review of this type of curriculum material. A community-based initiative is underway to develop a set of curriculum materials for teaching hydrology in the engineering and geoscience university classroom using outcomes-based, pedagogically rigorous modules that use authentic data and modeling experiences to complement traditional lecture-based instruction. A preliminary design for a community cyberinfrastructure for shared module development and publication, and for module topics, outcomes, metadata, and module interoperability standards, will be presented, along with the results of a series of community surveys and workshops informing this design.
GDRMS: a system for automatic extraction of the disease-centre relation
NASA Astrophysics Data System (ADS)
Yang, Ronggen; Zhang, Yue; Gong, Lejun
2012-01-01
With the rapid growth of the biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from this huge amount of literature has become a major challenge. GDRMS is a tool that extracts disease-gene and gene-gene relationships from the biomedical literature using text mining technology. It is a rule-based system that also provides disease-centre network visualization, constructs the disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to provide the research community with a valuable means of exploring disease-gene relationships relevant to the etiology of disease.
ERIC Educational Resources Information Center
Ohland, Matthew W.; Long, Russell A.
2016-01-01
Sharing longitudinal student record data and merging data from different sources is critical to addressing important questions being asked of higher education. The Multiple-Institution Database for Investigating Engineering Longitudinal Development (MIDFIELD) is a multi-institution, longitudinal, student record level dataset that is used to answer…
Astronomical databases of Nikolaev Observatory
NASA Astrophysics Data System (ADS)
Protsyuk, Y.; Mazhaev, A.
2008-07-01
Several astronomical databases were created at Nikolaev Observatory in recent years. The databases are built using the MySQL database engine and PHP scripts. They are available on the NAO web site http://www.mao.nikolaev.ua.
National Institute of Standards and Technology Data Gateway
SRD 30 NIST Structural Ceramics Database (Web, free access) The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.
Engineering Review Information System
NASA Technical Reports Server (NTRS)
Grems, III, Edward G. (Inventor); Henze, James E. (Inventor); Bixby, Jonathan A. (Inventor); Roberts, Mark (Inventor); Mann, Thomas (Inventor)
2015-01-01
A disciplinal engineering review computer information system and method by defining a database of disciplinal engineering review process entities for an enterprise engineering program, opening a computer supported engineering item based upon the defined disciplinal engineering review process entities, managing a review of the opened engineering item according to the defined disciplinal engineering review process entities, and closing the opened engineering item according to the opened engineering item review.
Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph
2009-01-01
Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. 
Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled by the database, scales well with concurrent change events, and can be adapted to add text classification capability to other biomedical databases. The system can be accessed at . PMID:19799796
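The vote-driven correction loop described above can be sketched as: aggregate user votes per database entry, and when votes contradict the automatic label by a sufficient margin, flip the label (the corrected entries then feed back into retraining). A simplified stand-in (the margin threshold and vote encoding are invented; the real system is database-driven and retrains the full classifier):

```python
# Simplified community-voting loop: votes override the automatic
# classifier's label once a margin of agreement is reached.

from collections import Counter

VOTE_MARGIN = 2  # net votes needed to overturn an automatic label

def aggregate(votes):
    """Net vote per entry id: +1 for 'relevant', -1 for 'not relevant'."""
    net = Counter()
    for entry_id, vote in votes:
        net[entry_id] += 1 if vote == "relevant" else -1
    return net

def reclassify(auto_labels, votes):
    """Return corrected labels; entries with enough net votes flip."""
    corrected = dict(auto_labels)
    for entry_id, net in aggregate(votes).items():
        if net >= VOTE_MARGIN:
            corrected[entry_id] = "relevant"
        elif net <= -VOTE_MARGIN:
            corrected[entry_id] = "not relevant"
    return corrected

auto = {"p1": "not relevant", "p2": "relevant", "p3": "relevant"}
votes = [("p1", "relevant"), ("p1", "relevant"), ("p1", "relevant"),
         ("p3", "not relevant")]
labels = reclassify(auto, votes)  # p1 flips; p3 lacks the margin
```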
NASA Astrophysics Data System (ADS)
Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams
2001-03-01
ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.
F-16XL and F-18 High Speed Acoustic Flight Test Databases
NASA Technical Reports Server (NTRS)
Kelly, J. J.; Wilson, M. R.; Rawls, J., Jr.; Norum, T. D.; Golub, R. A.
1999-01-01
This report presents the recorded acoustic data and the computed narrow-band and 1/3-octave band spectra produced by F-18 and F-16XL aircraft in subsonic flight over an acoustic array. Both broadband shock noise and turbulent mixing noise are observed in the spectra. Radar and C-band tracking systems provided the aircraft position, which enabled directivity and smear angles from the aircraft to each microphone to be computed. These angles are based on source emission time and thus give some idea about the directivity of the radiated sound field due to jet noise. The acoustic data described in this report have application to community noise analysis, noise source characterization, and validation of prediction models. A detailed description of the signal processing procedures is provided. Follow-on static tests of each aircraft were also conducted, for which engine data and far-field acoustic data are presented.
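The 1/3-octave spectra mentioned above are built on standard band centre frequencies spaced a factor of 2^(1/3) apart around a 1 kHz reference, with band edges a factor of 2^(1/6) either side of each centre. A quick sketch of that bookkeeping (base-2 spacing as in ANSI-style band definitions; the exact standard used in the flight test is not stated in the abstract):

```python
# Nominal 1/3-octave band centre and edge frequencies (base-2 spacing).

def third_octave_center(n):
    """Centre frequency of band n, with n = 0 at the 1 kHz reference."""
    return 1000.0 * 2.0 ** (n / 3.0)

def band_edges(fc):
    """Lower and upper edge of a 1/3-octave band centred on fc."""
    half = 2.0 ** (1.0 / 6.0)
    return fc / half, fc * half

# Seven bands around 1 kHz: ~500, 630, 794, 1000, 1260, 1587, 2000 Hz.
centers = [third_octave_center(n) for n in range(-3, 4)]
```

Summing narrow-band spectral levels within each band's edges yields the 1/3-octave band levels reported in such tests.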
Rosset, Saharon; Aharoni, Ehud; Neuvirth, Hani
2014-07-01
Issues of publication bias, lack of replicability, and false discovery have long plagued the genetics community. Proper utilization of public and shared data resources presents an opportunity to ameliorate these problems. We present an approach to public database management that we term Quality Preserving Database (QPD). It enables perpetual use of the database for testing statistical hypotheses while controlling false discovery and avoiding publication bias on the one hand, and maintaining testing power on the other hand. We demonstrate it on a use case of a replication server for GWAS findings, underlining its practical utility. We argue that a shift to using QPD in managing current and future biological databases will significantly enhance the community's ability to make efficient and statistically sound use of the available data resources.
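The QPD idea centres on testing many hypotheses against a shared database while keeping false discoveries controlled. As a hedged illustration of the kind of control involved, here is the standard Benjamini-Hochberg step-up procedure (a classic FDR method, not the authors' specific QPD mechanism):

```python
# Benjamini-Hochberg step-up procedure: given p-values, return which
# hypotheses are rejected at FDR level q.

def benjamini_hochberg(pvalues, q):
    """Return a list of booleans, True where the hypothesis is rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest 1-based rank k with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k_max = rank
    # Reject every hypothesis at or below that rank.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
flags = benjamini_hochberg(pvals, q=0.05)
```

A QPD-style server would additionally track how much of the FDR "budget" each submitted hypothesis consumes over the database's lifetime, which is what makes perpetual testing possible.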
Tinkering and Technical Self-Efficacy of Engineering Students at the Community College
ERIC Educational Resources Information Center
Baker, Dale R.; Wood, Lorelei; Corkins, James; Krause, Stephen
2015-01-01
Self-efficacy in engineering is important because individuals with low self-efficacy have lower levels of achievement and persistence in engineering majors. To examine self-efficacy among community college engineering students, an instrument to specifically measure two important aspects of engineering, tinkering and technical self-efficacy, was…
Hyper-X Engine Testing in the NASA Langley 8-Foot High Temperature Tunnel
NASA Technical Reports Server (NTRS)
Huebner, Lawrence D.; Rock, Kenneth E.; Witte, David W.; Ruf, Edward G.; Andrews, Earl H., Jr.
2000-01-01
Airframe-integrated scramjet engine tests have been completed at Mach 7 in the NASA Langley 8-Foot High Temperature Tunnel under the Hyper-X program. These tests provided critical engine data as well as design and database verification for the Mach 7 flight tests of the Hyper-X research vehicle (X-43), which will provide the first-ever airframe-integrated scramjet flight data. The first model tested was the Hyper-X Engine Model (HXEM), and the second was the Hyper-X Flight Engine (HXFE). The HXEM, a partial-width, full-height engine that is mounted on an airframe structure to simulate the forebody features of the X-43, was tested to provide data linking flowpath development databases to the complete airframe-integrated three-dimensional flight configuration and to isolate effects of ground testing conditions and techniques. The HXFE, an exact geometric representation of the X-43 scramjet engine mounted on an airframe structure that duplicates the entire three-dimensional propulsion flowpath from the vehicle leading edge to the vehicle base, was tested to verify the complete design as it will be flight tested. This paper presents an overview of these two tests, their importance to the Hyper-X program, and the significance of their contribution to scramjet database development.
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
2016-01-01
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges on their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has progressively also become a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high visibility to the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data. Database URL: http://www.collectf.org/.
Semantically Enabling Knowledge Representation of Metamorphic Petrology Data
NASA Astrophysics Data System (ADS)
West, P.; Fox, P. A.; Spear, F. S.; Adali, S.; Nguyen, C.; Hallett, B. W.; Horkley, L. K.
2012-12-01
More and more metamorphic petrology data is being collected around the world, and is now being organized together into different virtual data portals by means of virtual organizations. For example, there is the virtual data portal Petrological Database (PetDB, http://www.petdb.org) of the Ocean Floor that is organizing scientific information about geochemical data of ocean floor igneous and metamorphic rocks; and also The Metamorphic Petrology Database (MetPetDB, http://metpetdb.rpi.edu) that is being created by a global community of metamorphic petrologists in collaboration with software engineers and data managers at Rensselaer Polytechnic Institute. The current focus is to provide the ability for scientists and researchers to register their data and search the databases for information regarding sample collections. What we present here is the next step in evolution of the MetPetDB portal, utilizing semantically enabled features such as discovery, data casting, faceted search, knowledge representation, and linked data as well as organizing information about the community and collaboration within the virtual community itself. We take the information that is currently represented in a relational database and make it available through web services, SPARQL endpoints, semantic and triple-stores where inferencing is enabled. We will be leveraging research that has taken place in virtual observatories, such as the Virtual Solar Terrestrial Observatory (VSTO) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); vocabulary work done in various communities such as Observations and Measurements (ISO 19156), FOAF (Friend of a Friend), Bibo (Bibliography Ontology), and domain specific ontologies; enabling provenance traces of samples and subsamples using the different provenance ontologies; and providing the much needed linking of data from the various research organizations into a common, collaborative virtual observatory. 
In addition to better representing and presenting the actual data, we also look to organize and represent the knowledge and expertise behind the data. Domain experts hold a great deal of knowledge in their minds, in their presentations and publications, and elsewhere. This is not only a technical issue but also a social one, in that we need to encourage domain experts to share their knowledge in a way that can be searched and queried. With this additional focus, the MetPetDB site can be used more efficiently by other domain experts, and it can also be utilized by non-specialists, educating people about the importance of the work being done and helping develop future domain experts.
Gacesa, Ranko; Zucko, Jurica; Petursdottir, Solveig K; Gudmundsdottir, Elisabet Eik; Fridjonsson, Olafur H; Diminic, Janko; Long, Paul F; Cullum, John; Hranueli, Daslav; Hreggvidsson, Gudmundur O; Starcevic, Antonio
2017-06-01
The MEGGASENSE platform constructs relational databases of DNA or protein sequences. The default functional analysis uses 14,106 hidden Markov model (HMM) profiles based on sequences in the KEGG database. The Solr search engine allows sophisticated queries, and a BLAST search function is also incorporated. These standard capabilities were used to generate the SCATT database from the predicted proteome of Streptomyces cattleya. The implementation of a specialised metagenome database (AMYLOMICS) for bioprospecting of carbohydrate-modifying enzymes is described. In addition to standard assembly of reads, a novel 'functional' assembly was developed, in which screening of reads with the HMM profiles occurs before the assembly. The AMYLOMICS database incorporates additional HMM profiles for carbohydrate-modifying enzymes, and it is illustrated how the combination of HMM and BLAST analyses helps identify interesting genes. A variety of different proteome and metagenome databases have been generated by MEGGASENSE.
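The "functional" assembly described above screens raw reads against profile models first, so only reads scoring above a cutoff enter assembly. A toy stand-in for that pre-filter (a motif-count score replaces real HMM profile scoring, and the motif strings are invented):

```python
# Toy 'functional assembly' pre-filter: score each read against a
# profile and keep only hits before any assembly step. A real pipeline
# would use HMM profile scores (e.g. HMMER bit scores) instead of this
# crude motif count.

def profile_score(read, motifs):
    """Stand-in for a profile match score: count motif occurrences."""
    return sum(read.count(m) for m in motifs)

def functional_filter(reads, motifs, min_score=1):
    """Keep reads that look relevant to the profile before assembly."""
    return [r for r in reads if profile_score(r, motifs) >= min_score]

amylase_motifs = ["GHG", "DAVIN"]  # hypothetical motif fragments
reads = ["TTGHGAA", "CCCCCC", "ADAVINK"]
kept = functional_filter(reads, amylase_motifs)
```

Filtering before assembly shrinks the read set to the functionally interesting fraction, which is what makes targeted metagenome assembly tractable.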
An Improved Database System for Program Assessment
ERIC Educational Resources Information Center
Haga, Wayne; Morris, Gerard; Morrell, Joseph S.
2011-01-01
This research paper presents a database management system for tracking course assessment data and reporting related outcomes for program assessment. It improves on a database system previously presented by the authors and in use for two years. The database system presented is specific to assessment for ABET (Accreditation Board for Engineering and…
Development of an Engineering Soil Database
2017-12-27
Data from source systems such as agricultural and geological soil classifications and soil parameters were incorporated, and Tier 3 data were converted into equivalent USCS classifications, including mappings from the U.S. Department of Agriculture (USDA) textural soil classification. Abbreviations used in the report include ERDC (U.S. Army Engineer Research and Development Center), ESDB (European Soil Database), and FAO (Food and Agriculture Organization of the United Nations).
Implementing Relational Operations in an Object-Oriented Database
1992-03-01
computer aided software engineering (CASE) and computer aided design (CAD) tools. There has been some research done in the area of combining relational and object-oriented database technology, including work with the Prograph database engine. In most business applications, the bulk of the data being stored and manipulated is simply textual or numeric data.
ERIC Educational Resources Information Center
Pace, Diana; Witucki, Laurie; Blumreich, Kathleen
2008-01-01
This paper describes the rationale and the step by step process for setting up a WISE (Women in Science and Engineering) learning community at one institution. Background information on challenges for women in science and engineering and the benefits of a learning community for female students in these major areas are described. Authors discuss…
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Bennett, Matthew
2006-01-01
The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
[Scientometrics and bibliometrics of biomedical engineering periodicals and papers].
Zhao, Ping; Xu, Ping; Li, Bingyan; Wang, Zhengrong
2003-09-01
This investigation was made to reveal the current status, research trends and research level of biomedical engineering in mainland China by means of scientometrics, and to assess the quality of four domestic publications by bibliometrics. We identified all articles in the four related publications by searching Chinese and foreign databases from 1997 to 2001. All articles collected or cited by these databases were retrieved and statistically analyzed to determine the relevant distributions, including databases, years, authors, institutions, subject headings and subheadings. The sources of supporting funds and the related articles were analyzed as well. The results showed that two journals were cited by two foreign databases and five Chinese databases simultaneously. The output of the Journal of Biomedical Engineering was the highest: the number of its original papers cited by EI and CA, and the total number of papers sponsored by funds, were higher than those of the others, but the number and annual percentage of biomedical articles cited by EI declined overall. A core group of domestic authors and institutions has formed in the field of biomedical engineering. Their research topics were mainly concentrated on subject headings including biocompatible materials, computer-assisted signal processing, electrocardiography, computer-assisted image processing, biomechanics, algorithms, electroencephalography, automatic data processing, mechanical stress, hemodynamics, mathematical computing, microcomputers and theoretical models. The main subheadings were concentrated on instrumentation, physiopathology, diagnosis, therapy, ultrasonography, physiology, analysis, surgery, pathology and methods.
NASA Astrophysics Data System (ADS)
Bowring, J. F.; McLean, N. M.; Walker, J. D.; Gehrels, G. E.; Rubin, K. H.; Dutton, A.; Bowring, S. A.; Rioux, M. E.
2015-12-01
The Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES.org) has worked collaboratively for the last decade with geochronologists from EARTHTIME and EarthChem to build cyberinfrastructure geared to ensuring transparency and reproducibility in geoscience workflows and is engaged in refining and extending that work to serve additional geochronology domains during the next decade. ET_Redux (formerly U-Pb_Redux) is a free open-source software system that provides end-to-end support for the analysis of U-Pb geochronological data. The system reduces raw mass spectrometer (TIMS and LA-ICPMS) data to U-Pb dates, allows users to interpret ages from these data, and then facilitates the seamless federation of the results from one or more labs into a community web-accessible database using standard and open techniques. This EarthChem database - GeoChron.org - depends on keyed references to the System for Earth Sample Registration (SESAR) database that stores metadata about registered samples. These keys are each a unique International Geo Sample Number (IGSN) assigned to a sample and to its derivatives. ET_Redux provides for interaction with this archive, allowing analysts to store, maintain, retrieve, and share their data and analytical results electronically with whomever they choose. This initiative has created an open standard for the data elements of a complete reduction and analysis of U-Pb data, and is currently working to complete the same for U-series geochronology. We have demonstrated the utility of interdisciplinary collaboration between computer scientists and geoscientists in achieving a working and useful system that provides transparency and supports reproducibility, allowing geochemists to focus on their specialties. The software engineering community also benefits by acquiring research opportunities to improve development process methodologies used in the design, implementation, and sustainability of domain-specific software.
Engaging Community College Students Using an Engineering Learning Community
NASA Astrophysics Data System (ADS)
Maccariella, James, Jr.
The study investigated whether community college engineering student success was tied to a learning community. Three separate data collection sources were utilized: surveys, interviews, and existing student records. Mann-Whitney tests were used to assess survey data, independent t-tests were used to examine pre-test data, and independent t-tests, analyses of covariance (ANCOVA), chi-square tests, and logistic regression were used to examine post-test data. The study found that students who participated in the Engineering TLC program experienced a significant improvement in grade point values for one of the three post-test courses studied. In addition, the analysis revealed that the odds of fall-to-spring retention were 5.02 times higher, and the odds of graduating or transferring were 4.9 times higher, for students who participated in the Engineering TLC program. However, when confounding variables (engineering major, age, Pell Grant participation, gender, ethnicity, and full-time/part-time status) were considered, the analyses revealed no significant relationship between participation in the Engineering TLC program and course success, fall-to-spring retention, or graduation/transfer; the confounding variables thus provided alternative explanations for the results. The Engineering TLC program was also found to be effective in providing mentoring opportunities, engagement and motivation opportunities, improved self-confidence, and a sense of community. It is believed the Engineering TLC program can serve as a model for other community college engineering programs by striving to build a supportive environment and providing guidance and encouragement throughout an engineering student's program of study.
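An odds ratio like the 5.02 reported above comes from a 2x2 contingency table of program participation against the outcome. The counts below are invented for illustration; only the cross-product formula reflects the method.

```python
# Odds ratio from a 2x2 table:
#             retained   not retained
#   TLC          a            b
#   non-TLC      c            d
# OR = (a/b) / (c/d) = a*d / (b*c)

def odds_ratio(a, b, c, d):
    """Odds of the outcome in the treatment group relative to the control group."""
    return (a * d) / (b * c)

# Toy counts (NOT the study's data): 45 of 50 TLC students retained,
# 60 of 100 non-TLC students retained.
print(odds_ratio(45, 5, 60, 40))  # -> 6.0
```

Logistic regression generalizes this: the exponentiated coefficient on the participation indicator is the same odds ratio, adjusted for the confounders listed in the abstract.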
First Look--The Aerospace Database.
ERIC Educational Resources Information Center
Kavanagh, Stephen K.; Miller, Jay G.
1986-01-01
Presents overview prepared by producer of database newly available in 1985 that covers 10 subject categories: engineering, geosciences, chemistry and materials, space sciences, aeronautics, astronautics, mathematical and computer sciences, physics, social sciences, and life sciences. Database development, unique features, document delivery, sample…
Tchoua, Roselyne B; Qin, Jian; Audus, Debra J; Chard, Kyle; Foster, Ian T; de Pablo, Juan
2016-09-13
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature; yet, while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether, and to what extent, the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction, while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semi-automated creation of a thermodynamic property database.
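The semi-automated step described above can be sketched as pattern-based candidate extraction followed by human confirmation. The sentence, the property (a Flory-Huggins chi parameter), and the regex are illustrative assumptions, not the course's actual pipeline.

```python
import re

# Minimal sketch of semi-automated property extraction: a regex proposes
# candidate values from publication text; a student curator then accepts or
# rejects each hit before it enters the database.

PATTERN = re.compile(r"chi\s*=\s*([0-9]*\.?[0-9]+)")

def extract_chi(text):
    """Return candidate chi values found in a text snippet."""
    return [float(m) for m in PATTERN.findall(text)]

snippet = "We measured chi = 0.38 at 150 C, versus chi = 0.41 earlier."
print(extract_chi(snippet))  # -> [0.38, 0.41]
```

The division of labor matters: the machine handles recall over thousands of papers, while the student supplies the discernment of meaning and validity that the abstract emphasizes.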
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature; yet, while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether, and to what extent, the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semiautomated creation of a thermodynamic property database.
ERIC Educational Resources Information Center
Blakley, Jacquelyn
2016-01-01
This study examined the experiences of African American women in engineering technology programs in community colleges. There is a lack of representation of African American women in engineering technology programs throughout higher education, especially in community/technical colleges. There is also lack of representation of African American…
NASA Astrophysics Data System (ADS)
Tellman, B.; Sullivan, J.; Kettner, A.; Brakenridge, G. R.; Slayback, D. A.; Kuhn, C.; Doyle, C.
2016-12-01
There is an increasing need to understand flood vulnerability as the societal and economic effects of flooding increases. Risk models from insurance companies and flood models from hydrologists must be calibrated based on flood observations in order to make future predictions that can improve planning and help societies reduce future disasters. Specifically, to improve these models both traditional methods of flood prediction from physically based models as well as data-driven techniques, such as machine learning, require spatial flood observation to validate model outputs and quantify uncertainty. A key dataset that is missing for flood model validation is a global historical geo-database of flood event extents. Currently, the most advanced database of historical flood extent is hosted and maintained at the Dartmouth Flood Observatory (DFO) that has catalogued 4320 floods (1985-2015) but has only mapped 5% of these floods. We are addressing this data gap by mapping the inventory of floods in the DFO database to create a first-of- its-kind, comprehensive, global and historical geospatial database of flood events. To do so, we combine water detection algorithms on MODIS and Landsat 5,7 and 8 imagery in Google Earth Engine to map discrete flood events. The created database will be available in the Earth Engine Catalogue for download by country, region, or time period. This dataset can be leveraged for new data-driven hydrologic modeling using machine learning algorithms in Earth Engine's highly parallelized computing environment, and we will show examples for New York and Senegal.
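The per-pixel water detection underlying such flood maps is commonly an index threshold; one widely used choice is the normalized difference water index (NDWI) over green and near-infrared reflectance. The reflectance values and the zero threshold below are illustrative; the actual Earth Engine algorithms combine several bands and sensors.

```python
# Sketch of index-based water detection, the building block of flood-extent
# mapping: NDWI = (green - NIR) / (green + NIR); water pixels reflect more
# green than NIR, so NDWI above a threshold flags water.

def ndwi(green, nir):
    return (green - nir) / (green + nir)

def is_water(green, nir, threshold=0.0):
    """Classify one pixel from its green and NIR surface reflectance."""
    return ndwi(green, nir) > threshold

pixels = [(0.30, 0.10), (0.20, 0.40)]  # (green, NIR) reflectance pairs
print([is_water(g, n) for g, n in pixels])  # -> [True, False]
```

Run over an image collection in a parallelized environment like Earth Engine, the same per-pixel test yields discrete flood-extent layers that can be differenced against a permanent-water baseline.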
Dynamic Flood Vulnerability Mapping with Google Earth Engine
NASA Astrophysics Data System (ADS)
Tellman, B.; Kuhn, C.; Max, S. A.; Sullivan, J.
2015-12-01
Satellites capture the rate and character of environmental change from local to global levels, yet integrating these changes into flood exposure models can be cost or time prohibitive. We explore an approach to global flood modeling by leveraging satellite data with computing power in Google Earth Engine to dynamically map flood hazards. Our research harnesses satellite imagery in two main ways: first to generate a globally consistent flood inundation layer and second to dynamically model flood vulnerability. Accurate and relevant hazard maps rely on high quality observation data. Advances in publicly available spatial, spectral, and radar data together with cloud computing allow us to improve existing efforts to develop a comprehensive flood extent database to support model training and calibration. This talk will demonstrate the classification results of algorithms developed in Earth Engine designed to detect flood events by combining observations from MODIS, Landsat 8, and Sentinel-1. Our method to derive flood footprints increases the number, resolution, and precision of spatial observations for flood events both in the US, recorded in the NCDC (National Climatic Data Center) storm events database, and globally, as recorded events from the Colorado Flood Observatory database. This improved dataset can then be used to train machine learning models that relate spatial temporal flood observations to satellite derived spatial temporal predictor variables such as precipitation, antecedent soil moisture, and impervious surface. This modeling approach allows us to rapidly update models with each new flood observation, providing near real time vulnerability maps. We will share the water detection algorithms used with each satellite and discuss flood detection results with examples from Bihar, India and the state of New York. We will also demonstrate how these flood observations are used to train machine learning models and estimate flood exposure. 
The final stage of our comprehensive approach to flood vulnerability couples inundation extent with social data to determine which flood-exposed communities have the greatest propensity for loss. Specifically, we link model outputs to census-derived social vulnerability estimates (for India and the US, respectively) to predict how many people are at risk.
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background In the big data era, biomedical research continues to generate large amounts of data, and the generated information is often stored in databases and made publicly available. Although combining data from multiple databases should accelerate further studies, the number of life-science databases is now too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed from retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
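Faceted navigation of the kind Sagace offers can be sketched as counting facet values over the current result set and filtering on a selected value. The records and field names below are invented for illustration, not Sagace's actual schema.

```python
from collections import Counter

# Toy result set: each hit carries facet fields such as database type
# and organism (hypothetical fields, for illustration only).
RESULTS = [
    {"db": "expression", "organism": "mouse"},
    {"db": "expression", "organism": "human"},
    {"db": "cell line",  "organism": "mouse"},
]

def facet_counts(results, field):
    """The counts shown beside each facet value in a sidebar."""
    return Counter(r[field] for r in results)

def refine(results, field, value):
    """Narrow the result set to hits matching the selected facet value."""
    return [r for r in results if r[field] == value]

print(facet_counts(RESULTS, "organism"))          # mouse and human counts
print(len(refine(RESULTS, "db", "expression")))   # -> 2
```

Each refinement recomputes the counts over the narrowed set, which is what lets users drill down through 300+ databases without knowing their names in advance.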
Automating U-Pb IDTIMS data reduction and reporting: Cyberinfrastructure meets geochronology
NASA Astrophysics Data System (ADS)
Bowring, J. F.; McLean, N.; Walker, J. D.; Ash, J. M.
2009-12-01
We demonstrate the efficacy of an interdisciplinary effort between software engineers and geochemists to produce working cyberinfrastructure for geochronology. This collaboration between CIRDLES, EARTHTIME and EarthChem has produced the software programs Tripoli and U-Pb_Redux as the cyber-backbone for the ID-TIMS community. This initiative incorporates shared isotopic tracers, data-reduction algorithms and the archiving and retrieval of data and results. The resulting system facilitates detailed inter-laboratory comparison and a new generation of cooperative science. The resolving power of geochronological data in the earth sciences is dependent on the precision and accuracy of many isotopic measurements and corrections. Recent advances in U-Pb geochronology have reinvigorated its application to problems such as precise timescale calibration, processes of crustal evolution, and early solar system dynamics. This project provides a heretofore missing common data reduction protocol, thus promoting the interpretation of precise geochronology and enabling inter-laboratory comparison. U-Pb_Redux is an open-source software program that provides end-to-end support for the analysis of uranium-lead geochronological data. The system reduces raw mass spectrometer data to U-Pb dates, allows users to interpret ages from these data, and then provides for the seamless federation of the results, coming from many labs, into a community web-accessible database using standard and open techniques. This EarthChem GeoChron database depends also on keyed references to the SESAR sample database. U-Pb_Redux currently provides interactive concordia and weighted mean plots and uncertainty contribution visualizations; it produces publication-quality concordia and weighted mean plots and customizable data tables. 
This initiative has achieved the goal of standardizing the data elements of a complete reduction and analysis of uranium-lead data, which are expressed using extensible markup language schema definition (XSD) artifacts. U-Pb_Redux leverages the freeware program Tripoli, which imports raw mass spectrometer data files and supports interactive review and archiving of isotopic data. Tripoli facilitates the visualization of temporal trends and scatter during measurement, provides statistically rigorous filtering of data, and supports oxide and fractionation corrections. The Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES) collaboratively integrates domain-specific software engineering with the efforts of EARTHTIME and EarthChem. The EARTHTIME initiative pursues consensus-based approaches to geochemical data reduction, and the EarthChem initiative pursues the creation of data repositories for all geochemical data. CIRDLES develops software and systems for geochronology. This collaboration benefits the earth sciences by enabling geochemists to focus on their specialties using robust software that produces reliable results. This collaboration benefits software engineering by providing research opportunities to improve process methodologies used in the design and implementation of domain-specific solutions.
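At the core of any U-Pb reduction of the kind U-Pb_Redux standardizes is the decay equation, which converts a measured radiogenic 206Pb*/238U ratio into a date. The ratio below is a hypothetical measurement; the decay constant is the standard Jaffey et al. (1971) value.

```python
import math

# 206Pb*/238U date from the radioactive decay law:
#   t = ln(1 + 206Pb*/238U) / lambda_238
LAMBDA_238 = 1.55125e-10  # decay constant of 238U, 1/yr (Jaffey et al., 1971)

def pb206_u238_date(ratio):
    """Return a 206Pb*/238U date in years from the radiogenic ratio."""
    return math.log(1.0 + ratio) / LAMBDA_238

age_ma = pb206_u238_date(0.1) / 1e6  # hypothetical ratio of 0.1
print(round(age_ma, 1))  # -> 614.4 (Ma)
```

The hard part the software automates is not this equation but the chain of corrections (blank, tracer, fractionation) and the propagation of their uncertainties into the final date.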
NASA Astrophysics Data System (ADS)
Williams, J. W.; Ashworth, A. C.; Betancourt, J. L.; Bills, B.; Blois, J.; Booth, R.; Buckland, P.; Charles, D.; Curry, B. B.; Goring, S. J.; Davis, E.; Grimm, E. C.; Graham, R. W.; Smith, A. J.
2015-12-01
Community-supported data repositories (CSDRs) in paleoecology and paleoclimatology have a decades-long tradition and serve multiple critical scientific needs. CSDRs facilitate synthetic large-scale scientific research by providing open-access and curated data that employ community-supported metadata and data standards. CSDRs serve as a 'middle tail' or boundary organization between information scientists and the long-tail community of individual geoscientists collecting and analyzing paleoecological data. Over the past decades, a distributed network of CSDRs has emerged, each serving a particular suite of data and research communities, e.g. Neotoma Paleoecology Database, Paleobiology Database, International Tree Ring Database, NOAA NCEI for Paleoclimatology, Morphobank, iDigPaleo, and Integrated Earth Data Alliance. Recently, these groups have organized into a common Paleobiology Data Consortium dedicated to improving interoperability and sharing best practices and protocols. The Neotoma Paleoecology Database offers one example of an active and growing CSDR, designed to facilitate research into ecological and evolutionary dynamics during recent past global change. Neotoma combines a centralized database structure with distributed scientific governance via multiple virtual constituent data working groups. The Neotoma data model is flexible and can accommodate a variety of paleoecological proxies from many depositional contexts. Data input into Neotoma is done by trained Data Stewards, drawn from their communities. Neotoma data can be searched, viewed, and returned to users through multiple interfaces, including the interactive Neotoma Explorer map interface, RESTful Application Programming Interfaces (APIs), the neotoma R package, and the Tilia stratigraphic software. Neotoma is governed by geoscientists and provides community engagement through training workshops for data contributors, stewards, and users. 
Neotoma is engaged in the Paleobiological Data Consortium and other efforts to improve interoperability among cyberinfrastructure in the paleogeosciences.
An Overview of the Literature: Research in P-12 Engineering Education
ERIC Educational Resources Information Center
Mendoza Díaz, Noemi V.; Cox, Monica F.
2012-01-01
This paper presents an extensive overview of preschool to 12th grade (P-12) engineering education literature published between 2001 and 2011. Searches were conducted through education and engineering library engines and databases as well as queries in established publications in engineering education. More than 50 publications were found,…
Towards BioDBcore: a community-defined information specification for biological databases
Gaudet, Pascale; Bairoch, Amos; Field, Dawn; Sansone, Susanna-Assunta; Taylor, Chris; Attwood, Teresa K.; Bateman, Alex; Blake, Judith A.; Bult, Carol J.; Cherry, J. Michael; Chisholm, Rex L.; Cochrane, Guy; Cook, Charles E.; Eppig, Janan T.; Galperin, Michael Y.; Gentleman, Robert; Goble, Carole A.; Gojobori, Takashi; Hancock, John M.; Howe, Douglas G.; Imanishi, Tadashi; Kelso, Janet; Landsman, David; Lewis, Suzanna E.; Mizrachi, Ilene Karsch; Orchard, Sandra; Ouellette, B. F. Francis; Ranganathan, Shoba; Richardson, Lorna; Rocca-Serra, Philippe; Schofield, Paul N.; Smedley, Damian; Southan, Christopher; Tan, Tin Wee; Tatusova, Tatiana; Whetzel, Patricia L.; White, Owen; Yamasaki, Chisato
2011-01-01
The present article proposes the adoption of a community-defined, uniform, generic description of the core attributes of biological databases, BioDBCore. The goals of these attributes are to provide a general overview of the database landscape, to encourage consistency and interoperability between resources and to promote the use of semantic and syntactic standards. BioDBCore will make it easier for users to evaluate the scope and relevance of available resources. This new resource will increase the collective impact of the information present in biological databases. PMID:21097465
Towards BioDBcore: a community-defined information specification for biological databases
Gaudet, Pascale; Bairoch, Amos; Field, Dawn; Sansone, Susanna-Assunta; Taylor, Chris; Attwood, Teresa K.; Bateman, Alex; Blake, Judith A.; Bult, Carol J.; Cherry, J. Michael; Chisholm, Rex L.; Cochrane, Guy; Cook, Charles E.; Eppig, Janan T.; Galperin, Michael Y.; Gentleman, Robert; Goble, Carole A.; Gojobori, Takashi; Hancock, John M.; Howe, Douglas G.; Imanishi, Tadashi; Kelso, Janet; Landsman, David; Lewis, Suzanna E.; Karsch Mizrachi, Ilene; Orchard, Sandra; Ouellette, B.F. Francis; Ranganathan, Shoba; Richardson, Lorna; Rocca-Serra, Philippe; Schofield, Paul N.; Smedley, Damian; Southan, Christopher; Tan, Tin W.; Tatusova, Tatiana; Whetzel, Patricia L.; White, Owen; Yamasaki, Chisato
2011-01-01
The present article proposes the adoption of a community-defined, uniform, generic description of the core attributes of biological databases, BioDBCore. The goals of these attributes are to provide a general overview of the database landscape, to encourage consistency and interoperability between resources; and to promote the use of semantic and syntactic standards. BioDBCore will make it easier for users to evaluate the scope and relevance of available resources. This new resource will increase the collective impact of the information present in biological databases. PMID:21205783
ERIC Educational Resources Information Center
Laugerman, Marcia; Shelley, Mack; Rover, Diane; Mickelson, Steve
2015-01-01
This study uses a unique synthesized set of data for community college students transferring to engineering by combining several cohorts of longitudinal data along with transcript-level data, from both the Community College and the University, to measure success rates in engineering. The success rates are calculated by developing Kaplan-Meier…
Organisms as cooperative ecosystem engineers in intertidal flats
NASA Astrophysics Data System (ADS)
Passarelli, Claire; Olivier, Frédéric; Paterson, David M.; Meziane, Tarik; Hubas, Cédric
2014-09-01
The importance of facilitative interactions and organismal ecosystem engineering for establishing the structure of communities is increasingly being recognised for many different ecosystems. For example, soft-bottom tidal flats host a wide range of ecosystem engineers, probably because the harsh physico-chemical environmental conditions render these species of particular importance for community structure and function. These environments are therefore interesting when focusing on how ecosystem engineers interact and the consequences of these interactions on community dynamics. In this review, we initially detail the influence on benthic systems of two kinds of ecosystem engineers that are particularly common in tidal flats. Firstly, we examine species providing biogenic structures, which are often the only source of habitat complexity in these environments. Secondly, we focus on species whose activities alter sediment stability, which is a crucial feature structuring the dynamics of communities in tidal flats. The impacts of these engineers on both environment and communities were assessed, and in addition the interactions between ecosystem engineers were examined. Habitat cascades occur when one engineer favours the development of another, which in turn creates or modifies and improves habitat for other species. Non-hierarchical interactions have often been shown to display non-additive effects, so that the effects of the association cannot be predicted from the effects of individual organisms. Here we propose the term “cooperative ecosystem engineering” for cases in which two species interact in a way that enhances habitat suitability as a result of a combined engineering effect. Finally, we conclude by describing the potential threats for ecosystem engineers in intertidal areas, potential effects on their interactions and their influence on communities and ecosystem function.
Canto: an online tool for community literature curation.
Rutherford, Kim M; Harris, Midori A; Lock, Antonia; Oliver, Stephen G; Wood, Valerie
2014-06-15
Detailed curation of published molecular data is essential for any model organism database. Community curation enables researchers to contribute data from their papers directly to databases, supplementing the activity of professional curators and improving coverage of a growing body of literature. We have developed Canto, a web-based tool that provides an intuitive curation interface for both curators and researchers, to support community curation in the fission yeast database, PomBase. Canto supports curation using OBO ontologies, and can be easily configured for use with any species. Canto code and documentation are available under an Open Source license from http://curation.pombase.org/. Canto is a component of the Generic Model Organism Database (GMOD) project (http://www.gmod.org/). © The Author 2014. Published by Oxford University Press.
Integrating ecological and engineering concepts of resilience in microbial communities
Song, Hyun -Seob; Renslow, Ryan S.; Fredrickson, Jim K.; ...
2015-12-01
We note that many definitions of resilience have been proffered for natural and engineered ecosystems, but a conceptual consensus on resilience in microbial communities is still lacking. Here, we argue that the disconnect largely results from the wide variance in microbial community complexity, which ranges from simple synthetic consortia to complex natural communities, and from the divergence between the practical outcomes typically emphasized by ecologists and engineers. Viewing microbial communities as elasto-plastic systems, we argue that this gap between the engineering and ecological definitions of resilience stems from their respective emphases on elastic and plastic deformation. We propose that the two concepts may be fundamentally united around the resilience of function, rather than state, in microbial communities, and around the regularity in the relationship between environmental variation and a community's functional response. Furthermore, we posit that functional resilience is an intrinsic property of microbial communities, suggesting that state changes in response to environmental variation may be a key mechanism driving resilience in microbial communities.
Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps
Calvo, Iñaki
2014-01-01
The authors report on a study using the concept mapping technique in computer engineering education for learning introductory theoretical database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map was also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 academic year are described. This study contributes to the field of concept mapping, as these kinds of cognitive tools have proved valid for supporting learning in computer engineering education. It also contributes to the field of computer engineering education by providing a technique that can be incorporated for several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above-mentioned objectives. PMID:25538957
Sailors, R. Matthew
1997-01-01
The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.
Urban Climate Change Resilience as a Teaching Tool for a STEM Summer Bridge Program
NASA Astrophysics Data System (ADS)
Rosenzweig, B.; Vorosmarty, C. J.; Socha, A.; Corsi, F.
2015-12-01
Community colleges have been identified as important gateways for the United States' scientific workforce development. However, students who begin their higher education at community colleges often face barriers to developing the skills needed for higher-level STEM careers, including basic training in mathematics, programming, analytical problem solving, and cross-disciplinary communication. As part of the Business Higher Education Forum's Undergraduate STEM Interventions in Industry (USI2) Consortium, we are developing a summer bridge program for students in STEM fields transferring from community college to senior (4-year) colleges at the City University of New York. Our scientific research on New York City climate change resilience will serve as the foundation for the bridge program curriculum. Students will be introduced to systems thinking and improve their analytical skills through guided problem-solving exercises using the New York City Climate Change Resilience Indicators Database currently being developed by the CUNY Environmental Crossroads Initiative. Students will also be supported in conducting an introductory, independent research project using the database. The interdisciplinary nature of climate change resilience assessment will allow students to explore topics related to their STEM field of interest (i.e. engineering, chemistry, and health science), while working collaboratively across disciplines with their peers. We hope that students that participate in the bridge program will continue with their research projects through their tenure at senior colleges, further enhancing their academic training, while actively contributing to the study of urban climate change resilience. The effectiveness of this approach will be independently evaluated by NORC at the University of Chicago, as well as through internal surveying and long-term tracking of participating student cohorts.
Liao, Wenta; Draper, William M
2013-02-21
The mass-to-structure or MTS Search Engine is an Access 2010 database containing theoretical molecular mass information for 19,438 compounds assembled from common sources such as the Merck Index, pesticide and pharmaceutical compilations, and chemical catalogues. This database, which contains no experimental mass spectral data, was developed as an aid to the identification of compounds in atmospheric pressure ionization (API)-LC-MS. This paper describes a powerful upgrade to this database, a fully integrated utility for filtering or ranking candidates based on isotope ratios and patterns. The new MTS Search Engine is applied here to the identification of volatile and semivolatile compounds including pesticides, nitrosamines and other pollutants. Methane and isobutane chemical ionization (CI) GC-MS spectra were obtained from unit mass resolution mass spectrometers to determine MH(+) masses and isotope ratios. Isotopes were measured accurately, with errors of <4% and <6%, respectively, for A + 1 and A + 2 peaks. Deconvolution of interfering isotope clusters (e.g., M(+) and [M - H](+)) was required for accurate determination of the A + 1 isotope in halogenated compounds. Integrating the isotope data greatly improved the speed and accuracy of the database identifications. The database accurately identified unknowns from isobutane CI spectra in 100% of cases where as many as 40 candidates satisfied the mass tolerance. The paper describes the development and basic operation of the new MTS Search Engine and details performance testing with over 50 model compounds.
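The filter-then-rank idea described in this abstract can be sketched in a few lines. The candidate compounds, masses, and isotope ratios below are invented for illustration and are not drawn from the actual MTS database; the tolerances are likewise placeholders.

```python
# Hypothetical sketch of mass-plus-isotope-ratio candidate filtering, in the
# spirit of the MTS Search Engine described above. Names, masses, and A+1
# ratios are illustrative only.

# Each candidate: (name, theoretical MH+ mass, theoretical A+1 ratio in %)
CANDIDATES = [
    ("compound_a", 202.078, 11.2),
    ("compound_b", 202.081, 13.0),
    ("compound_c", 202.310, 11.5),
]

def rank_candidates(measured_mass, measured_a1_ratio,
                    mass_tol=0.25, ratio_tol_pct=4.0):
    """Keep candidates within the mass tolerance, then rank them by how
    closely their theoretical A+1 isotope ratio matches the measurement."""
    hits = [c for c in CANDIDATES if abs(c[1] - measured_mass) <= mass_tol]
    hits = [c for c in hits if abs(c[2] - measured_a1_ratio) <= ratio_tol_pct]
    return sorted(hits, key=lambda c: abs(c[2] - measured_a1_ratio))

print(rank_candidates(202.08, 11.0))
```

The point of ranking by isotope agreement rather than mass alone is exactly the situation the abstract describes: when dozens of candidates satisfy a unit-mass tolerance, the isotope pattern becomes the discriminating signal.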
Jet aircraft engine emissions database development: 1992 military, charter, and nonscheduled traffic
NASA Technical Reports Server (NTRS)
Metwally, Munir
1995-01-01
Studies relating to an environmental emissions database for military, charter, and non-scheduled traffic for the year 1992 were conducted by McDonnell Douglas Aerospace Transport Aircraft. The report also includes a comparison with a previous emissions database for the year 1990. Discussions of the methodology used in formulating these databases are provided.
Database Resources of the BIG Data Center in 2018
Xu, Xingjian; Hao, Lili; Zhu, Junwei; Tang, Bixia; Zhou, Qing; Song, Fuhai; Chen, Tingting; Zhang, Sisi; Dong, Lili; Lan, Li; Wang, Yanqing; Sang, Jian; Hao, Lili; Liang, Fang; Cao, Jiabao; Liu, Fang; Liu, Lin; Wang, Fan; Ma, Yingke; Xu, Xingjian; Zhang, Lijuan; Chen, Meili; Tian, Dongmei; Li, Cuiping; Dong, Lili; Du, Zhenglin; Yuan, Na; Zeng, Jingyao; Zhang, Zhewen; Wang, Jinyue; Shi, Shuo; Zhang, Yadong; Pan, Mengyu; Tang, Bixia; Zou, Dong; Song, Shuhui; Sang, Jian; Xia, Lin; Wang, Zhennan; Li, Man; Cao, Jiabao; Niu, Guangyi; Zhang, Yang; Sheng, Xin; Lu, Mingming; Wang, Qi; Xiao, Jingfa; Zou, Dong; Wang, Fan; Hao, Lili; Liang, Fang; Li, Mengwei; Sun, Shixiang; Zou, Dong; Li, Rujiao; Yu, Chunlei; Wang, Guangyu; Sang, Jian; Liu, Lin; Li, Mengwei; Li, Man; Niu, Guangyi; Cao, Jiabao; Sun, Shixiang; Xia, Lin; Yin, Hongyan; Zou, Dong; Xu, Xingjian; Ma, Lina; Chen, Huanxin; Sun, Yubin; Yu, Lei; Zhai, Shuang; Sun, Mingyuan; Zhang, Zhang; Zhao, Wenming; Xiao, Jingfa; Bao, Yiming; Song, Shuhui; Hao, Lili; Li, Rujiao; Ma, Lina; Sang, Jian; Wang, Yanqing; Tang, Bixia; Zou, Dong; Wang, Fan
2018-01-01
Abstract The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. PMID:29036542
Development of the Community Health Improvement Navigator Database of Interventions.
Roy, Brita; Stanojevich, Joel; Stange, Paul; Jiwani, Nafisa; King, Raymond; Koo, Denise
2016-02-26
With the passage of the Patient Protection and Affordable Care Act, the requirements for hospitals to achieve tax-exempt status include performing a triennial community health needs assessment and developing a plan to address identified needs. To address community health needs, multisector collaborative efforts to improve both health care and non-health care determinants of health outcomes have been the most effective and sustainable. In 2015, CDC released the Community Health Improvement Navigator to facilitate the development of these efforts. This report describes the development of the database of interventions included in the Community Health Improvement Navigator. The database of interventions allows the user to easily search for multisector, collaborative, evidence-based interventions to address the underlying causes of the greatest morbidity and mortality in the United States: tobacco use and exposure, physical inactivity, unhealthy diet, high cholesterol, high blood pressure, diabetes, and obesity.
Development of the Community Health Improvement Navigator Database of Interventions
Roy, Brita; Stanojevich, Joel; Stange, Paul; Jiwani, Nafisa; King, Raymond; Koo, Denise
2016-01-01
Summary With the passage of the Patient Protection and Affordable Care Act, the requirements for hospitals to achieve tax-exempt status include performing a triennial community health needs assessment and developing a plan to address identified needs. To address community health needs, multisector collaborative efforts to improve both health care and non–health care determinants of health outcomes have been the most effective and sustainable. In 2015, CDC released the Community Health Improvement Navigator to facilitate the development of these efforts. This report describes the development of the database of interventions included in the Community Health Improvement Navigator. The database of interventions allows the user to easily search for multisector, collaborative, evidence-based interventions to address the underlying causes of the greatest morbidity and mortality in the United States: tobacco use and exposure, physical inactivity, unhealthy diet, high cholesterol, high blood pressure, diabetes, and obesity. PMID:26917110
2011-09-06
Presentation Outline: A) Review of soil model governing equations; B) Development of pedo-transfer functions (terrain database to engineering properties); C) ... lateral earth pressure. UNCLASSIFIED. B) Development of pedo-transfer functions: engineering parameters needed by the soil model - compression index - rebound ... inches, RCI for fine-grained soils, CI for coarse-grained soils. Pedo-transfer function: need to transfer the existing terrain database
What Counts as Outcomes? Community Perspectives of an Engineering Partnership
ERIC Educational Resources Information Center
Reynolds, Nora Pillard
2014-01-01
This study explored the perspectives of community organization representatives and community residents about a partnership between a College of Engineering and a rural municipality in Nicaragua. The intended community outcomes described by university participants during interviews corresponded with tangible project outcomes, such as access to…
Requirements, Verification, and Compliance (RVC) Database Tool
NASA Technical Reports Server (NTRS)
Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale
2001-01-01
This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".
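The core of an RVC-style tool is a relational store linking requirements to their verification records so that compliance can be queried rather than maintained in documents. The sketch below is a hypothetical miniature of that pattern using Python's built-in `sqlite3`; the schema, requirement IDs, and rows are invented, not the actual ISWE database.

```python
# Minimal sketch of a requirements/verification/compliance store, assuming a
# two-table schema (requirement, verification). All data is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE requirement (id TEXT PRIMARY KEY, text TEXT);
CREATE TABLE verification (req_id TEXT, method TEXT, status TEXT);
""")
con.executemany("INSERT INTO requirement VALUES (?, ?)",
                [("R-001", "Weld chamber shall hold vacuum"),
                 ("R-002", "Power draw shall not exceed 2 kW")])
con.executemany("INSERT INTO verification VALUES (?, ?, ?)",
                [("R-001", "test", "complete"),
                 ("R-002", "analysis", "open")])

# Compliance report: every requirement with its verification method and status.
rows = con.execute("""
    SELECT r.id, v.method, v.status
    FROM requirement r LEFT JOIN verification v ON v.req_id = r.id
    ORDER BY r.id
""").fetchall()
for row in rows:
    print(row)
```

A single query like this replaces the manual cross-referencing of specification and verification-plan documents that the abstract describes automating.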
78 FR 15110 - Aviation Rulemaking Advisory Committee; Engine Bird Ingestion Requirements-New Task
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
...: During the bird-ingestion rulemaking database (BRDB) working group's reevaluation of the current engine... engine core ingestion. If the BRDB working group's reevaluation determines that such requirements are... Task ARAC accepted the task and will establish the Engine Harmonization Working Group (EHWG), under the...
Custom Search Engines: Tools & Tips
ERIC Educational Resources Information Center
Notess, Greg R.
2008-01-01
Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
NASA Technical Reports Server (NTRS)
Chan, David T.; Pinier, Jeremy T.; Wilcox, Floyd J., Jr.; Dalle, Derek J.; Rogers, Stuart E.; Gomez, Reynaldo J.
2016-01-01
The development of the aerodynamic database for the Space Launch System (SLS) booster separation environment has presented many challenges because of the complex physics of the flow around three independent bodies due to proximity effects and jet interactions from the booster separation motors and the core stage engines. This aerodynamic environment is difficult to simulate in a wind tunnel experiment and also difficult to simulate with computational fluid dynamics. The database is further complicated by the high dimensionality of the independent variable space, which includes the orientation of the core stage, the relative positions and orientations of the solid rocket boosters, and the thrust levels of the various engines. Moreover, the clearance between the core stage and the boosters during the separation event is sensitive to the aerodynamic uncertainties of the database. This paper will present the development process for Version 3 of the SLS booster separation aerodynamic database and the statistics-based uncertainty quantification process for the database.
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
Using Internet search engines to estimate word frequency.
Blair, Irene V; Urland, Geoffrey R; Ma, Jennifer E
2002-05-01
The present research investigated Internet search engines as a rapid, cost-effective alternative for estimating word frequencies. Frequency estimates for 382 words were obtained and compared across four methods: (1) Internet search engines, (2) the Kucera and Francis (1967) analysis of a traditional linguistic corpus, (3) the CELEX English linguistic database (Baayen, Piepenbrock, & Gulikers, 1995), and (4) participant ratings of familiarity. The results showed that Internet search engines produced frequency estimates that were highly consistent with those reported by Kucera and Francis and those calculated from CELEX, highly consistent across search engines, and very reliable over a 6-month period of time. Additional results suggested that Internet search engines are an excellent option when traditional word frequency analyses do not contain the necessary data (e.g., estimates for forenames and slang). In contrast, participants' familiarity judgments did not correspond well with the more objective estimates of word frequency. Researchers are advised to use search engines with large databases (e.g., AltaVista) to ensure the greatest representativeness of the frequency estimates.
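The consistency comparison reported in this study amounts to correlating frequency estimates from different sources, typically on a log scale because word frequencies span orders of magnitude. The sketch below shows that computation with invented counts for five hypothetical words; the study itself used 382 real words and published corpora.

```python
# Illustrative consistency check between two frequency sources: Pearson
# correlation of log10-transformed counts. All counts are invented.
import math

search_engine_hits = [120000, 45000, 900000, 3200, 77000]
corpus_counts      = [95,     30,    700,    4,    60]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson([math.log10(x) for x in search_engine_hits],
            [math.log10(y) for y in corpus_counts])
print(round(r, 3))
```

A high correlation under this kind of transform is what "highly consistent" estimates means operationally in the abstract above.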
Ortseifen, Vera; Stolze, Yvonne; Maus, Irena; Sczyrba, Alexander; Bremges, Andreas; Albaum, Stefan P; Jaenicke, Sebastian; Fracowiak, Jochen; Pühler, Alfred; Schlüter, Andreas
2016-08-10
To study the metaproteome of a biogas-producing microbial community, fermentation samples were taken from an agricultural biogas plant for microbial cell and protein extraction and corresponding metagenome analyses. Based on metagenome sequence data, taxonomic community profiling was performed to elucidate the composition of bacterial and archaeal sub-communities. The community's cytosolic metaproteome was represented in a 2D-PAGE approach. Metaproteome databases for protein identification were compiled based on the assembled metagenome sequence dataset for the biogas plant analyzed and non-corresponding biogas metagenomes. Protein identification results revealed that the corresponding biogas protein database facilitated the highest identification rate followed by other biogas-specific databases, whereas common public databases yielded insufficient identification rates. Proteins of the biogas microbiome identified as highly abundant were assigned to the pathways involved in methanogenesis, transport and carbon metabolism. Moreover, the integrated metagenome/-proteome approach enabled the examination of genetic-context information for genes encoding identified proteins by studying neighboring genes on the corresponding contig. Exemplarily, this approach led to the identification of a Methanoculleus sp. contig encoding 16 methanogenesis-related gene products, three of which were also detected as abundant proteins within the community's metaproteome. Thus, metagenome contigs provide additional information on the genetic environment of identified abundant proteins.
ERIC Educational Resources Information Center
Ricks, Kenneth G.; Richardson, James A.; Stern, Harold P.; Taylor, Robert P.; Taylor, Ryan A.
2014-01-01
Retention and graduation rates for engineering disciplines are significantly lower than desired, and research literature offers many possible causes. Engineering learning communities provide the opportunity to study relationships among specific causes and to develop and evaluate activities designed to lessen their impact. This paper details an…
Of Feedback, Noise, and Clarion Calls: Preserving the Quality of Engineering Education.
ERIC Educational Resources Information Center
David, Edward E., Jr.
Although times seemed ripe for far-reaching initiatives to safeguard the quality of engineering education, current political noise about the issue threatens to drown out the engineering community's message. However, the engineering community's theme should be that economic growth in a modern economy, that industrial policy, is based first and…
An integrated knowledge system for wind tunnel testing - Project Engineers' Intelligent Assistant
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Shi, George Z.; Hoyt, W. A.; Steinle, Frank W., Jr.
1993-01-01
The Project Engineers' Intelligent Assistant (PEIA) is an integrated knowledge system developed using artificial intelligence technology, including hypertext, expert systems, and dynamic user interfaces. This system integrates documents, engineering codes, databases, and knowledge from domain experts into an enriched hypermedia environment and was designed to assist project engineers in planning and conducting wind tunnel tests. PEIA is a modular system which consists of an intelligent user-interface, seven modules and an integrated tool facility. Hypermedia technology is discussed and the seven PEIA modules are described. System maintenance and updating is very easy due to the modular structure and the integrated tool facility provides user access to commercial software shells for documentation, reporting, or database updating. PEIA is expected to provide project engineers with technical information, increase efficiency and productivity, and provide a realistic tool for personnel training.
Scale-Up of GRCop: From Laboratory to Rocket Engines
NASA Technical Reports Server (NTRS)
Ellis, David L.
2016-01-01
GRCop is a high temperature, high thermal conductivity copper-based series of alloys designed primarily for use in regeneratively cooled rocket engine liners. It began with laboratory-level production of a few grams of ribbon produced by chill block melt spinning and has grown to commercial-scale production of large-scale rocket engine liners. Along the way, a variety of methods of consolidating and working the alloy were examined, a database of properties was developed and a variety of commercial and government applications were considered. This talk will briefly address the basic material properties used for selection of compositions to scale up, the methods used to go from simple ribbon to rocket engines, the need to develop a suitable database, and the issues related to getting the alloy into a rocket engine or other application.
BioMart Central Portal: an open database network for the biological community
Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507
Database resources for the Tuberculosis community
Lew, Jocelyne M.; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D.; Gordon, Stephen V.; Schnappinger, Dirk; Cole, Stewart T.; Sobral, Bruno
2013-01-01
Summary Access to online repositories for genomic and associated “-omics” datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th–8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. PMID:23332401
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masahi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
NASA Astrophysics Data System (ADS)
Delvoie, S.; Radu, J.-P.; Ruthy, I.; Charlier, R.
2012-04-01
An engineering geological map can be defined as a geological map with a generalized representation of all the components of a geological environment that are required for spatial planning, design, construction, and maintenance of civil engineering works. In Wallonia (Belgium), 24 engineering geological maps were developed between the 1970s and the 1990s at 1/5,000 or 1/10,000 scale, covering areas of the most industrialized and urbanized cities (Liège, Charleroi and Mons). They were based on soil and subsoil point data (borings, drillings, penetration tests, geophysical tests, outcrops…). Some displayed data present the depth (with isoheights) or the thickness (with isopachs) of the different subsoil layers down to about 50 m depth. Information about the geomechanical properties of each subsoil layer, useful for engineers and urban planners, is also synthesized. However, these maps existed only on paper and progressively needed to be updated with new soil and subsoil data. The Public Service of Wallonia and the University of Liège have recently initiated a study to evaluate the feasibility of developing engineering geological mapping with a computerized approach. Numerous and varied data (about soil and subsoil) are stored in a georelational database (the geotechnical database, using Access, Microsoft®). All the data are geographically referenced. The database is linked to a GIS project (using ArcGIS, ESRI®). Together, the database and the GIS project constitute a powerful tool for spatial data management and analysis. This approach involves a methodology using interpolation methods to update the previous maps and to extend the coverage to new areas. The location (x, y, z) of each subsoil layer is then computed from the point data. The geomechanical data of these layers are synthesized in an explanatory booklet joined to the maps.
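The interpolation step mentioned in this abstract, estimating a subsoil layer surface between boreholes, can be illustrated with inverse distance weighting, one common choice for scattered point data. The abstract does not name the method actually used, so this is a generic sketch; the borehole coordinates and depths below are invented.

```python
# Inverse-distance-weighting (IDW) sketch: estimate the depth of a subsoil
# layer at an unsampled location from nearby borehole observations.
# The method choice and all data are illustrative assumptions.
import math

# (x, y, depth of layer top in m) from hypothetical boreholes
POINTS = [(0.0, 0.0, 10.0), (100.0, 0.0, 12.0), (0.0, 100.0, 14.0)]

def idw_depth(x, y, points=POINTS, power=2.0):
    """Inverse-distance-weighted estimate of layer depth at (x, y)."""
    num = den = 0.0
    for px, py, d in points:
        dist = math.hypot(x - px, y - py)
        if dist == 0.0:
            return d  # query point coincides with a borehole
        w = 1.0 / dist ** power
        num += w * d
        den += w
    return num / den

print(round(idw_depth(50.0, 50.0), 2))  # equidistant from all three -> 12.0
```

Evaluating such an estimator over a grid is what turns scattered geotechnical point data into the isoheight and isopach surfaces the maps display.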
Special Section: The USMARC Community Information Format.
ERIC Educational Resources Information Center
Lutz, Marilyn; And Others
1992-01-01
Five papers discuss topics related to the USMARC Community Information Format (CIF), including using CIF to create a public service resource network; development of a CIF-based database of materials relating to multicultural and differently-abled populations; background on CIF; development of an information and referral database; and CIF and…
The CHOICES Project: Piloting a Secondary Transition Planning Database
ERIC Educational Resources Information Center
Campbell, Dennis; Baxter, Abigail; Ellis, David; Pardue, Harold
2013-01-01
The CHOICES Project funded by the Institute of Education Sciences (IES), U.S. Department of Education, addresses the need for ready access to information for parents, students, school, and community agency personnel regarding transitional and community support programs. At this time we have created two databases (student information and community…
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
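The fan-out-and-merge pattern at the heart of this patent abstract, one user query dispatched to several heterogeneous backends with results combined into a single answer, can be sketched simply. The backends below are plain dictionaries standing in for real database adapters; names and data are invented.

```python
# Hedged sketch of single-query federated search: one query term is run
# against every registered backend and the tagged hits are merged.
# Backend names and contents are hypothetical.

BACKENDS = {
    "documents": {"apollo": "mission report", "viking": "lander log"},
    "telemetry": {"apollo": "guidance data"},
}

def federated_search(term):
    """Run one query against every backend and merge tagged results."""
    results = []
    for name, store in BACKENDS.items():
        for key, value in store.items():
            if term in key:
                results.append((name, key, value))
    return sorted(results)

print(federated_search("apollo"))
```

A production system would add per-backend query translation (the "impedance mismatch" the abstract mentions), but the dispatch-and-merge skeleton is the same.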
The eNanoMapper database for nanomaterial safety information
Chomenidis, Charalampos; Doganis, Philip; Fadeel, Bengt; Grafström, Roland; Hardy, Barry; Hastings, Janna; Hegi, Markus; Jeliazkov, Vedrin; Kochev, Nikolay; Kohonen, Pekka; Munteanu, Cristian R; Sarimveis, Haralambos; Smeets, Bart; Sopasakis, Pantelis; Tsiliki, Georgia; Vorgrimmler, David; Willighagen, Egon
2015-01-01
Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs. Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms.
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR). PMID:26425413
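The configurable spreadsheet parser mentioned above can be reduced to a minimal sketch: a mapping from template column headers to database fields drives the conversion of rows into records. The column names, field names, and sample row below are invented for illustration, not the actual eNanoMapper template.

```python
# Hypothetical column-to-field mapping; a real parser configuration
# (units, nested endpoints, protocol metadata) would be much richer.
CONFIG = {
    "Material name": "name",
    "Core chemistry": "core",
    "Assay endpoint": "endpoint",
    "Value": "value",
}

def parse_rows(header, rows, config):
    """Map spreadsheet columns onto database fields using a config dict."""
    # Locate each configured column in the header once.
    index = {col: header.index(col) for col in config if col in header}
    records = []
    for row in rows:
        records.append(
            {field: row[index[col]] for col, field in config.items() if col in index}
        )
    return records

header = ["Material name", "Core chemistry", "Assay endpoint", "Value"]
rows = [["NM-101", "TiO2", "cell viability", "87"]]
print(parse_rows(header, rows, CONFIG))
```

Because the mapping lives in data rather than code, supporting a new spreadsheet template only requires a new configuration, which is the design property the abstract emphasizes.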
Database Access Manager for the Software Engineering Laboratory (DAMSEL) user's guide
NASA Technical Reports Server (NTRS)
1990-01-01
Operating instructions for the Database Access Manager for the Software Engineering Laboratory (DAMSEL) system are presented. Step-by-step instructions for performing various data entry and report generation activities are included. Sample sessions showing the user interface display screens are also included. Instructions for generating reports are accompanied by sample outputs for each of the reports. The document groups the available software functions by the classes of users that may access them.
ERIC Educational Resources Information Center
Quantum: Research & Scholarship, 1998
1998-01-01
Profiles 10 University of New Mexico community programs: University Art Museum, Rio Grande and Four Corners Writing Projects, Blacks in the Southwest (exhibit), New Mexico Engineering Research Institute's Environmental Finance Center, Adolescent Social Action Program, Minority Engineering Programs, Rural Community College Initiative, Valencia…
ERIC Educational Resources Information Center
Currents, 2002
2002-01-01
Offers a descriptive table of databases that help higher education institutions orchestrate advancement operations. Information includes vendor, contact, software, price, database engine/server platform, recommended reporting tools, record capacity, and client type. (EV)
ERIC Educational Resources Information Center
Friedel, Janice
2010-01-01
One of the most remarkable developments in American education in the past half century has been the creation and rapid growth of the nation's community colleges. Built on the curricular pillars of vocational education, transfer programs, and community education, community colleges today are considered the "engines of statewide economic…
Mississippi State University Center for Air Sea Technology FY95 Research Program
NASA Technical Reports Server (NTRS)
Yeske, Lanny; Corbin, James H.
1995-01-01
The Mississippi State University (MSU) Center for Air Sea Technology (CAST) evolved from the Institute for Naval Oceanography's (INO) Experimental Center for Mesoscale Ocean Prediction (ECMOP), which was started in 1989. MSU CAST subsequently began operation on 1 October 1992 under an Office of Naval Research (ONR) two-year grant which ended on 30 September 1994. In FY95 MSU CAST was successful in obtaining five additional research grants from ONR, as well as several other research contracts from the Naval Oceanographic Office via NASA, the Naval Research Laboratory, the Army Corps of Engineers, and private industry. In the past, MSU CAST technical research and development has produced tools, systems, techniques, and procedures that improve efficiency and overcome deficiencies for both the operational and research communities within the Department of Defense, private industry, and the university ocean modeling community. We continued this effort with the following thrust areas: to develop advanced methodologies and tools for model evaluation, validation, and visualization, both oceanographic and atmospheric; to develop a system-level capability for conducting temporally and spatially scaled ocean simulations that are driven by, or responsive to, ocean models, taking into consideration coupling to atmospheric models; to continue the existing oceanographic/atmospheric data management task with emphasis on distributed databases in a network environment, with database optimization and standardization, including use of Mosaic and World Wide Web (WWW) access; and to implement high performance parallel computing technology for CAST ocean models.
SPIKE – a database, visualization and analysis tool of cellular signaling pathways
Elkon, Ran; Vesterman, Rita; Amit, Nira; Ulitsky, Igor; Zohar, Idan; Weisz, Mali; Mass, Gilad; Orlev, Nir; Sternberg, Giora; Blekhman, Ran; Assa, Jackie; Shiloh, Yosef; Shamir, Ron
2008-01-01
Background Biological signaling pathways that govern cellular physiology form an intricate web of tightly regulated interlocking processes. Data on these regulatory networks are accumulating at an unprecedented pace. The assimilation, visualization and interpretation of these data have become a major challenge in biological research, and once met, will greatly boost our ability to understand cell functioning on a systems level. Results To cope with this challenge, we are developing the SPIKE knowledge-base of signaling pathways. SPIKE contains three main software components: 1) A database (DB) of biological signaling pathways. Carefully curated information from the literature and data from large public sources constitute distinct tiers of the DB. 2) A visualization package that allows interactive graphic representations of regulatory interactions stored in the DB and superposition of functional genomic and proteomic data on the maps. 3) An algorithmic inference engine that analyzes the networks for novel functional interplays between network components. SPIKE is designed and implemented as a community tool and therefore provides a user-friendly interface that allows registered users to upload data to SPIKE DB. Our vision is that the DB will be populated by a distributed and highly collaborative effort undertaken by multiple groups in the research community, where each group contributes data in its field of expertise. Conclusion The integrated capabilities of SPIKE make it a powerful platform for the analysis of signaling networks and the integration of knowledge on such networks with omics data. PMID:18289391
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
Subject Specific Databases: A Powerful Research Tool
ERIC Educational Resources Information Center
Young, Terrence E., Jr.
2004-01-01
Subject specific databases, or vortals (vertical portals), are databases that provide highly detailed research information on a particular topic. They are the smallest, most focused search tools on the Internet and, in recent years, they've been on the rise. Currently, more of the so-called "mainstream" search engines, subject directories, and…
Rrsm: The European Rapid Raw Strong-Motion Database
NASA Astrophysics Data System (ADS)
Cauzzi, C.; Clinton, J. F.; Sleeman, R.; Domingo Ballesta, J.; Kaestli, P.; Galanis, O.
2014-12-01
We introduce the European Rapid Raw Strong-Motion database (RRSM), a Europe-wide system that provides parameterised strong motion information, as well as access to waveform data, within minutes of the occurrence of strong earthquakes. The RRSM significantly differs from traditional earthquake strong motion dissemination in Europe, which has focused on providing reviewed, processed strong motion parameters, typically with significant delays. As the RRSM provides rapid open access to raw waveform data and metadata and does not rely on external manual waveform processing, RRSM information is tailored to seismologists and strong-motion data analysts, earthquake and geotechnical engineers, international earthquake response agencies and the educated general public. Access to the RRSM database is via a portal at http://www.orfeus-eu.org/rrsm/ that allows users to query earthquake information, peak ground motion parameters and amplitudes of spectral response; and to select and download earthquake waveforms. All information is available within minutes of any earthquake with magnitude ≥ 3.5 occurring in the Euro-Mediterranean region. Waveform processing and database population are performed using the waveform processing module scwfparam, which is integrated in SeisComP3 (SC3; http://www.seiscomp3.org/). Earthquake information is provided by the EMSC (http://www.emsc-csem.org/) and all the seismic waveform data is accessed at the European Integrated waveform Data Archive (EIDA) at ORFEUS (http://www.orfeus-eu.org/index.html), where all on-scale data is used in the fully automated processing. As the EIDA community continues to grow, the already significant number of strong-motion stations is increasing, and the importance of this product is expected to grow accordingly. Real-time RRSM processing started in June 2014, and past events have been processed to provide a complete database back to 2005.
An environmental database for Venice and tidal zones
NASA Astrophysics Data System (ADS)
Macaluso, L.; Fant, S.; Marani, A.; Scalvini, G.; Zane, O.
2003-04-01
The natural environment is a complex, highly variable, and physically non-reproducible system (neither in the laboratory nor in a confined territory). Environmental experimental studies are thus necessarily based on field measurements distributed in time and space. Only extensive data collections can provide the representative samples of system behavior that are essential for scientific advancement. The assimilation of large data collections into accessible archives must necessarily be implemented in electronic databases. In the case of tidal environments in general, and of the Venice lagoon in particular, it is useful to establish a database, freely accessible to the scientific community, documenting the dynamics of such systems and their response to anthropic pressures and climatic variability. At the Istituto Veneto di Scienze, Lettere ed Arti in Venice (Italy), two internet environmental databases have been developed: one collects detailed information regarding the Venice lagoon; the other coordinates the research consortium of the "TIDE" EU RTD project, which covers three different tidal areas: Venice Lagoon (Italy), Morecambe Bay (England), and Forth Estuary (Scotland). The archives may be accessed through the URL: www.istitutoveneto.it. The first is freely available to anyone who is interested. It is continuously updated and has been structured to promote documentation concerning the Venetian environment and to disseminate this information for educational purposes (see "Dissemination" section). The second is supplied by scientists and engineers working on this tidal system for various purposes (scientific, management, conservation, etc.); it is intended for interested researchers and grows with their own contributions.
Both are intended to promote scientific communication, to contribute to the realization of a distributed information system collecting homogeneous themes, and to initiate interconnection among databases covering different kinds of environments.
Usage of the Jess Engine, Rules and Ontology to Query a Relational Database
NASA Astrophysics Data System (ADS)
Bak, Jaroslaw; Jedrzejek, Czeslaw; Falkowski, Maciej
We present a prototypical implementation of a library tool, the Semantic Data Library (SDL), which integrates the Jess (Java Expert System Shell) engine, rules, and an ontology to query a relational database. The tool extends the functionality of the earlier OWL2Jess with SWRL implementations and takes full advantage of the Jess engine by separating forward and backward reasoning. The optimization of the integration of all these technologies is an advancement over previous tools. We discuss the complexity of the query algorithm. As a demonstration of the capabilities of the SDL library, we execute queries using a crime ontology that is being developed in the Polish PPBW project.
Narayanan, Shrikanth; Toutios, Asterios; Ramanarayanan, Vikram; Lammert, Adam; Kim, Jangwon; Lee, Sungbok; Nayak, Krishna; Kim, Yoon-Chul; Zhu, Yinghua; Goldstein, Louis; Byrd, Dani; Bresch, Erik; Ghosh, Prasanta; Katsamanis, Athanasios; Proctor, Michael
2014-01-01
USC-TIMIT is an extensive database of multimodal speech production data, developed to complement existing resources available to the speech research community and with the intention of being continuously refined and augmented. The database currently includes real-time magnetic resonance imaging data from five male and five female speakers of American English. Electromagnetic articulography data have also been collected from four of these speakers. The two modalities were recorded in two independent sessions while the subjects produced the same 460 sentence corpus used previously in the MOCHA-TIMIT database. In both cases the audio signal was recorded and synchronized with the articulatory data. The database and companion software are freely available to the research community. PMID:25190403
Flood trends and river engineering on the Mississippi River system
Pinter, N.; Jemberie, A.A.; Remo, J.W.F.; Heine, R.A.; Ickes, B.S.
2008-01-01
Along >4000 km of the Mississippi River system, we document that climate, land-use change, and river engineering have contributed to statistically significant increases in flooding over the past 100-150 years. Trends were tested using a database of >8 million hydrological measurements. A geospatial database of historical engineering construction was used to quantify the response of flood levels to each unit of engineering infrastructure. Significant climate- and/or land use-driven increases in flow were detected, but the largest and most pervasive contributors to increased flooding on the Mississippi River system were wing dikes and related navigational structures, followed by progressive levee construction. In the area of the 2008 Upper Mississippi flood, for example, about 2 m of the flood crest is linked to navigational and flood-control engineering. Systemwide, large increases in flood levels were documented at locations and at times of wing-dike and levee construction. Copyright 2008 by the American Geophysical Union.
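A heavily simplified illustration of the core idea above, relating flood levels to units of engineering infrastructure, is an ordinary least-squares slope of flood stage on cumulative structure length. The numbers below are invented, and the study's actual statistics on >8 million measurements are far more elaborate.

```python
def ols_slope(x, y):
    """Slope of the ordinary least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical data: cumulative wing-dike length (km) and flood stage (m)
# at a fixed discharge; a positive slope would indicate higher flood
# levels as engineering infrastructure accumulates.
dike_km = [0, 5, 10, 20, 40]
stage_m = [6.0, 6.2, 6.5, 7.1, 8.1]
print(ols_slope(dike_km, stage_m))
```

A real analysis would also test the slope's significance and control for climate- and land-use-driven changes in flow, as the paper does.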
Database resources for the tuberculosis community.
Lew, Jocelyne M; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D; Gordon, Stephen V; Schnappinger, Dirk; Cole, Stewart T; Sobral, Bruno
2013-01-01
Access to online repositories for genomic and associated "-omics" datasets is now an essential part of everyday research activity. It is important, therefore, that the Tuberculosis community is aware of the databases and tools available to it online, and that the database hosts know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th-8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
[The database server for the medical bibliography database at Charles University].
Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M
1998-01-01
In the medical community, bibliographic databases are widely accepted as a most important source of information for both theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by SilverPlatter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible via the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and web-based access that requires no specialized software (except a WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed by consortia whose individual members would not subscribe to them by themselves.
Update on terrestrial ecological classification in the highlands of West Virginia
James P. Vanderhorst
2010-01-01
The West Virginia Natural Heritage Program (WVNHP) maintains databases on the biological diversity of the state, including species and natural communities, to help focus conservation efforts by agencies and organizations. Information on terrestrial communities (also called vegetation, or habitat, depending on user or audience focus) is maintained in two databases. The...
NASA Technical Reports Server (NTRS)
vonOfenheim, William H. C.; Heimerl, N. Lynn; Binkley, Robert L.; Curry, Marty A.; Slater, Richard T.; Nolan, Gerald J.; Griswold, T. Britt; Kovach, Robert D.; Corbin, Barney H.; Hewitt, Raymond W.
1998-01-01
This paper discusses the technical aspects of and the project background for the NASA Image eXchange (NIX). NIX, which provides a single entry point to search selected image databases at the NASA Centers, is a meta-search engine (i.e., a search engine that communicates with other search engines). It uses these distributed digital image databases to access photographs, animations, and their associated descriptive information (meta-data). NIX is available for use at the following URL: http://nix.nasa.gov/. NIX, which was sponsored by NASA's Scientific and Technical Information (STI) Program, currently serves images from seven NASA Centers. Plans are under way to link image databases from three additional NASA Centers. Images and their associated meta-data, which are accessible by NIX, reside at the originating Centers, and NIX utilizes a virtual central site that communicates with each of these sites. Incorporated into the virtual central site are several protocols to support searches from a diverse collection of database engines. The searches are performed in parallel to ensure optimization of response times. To augment the search capability, browse functionality with pre-defined categories has been built into NIX, thereby ensuring dissemination of 'best-of-breed' imagery. As a final recourse, NIX offers access to a help desk via an on-line form to help locate images and information either within the scope of NIX or from available external sources.
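The parallel fan-out that NIX performs, dispatching one query to several per-Center engines through a virtual central site, can be sketched as follows. The Center names, toy catalogs, and response shape are invented stand-ins, not the actual NIX protocols or endpoints.

```python
import concurrent.futures

# Toy stand-ins for the per-Center search protocols; each engine takes a
# term and returns normalized hit records.
def make_engine(center, catalog):
    def search(term):
        return [{"center": center, "title": t} for t in catalog if term in t.lower()]
    return search

ENGINES = [
    make_engine("Dryden", ["F-18 exhaust study", "Mach 0.9 climb"]),
    make_engine("Langley", ["F-18 acoustics", "wind tunnel model"]),
]

def meta_search(term):
    """Query every Center's engine in parallel and merge the hits,
    mirroring the fan-out through a virtual central site."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = pool.map(lambda engine: engine(term), ENGINES)
    return [hit for hits in results for hit in hits]

print(meta_search("f-18"))
```

Running the per-engine searches concurrently means total latency tracks the slowest backend rather than the sum of all backends, which is the response-time optimization the abstract mentions.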
The composite load spectra project
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H.; Kurth, R. E.
1990-01-01
Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.
Flight-determined engine exhaust characteristics of an F404 engine in an F-18 airplane
NASA Technical Reports Server (NTRS)
Ennix, Kimberly A.; Burcham, Frank W., Jr.; Webb, Lannie D.
1993-01-01
Personnel at the NASA Langley Research Center (NASA-Langley) and the NASA Dryden Flight Research Facility (NASA-Dryden) recently completed a joint acoustic flight test program. Several types of aircraft with high nozzle pressure ratio engines were flown to satisfy a twofold objective. First, assessments were made of subsonic climb-to-cruise noise from flights conducted at varying altitudes in a Mach 0.30 to 0.90 range. Second, using data from flights conducted at constant altitude in a Mach 0.30 to 0.95 range, engineers obtained a high quality noise database. This database was desired to validate the Aircraft Noise Prediction Program and other system noise prediction codes. NASA-Dryden personnel analyzed the engine data from several aircraft that were flown in the test program to determine the exhaust characteristics. The analysis of the exhaust characteristics from the F-18 aircraft are reported. An overview of the flight test planning, instrumentation, test procedures, data analysis, engine modeling codes, and results are presented.
Educating the humanitarian engineer.
Passino, Kevin M
2009-12-01
The creation of new technologies that serve humanity holds the potential to help end global poverty. Unfortunately, relatively little is done in engineering education to support engineers' humanitarian efforts. Here, various strategies are introduced to augment the teaching of engineering ethics with the goal of encouraging engineers to serve as effective volunteers for community service. First, codes of ethics, moral frameworks, and comparative analysis of professional service standards lay the foundation for expectations for voluntary service in the engineering profession. Second, standard coverage of global issues in engineering ethics educates humanitarian engineers about aspects of the community that influence technical design constraints encountered in practice. Sample assignments on volunteerism are provided, including a prototypical design problem that integrates community constraints into a technical design problem in a novel way. Third, it is shown how extracurricular engineering organizations can provide a theory-practice approach to education in volunteerism. Sample completed projects are described for both undergraduates and graduate students. The student organization approach is contrasted with the service-learning approach. Finally, long-term goals for establishing better infrastructure are identified for educating the humanitarian engineer in the university, and supporting life-long activities of humanitarian engineers.
Warren, Lesley A.; Kendra, Kathryn E.
2015-01-01
Microbial communities in engineered terrestrial haloalkaline environments have been poorly characterized relative to their natural counterparts and are geologically recent in formation, offering opportunities to explore microbial diversity and assembly in dynamic, geochemically comparable contexts. In this study, the microbial community structure and geochemical characteristics of three geographically dispersed bauxite residue environments along a remediation gradient were assessed and subsequently compared with other engineered and natural haloalkaline systems. In bauxite residues, bacterial communities were similar at the phylum level (dominated by Proteobacteria and Firmicutes) to those found in soda lakes, oil sands tailings, and nuclear wastes; however, they differed at lower taxonomic levels, with only 23% of operational taxonomic units (OTUs) shared with other haloalkaline environments. Although being less diverse than natural analogues, bauxite residue harbored substantial novel bacterial taxa, with 90% of OTUs nonmatchable to cultured representative sequences. Fungal communities were dominated by Ascomycota and Basidiomycota, consistent with previous studies of hypersaline environments, and also harbored substantial novel (73% of OTUs) taxa. In bauxite residues, community structure was clearly linked to geochemical and physical environmental parameters, with 84% of variation in bacterial and 73% of variation in fungal community structures explained by environmental parameters. The major driver of bacterial community structure (salinity) was consistent across natural and engineered environments; however, drivers differed for fungal community structure between natural (pH) and engineered (total alkalinity) environments. This study demonstrates that both engineered and natural terrestrial haloalkaline environments host substantial repositories of microbial diversity, which are strongly shaped by geochemical drivers. PMID:25979895
"Mr. Database" : Jim Gray and the History of Database Technologies.
Hanwahr, Nils C
2017-12-01
Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.
Classroom Laboratory Report: Using an Image Database System in Engineering Education.
ERIC Educational Resources Information Center
Alam, Javed; And Others
1991-01-01
Describes an image database system assembled using separate computer components that was developed to overcome text-only computer hardware storage and retrieval limitations for a pavement design class. (JJK)
Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows
NASA Astrophysics Data System (ADS)
Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.
2016-09-01
A number of proteomic database search engines implement multi-stage strategies aimed at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
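The TDA assumption discussed above can be illustrated with a minimal sketch of standard target-decoy FDR estimation (the baseline the paper builds on, not its decoy-fusion variant); the scores and decoy flags below are invented:

```python
# Standard target-decoy FDR estimation sketch: each peptide-spectrum match
# (PSM) is (score, is_decoy); the FDR at a score threshold is estimated as
# #decoy / #target among matches at or above it. This estimate is unbiased
# only while false matches hit targets and decoys with equal probability,
# the assumption that multi-stage searches can violate.

def tda_fdr(psms, threshold):
    decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
    targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
    return decoys / targets if targets else 0.0

psms = [(50, False), (45, False), (40, True), (38, False), (35, True)]
print(tda_fdr(psms, 38))  # 1 decoy / 3 targets ≈ 0.333
```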
Jones, Andrew R; Siepen, Jennifer A; Hubbard, Simon J; Paton, Norman W
2009-03-01
LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could result in an increased number of peptides (and proteins) being identified, if an appropriate mechanism for combining data can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR is significantly different when analysing the set of identifications made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that have made the identification, and re-assigns the score (combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine.
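The grouping step described above, assigning identifications to groups by the set of engines that made them, can be sketched as follows. The engine names, spectra, and the crude within-group decoy-fraction estimate are illustrative simplifications, not the paper's exact combined FDR Score computation:

```python
from collections import defaultdict

# Group peptide-spectrum matches by which search engines identified them,
# then estimate an error rate per group (here simply the decoy fraction).

def group_by_engines(identifications):
    """identifications: list of (spectrum_id, engine, is_decoy)."""
    engines_per_spectrum = defaultdict(set)
    decoy_flag = {}
    for spectrum, engine, is_decoy in identifications:
        engines_per_spectrum[spectrum].add(engine)
        decoy_flag[spectrum] = is_decoy
    groups = defaultdict(list)
    for spectrum, engines in engines_per_spectrum.items():
        groups[frozenset(engines)].append(decoy_flag[spectrum])
    return groups

def error_rate_per_group(groups):
    return {tuple(sorted(k)): sum(v) / len(v) for k, v in groups.items()}

ids = [
    ("s1", "Mascot", False), ("s1", "X!Tandem", False),
    ("s2", "Mascot", False),
    ("s3", "X!Tandem", True),
]
groups = group_by_engines(ids)
print(error_rate_per_group(groups))
```

Matches found by multiple engines typically land in a group with a much lower decoy fraction, which is the intuition behind re-scoring by group.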
From Scarcity to Visibility: Gender Differences in the Careers of Doctoral Scientists and Engineers.
ERIC Educational Resources Information Center
Long, J. Scott, Ed.
This study documents the changes that have occurred in the representation of women in science and engineering and the characteristics of women scientists and engineers. Data from two National Science Foundation databases, the Survey of Earned Doctorates for New Ph.D.s and the Survey of Doctoral Recipients for the science & engineering doctoral…
The impact of a living learning community on first-year engineering students
NASA Astrophysics Data System (ADS)
Flynn, Margaret A.; Everett, Jess W.; Whittinghill, Dex
2016-05-01
The purpose of this study was to investigate the impact of an engineering living and learning community (ELC) on first-year engineering students. A control group of non-ELC students was used to compare the experiences of the ELC participants. Analysis of survey data showed that there were significant differences between the ELC students and the non-ELC students in how they responded to questions regarding social support, academic support, connectedness to campus, and satisfaction with the College of Engineering and the institution as a whole. In particular, there were significant differences between ELC and non-ELC students for questions related to feeling like part of an engineering community, having strong relationships with peers, belonging to a supportive peer network, studying with engineering peers, and spending time with classmates outside of class.
Engineering chemical interactions in microbial communities.
Kenny, Douglas J; Balskus, Emily P
2018-03-05
Microbes living within host-associated microbial communities (microbiotas) rely on chemical communication to interact with surrounding organisms. These interactions serve many purposes, from supplying the multicellular host with nutrients to antagonizing invading pathogens, and breakdown of chemical signaling has potentially negative consequences for both the host and microbiota. Efforts to engineer microbes to take part in chemical interactions represent a promising strategy for modulating chemical signaling within these complex communities. In this review, we discuss prominent examples of chemical interactions found within host-associated microbial communities, with an emphasis on the plant-root microbiota and the intestinal microbiota of animals. We then highlight how an understanding of such interactions has guided efforts to engineer microbes to participate in chemical signaling in these habitats. We discuss engineering efforts in the context of chemical interactions that enable host colonization, promote host health, and exclude pathogens. Finally, we describe prominent challenges facing this field and propose new directions for future engineering efforts.
Community air monitoring and the Village Green Project ...
Cost and logistics are practical issues that have historically constrained the number of locations where long-term, active air pollution measurement is possible. In addition, traditional air monitoring approaches are generally conducted by technical experts with limited engagement with community members. EPA’s Village Green Project (VGP) is a prototype technology designed to add value to a community environment: a park bench equipped with air and meteorological instruments that measure ozone, fine particles, wind, temperature, and humidity at a one-minute time resolution, with an open-source Arduino microprocessor operating as the system controller. The data are streamed wirelessly to a database, passed through automatic diagnostic quality checks, and then made publicly available on an engaging website. The station was designed to minimize power use; it consumes an estimated 15 W and operates entirely on solar power, is engineered to run for several days with minimal solar radiation, and is capable of automatically shutting down components of the system to conserve power and restarting when power availability increases. Situated outside a public library in Durham, North Carolina, the VGP has also been a gathering location for air quality experts to engage with community members. Between June 2013 and January 2014, the station collected about 3500 hours of ozone and PM2.5 data, with over 90% up-time operating only on solar power.
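The automatic diagnostic quality checks mentioned above can be sketched as a simple range screen on each one-minute record; the field names and plausible ranges below are illustrative, not the VGP's actual limits:

```python
# Flag one-minute sensor readings that fall outside plausible ranges before
# they are published. Ranges are invented for illustration.

PLAUSIBLE = {"ozone_ppb": (0, 500), "pm25_ugm3": (0, 1000), "temp_c": (-40, 60)}

def qc_flag(record):
    """Return the list of fields failing the range check for one record."""
    failures = []
    for field, (lo, hi) in PLAUSIBLE.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            failures.append(field)
    return failures

reading = {"ozone_ppb": 42.0, "pm25_ugm3": 8.5, "temp_c": 95.0}
print(qc_flag(reading))  # ['temp_c']
```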
Cherrington, Andrea; Ayala, Guadalupe X; Amick, Halle; Allison, Jeroan; Corbie-Smith, Giselle; Scarinci, Isabel
2008-01-01
The purpose of this qualitative study was to examine methods of implementation of the community health worker (CHW) model within diabetes programs, as well as related challenges and lessons learned. Semi-structured interviews were conducted with program managers. Four databases (PubMed, CINAHL, ISI Web of Knowledge, PsycInfo), the CDC's 1998 directory of CHW programs, and Google Search Engine were used to identify CHW programs. Criteria for inclusion were: DM program; used CHW strategy; occurred in United States. Two independent reviewers performed content analyses to identify major themes and findings. Sixteen programs were assessed, all but 3 focused on minority populations. Most CHWs were recruited informally; 6 programs required CHWs to have diabetes. CHW roles and responsibilities varied across programs; educator was the most commonly identified role. Training also varied in terms of both content and intensity. All programs gave CHWs remuneration for their work. Common challenges included difficulties with CHW retention, intervention fidelity and issues related to sustainability. Cultural and gender issues also emerged. Examples of lessons learned included the need for community buy-in and the need to anticipate nondiabetes related issues. Lessons learned from these programs may be useful to others as they apply the CHW model to diabetes management within their own communities. Further research is needed to elucidate the specific features of this model necessary to positively impact health outcomes.
Generation of an Aerothermal Data Base for the X33 Spacecraft
NASA Technical Reports Server (NTRS)
Roberts, Cathy; Huynh, Loc
1998-01-01
The X-33 experimental program is a cooperative program between industry and NASA, managed by Lockheed-Martin Skunk Works to develop an experimental vehicle to demonstrate new technologies for a single-stage-to-orbit, fully reusable launch vehicle (RLV). One of the new technologies to be demonstrated is an advanced Thermal Protection System (TPS) being designed by BF Goodrich (formerly Rohr, Inc.) with support from NASA. The calculation of an aerothermal database is crucial to identifying the critical design environment data for the TPS. The NASA Ames X-33 team has generated such a database using Computational Fluid Dynamics (CFD) analyses, engineering analysis methods and various programs to compare and interpolate the results from the CFD and the engineering analyses. This database, along with a program used to query the database, is used extensively by several X-33 team members to help them in designing the X-33. This paper will describe the methods used to generate this database, the program used to query the database, and will show some of the aerothermal analysis results for the X-33 aircraft.
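The paper describes programs that compare and interpolate CFD and engineering-analysis results to query the database. A minimal sketch of such a table lookup, bilinear interpolation of surface heating on a (Mach, angle-of-attack) grid with invented values (not X-33 data), could look like this:

```python
# Bilinear interpolation over a small tabulated aerothermal database.
# db[i][j] holds the value at (machs[i], alphas[j]); the query point must
# lie inside the grid. Heating values below are illustrative.

def bilinear(db, machs, alphas, mach, alpha):
    i = max(k for k, m in enumerate(machs[:-1]) if m <= mach)
    j = max(k for k, a in enumerate(alphas[:-1]) if a <= alpha)
    tm = (mach - machs[i]) / (machs[i + 1] - machs[i])
    ta = (alpha - alphas[j]) / (alphas[j + 1] - alphas[j])
    return ((1 - tm) * (1 - ta) * db[i][j] + tm * (1 - ta) * db[i + 1][j] +
            (1 - tm) * ta * db[i][j + 1] + tm * ta * db[i + 1][j + 1])

machs, alphas = [4.0, 6.0], [0.0, 10.0]
heating = [[100.0, 120.0],   # kW/m^2 at Mach 4, alpha = 0 and 10 deg
           [180.0, 220.0]]   # at Mach 6
print(bilinear(heating, machs, alphas, 5.0, 5.0))  # 155.0
```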
NASA Astrophysics Data System (ADS)
Hadder, Eric Michael
There are many computer aided engineering tools and software used by aerospace engineers to design and predict specific parameters of an airplane. These tools help a design engineer predict and calculate parameters such as lift, drag, pitching moment, takeoff range, maximum takeoff weight, maximum flight range and much more. However, there are very limited ways to predict and calculate the minimum control speeds of an airplane in engine-inoperative flight. There are simple solutions, as well as complicated solutions, yet there is neither a standard technique nor consistency throughout the aerospace industry. To further complicate this subject, airplane designers have the option of using an Automatic Thrust Control System (ATCS), which directly alters the minimum control speeds of an airplane. This work addresses this issue with a tool used to predict and calculate the Minimum Control Speed on the Ground (VMCG) as well as the Minimum Control Airspeed (VMCA) of any existing or design-stage airplane. With simple line art of an airplane, a program called VORLAX is used to generate an aerodynamic database used to calculate the stability derivatives of an airplane. Using another program called Numerical Propulsion System Simulation (NPSS), a propulsion database is generated to use with the aerodynamic database to calculate both VMCG and VMCA. This tool was tested using two airplanes, the Airbus A320 and the Lockheed Martin C130J-30 Super Hercules. The A320 does not use an ATCS, whereas the C130J-30 does. Because the tool correctly reproduced the known values of VMCG and VMCA for both airplanes, it should also be able to predict VMCG and VMCA for an airplane in the preliminary stages of design.
This gives design engineers the ability to incorporate an Automatic Thrust Control System (ATCS) into the design of an airplane while retaining the ability to predict its VMCG and VMCA.
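As a hedged illustration of what such a calculation involves (a classical first-order static yawing-moment balance, not the VORLAX/NPSS tool described above), VMCA can be estimated by equating the yawing moment of the operative engine to the maximum available rudder yawing moment; all numbers below are illustrative, not A320 or C-130J data:

```python
import math

# First-order VMCA estimate: solve T * y_e = 0.5 * rho * V^2 * S * b *
# Cn_dr * dr_max for V, where T is operative-engine thrust, y_e its lateral
# arm, S wing area, b span, Cn_dr the rudder yaw-control derivative, and
# dr_max the maximum rudder deflection. Inputs are illustrative.

def vmca_estimate(thrust, engine_arm, rho, wing_area, span, cn_dr, dr_max):
    """Return the airspeed V (m/s) at which rudder yaw power just balances
    the asymmetric-thrust yawing moment."""
    return math.sqrt(2.0 * thrust * engine_arm /
                     (rho * wing_area * span * cn_dr * dr_max))

v = vmca_estimate(thrust=120e3, engine_arm=5.75, rho=1.225,
                  wing_area=122.6, span=34.1, cn_dr=0.10,
                  dr_max=math.radians(25))
print(round(v, 1))  # roughly 78-79 m/s with these made-up inputs
```

A real tool layers bank angle, sideslip, and ATCS behavior on top of this balance, which is why the thesis builds full aerodynamic and propulsion databases instead.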
Phenotip - a web-based instrument to help diagnosing fetal syndromes antenatally.
Porat, Shay; de Rham, Maud; Giamboni, Davide; Van Mieghem, Tim; Baud, David
2014-12-10
Prenatal ultrasound can often reliably detect fetal anatomic anomalies, particularly in the hands of an experienced ultrasonographer. Given the large number of existing syndromes and the significant overlap in prenatal findings, antenatal differentiation for syndrome diagnosis is difficult. We constructed a hierarchic tree of 1140 sonographic markers and submarkers, organized per organ system. Subsequently, a database of prenatally diagnosable syndromes was built. An internet-based search engine was then designed to query the syndrome database using one or more sonographic markers. Future developments will include a database with magnetic resonance imaging findings as well as further refinements in the search engine to allow prioritization based on incidence of syndromes and markers.
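A marker-based lookup over such a syndrome database can be sketched as ranking syndromes by how many of their associated markers match the observed findings; the syndrome and marker names below are invented, not Phenotip's actual content:

```python
# Rank syndromes by overlap between their marker sets and the observed
# sonographic markers. Database contents are illustrative placeholders.

SYNDROMES = {
    "Syndrome A": {"ventriculomegaly", "polydactyly", "cleft lip"},
    "Syndrome B": {"ventriculomegaly", "short femur"},
    "Syndrome C": {"polydactyly"},
}

def rank_syndromes(observed_markers):
    observed = set(observed_markers)
    scored = [(len(markers & observed), name)
              for name, markers in SYNDROMES.items()]
    return [name for hits, name in sorted(scored, reverse=True) if hits > 0]

print(rank_syndromes({"ventriculomegaly", "polydactyly"}))
```

The prioritization by syndrome and marker incidence mentioned as future work would replace the raw hit count with a weighted score.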
Hanford Environmental Information System (HEIS) Operator's Manual. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreck, R.I.
1991-10-01
The Hanford Environmental Information System (HEIS) is a consolidated set of automated resources that effectively manage the data gathered during environmental monitoring and restoration of the Hanford Site. The HEIS includes an integrated database that provides consistent and current data to all users and promotes sharing of data by the entire user community. This manual describes the facilities available to the operational user who is responsible for data entry, processing, scheduling, reporting, and quality assurance. A companion manual, the HEIS User's Manual, describes the facilities available to the scientist, engineer, or manager who uses the system for environmental monitoring, assessment, and restoration planning; and to the regulator who is responsible for reviewing Hanford Site operations against regulatory requirements and guidelines.
NASA Astrophysics Data System (ADS)
Styron, R. H.; Garcia, J.; Pagani, M.
2017-12-01
A global catalog of active faults is a resource of value to a wide swath of the geoscience, earthquake engineering, and hazards risk communities. Though construction of such a dataset has been attempted now and again through the past few decades, success has been elusive. The Global Earthquake Model (GEM) Foundation has been working on this problem as a fundamental step in its goal of making a global seismic hazard model. Progress on the assembly of the database is rapid, with the concatenation of many national-, orogen-, and continental-scale datasets produced by different research groups throughout the years. However, substantial data gaps exist throughout much of the deforming world, requiring new mapping based on existing publications as well as consideration of seismicity, geodesy and remote sensing data. Thus far, new fault datasets have been created for the Caribbean and Central America, North Africa, and northeastern Asia, with Madagascar, Canada and a few other regions in the queue. The second major task, as formidable as the initial data concatenation, is the 'harmonization' of the data. This entails the removal or recombination of duplicated structures, reconciliation of contrasting interpretations in areas of overlap, and the synthesis of many different types of attributes or metadata into a consistent whole. In a project of this scale, the methods used in the database construction are as critical to project success as the data themselves. After some experimentation, we have settled on an iterative methodology that involves rapid accumulation of data followed by successive episodes of data revision, and a computer-scripted data assembly using GIS file formats that is flexible, reproducible, and able to cope with updates to the constituent datasets. We find that this approach of initially maximizing coverage and then increasing resolution is the most robust to regional data problems and the most amenable to continued updates and refinement.
Combined with the public, open-source nature of this project, GEM is producing a resource that can continue to evolve with the changing knowledge and needs of the community.
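One harmonization step named above, removal of duplicated structures from overlapping regional datasets, can be sketched as follows; the coordinate-rounding heuristic, tolerance, and fault names are illustrative, not GEM's actual procedure:

```python
# Drop duplicate fault traces that appear in more than one source dataset,
# by comparing vertex coordinates rounded to a tolerance. Coordinates and
# names below are invented for illustration.

def dedupe_faults(faults, decimals=2):
    """faults: list of (name, [(lon, lat), ...]); keep the first copy of
    each trace and discard later near-identical ones."""
    seen, kept = set(), []
    for name, vertices in faults:
        key = tuple((round(lon, decimals), round(lat, decimals))
                    for lon, lat in vertices)
        if key not in seen:
            seen.add(key)
            kept.append(name)
    return kept

catalog = [
    ("North Anatolian (dataset A)", [(30.001, 40.002), (31.0, 40.1)]),
    ("North Anatolian (dataset B)", [(30.0, 40.0), (31.002, 40.098)]),
    ("Other fault", [(25.0, 35.0), (25.5, 35.2)]),
]
print(dedupe_faults(catalog))
```

Real harmonization also has to handle traces digitized at different resolutions or split into different segments, which is why it is as formidable as the initial concatenation.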
Squartini, Andrea
2011-07-26
The associations between bacteria and environment underlie their preferential interactions with given physical or chemical conditions. Microbial ecology aims at extracting conserved patterns of occurrence of bacterial taxa in relation to defined habitats and contexts. In the present report the NCBI nucleotide sequence database is used as a dataset to extract information on the distribution of each of the 24 phyla of the bacteria superkingdom and of the Archaea. Over two and a half million records are filtered in their cross-association with each of 48 sets of keywords, defined to cover natural or artificial habitats, interactions with plant, animal or human hosts, and physical-chemical conditions. The results are processed showing: (a) how the different descriptors enrich or deplete the proportions at which the phyla occur in the total database; (b) in which order of abundance the different keywords score for each phylum (preferred habitats or conditions), and to what extent phyla are clustered to few descriptors (specific) or spread across many (cosmopolitan); (c) which keywords identify the communities ranking highest for diversity and evenness. A number of cues emerge from the results, contributing to sharpen the picture of the functional systematic diversity of prokaryotes. Suggestions are given for a future automated service dedicated to refining and updating this kind of analysis via public bioinformatic engines.
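The enrichment computation in point (a) can be sketched as comparing a phylum's share among records matching a keyword with its share in the whole database; the counts below are invented, not the study's numbers:

```python
# Enrichment ratio: proportion of a phylum within a keyword-filtered subset
# divided by its proportion in the entire database. Counts are illustrative.

def enrichment(phylum_hits, keyword_total, phylum_db, db_total):
    """Ratio > 1 means the keyword enriches the phylum; < 1 depletes it."""
    return (phylum_hits / keyword_total) / (phylum_db / db_total)

# e.g. a phylum in 400 of 1000 "soil" records vs 500,000 of 2,500,000 overall
ratio = enrichment(400, 1000, 500_000, 2_500_000)
print(ratio)  # 2.0 -> twofold enriched under the keyword
```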
ArrayBridge: Interweaving declarative array processing with high-performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros
Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
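The version-deduplication idea mentioned above (storing only what changed between array versions) can be sketched in a few lines; this is an illustration of the concept, not ArrayBridge's implementation:

```python
# Chunk-level deduplication between array versions: when saving a new
# version, keep only the chunks that differ from the previous one.

def chunked(array, size):
    return [tuple(array[i:i + size]) for i in range(0, len(array), size)]

def dedup_version(prev_chunks, new_array, size=4):
    """Return {chunk_index: chunk} holding only the chunks that changed."""
    new_chunks = chunked(new_array, size)
    return {i: c for i, c in enumerate(new_chunks)
            if i >= len(prev_chunks) or prev_chunks[i] != c}

v1 = chunked(list(range(16)), 4)
v2_array = list(range(16))
v2_array[5] = 99                      # modify one element in chunk 1
delta = dedup_version(v1, v2_array)
print(sorted(delta))                  # [1] -> only chunk 1 is re-stored
```

Time-travel queries then reconstruct any version by overlaying stored deltas on the base version.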
Searches Conducted for Engineers.
ERIC Educational Resources Information Center
Lorenz, Patricia
This paper reports an industrial information specialist's experience in performing online searches for engineers and surveys the databases used. Engineers seeking assistance fall into three categories: (1) those who recognize the value of online retrieval; (2) referrals by colleagues; and (3) those who do not seek help. As more successful searches…
The Many Faces of a Software Engineer in a Research Community
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marinovici, Maria C.; Kirkham, Harold
2013-10-14
The ability to gather, analyze and make decisions based on real world data is changing nearly every field of human endeavor. These changes are particularly challenging for software engineers working in a scientific community, designing and developing large, complex systems. To avoid the creation of a communications gap (almost a language barrier), the software engineers should possess an 'adaptive' skill. In the science and engineering research community, the software engineers must be responsible for more than creating mechanisms for storing and analyzing data. They must also develop a fundamental scientific and engineering understanding of the data. This paper looks at the many faces that a software engineer should have: developer, domain expert, business analyst, security expert, project manager, tester, user experience professional, etc. Observations made during work on a power-systems scientific software development project are analyzed and extended to describe more generic software development projects.
Conlon, Eddie
2013-12-01
Two issues of particular interest in the Irish context are (1) the motivation for broadening engineering education to include the humanities, and an emphasis on social responsibility and (2) the process by which broadening can take place. Greater community engagement, arising from a socially-driven model of engineering education, is necessary if engineering practice is to move beyond its present captivity by corporate interests.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Moreover, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing a more comfortable retrieval therefore involves the use of a higher programming level such as Java. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using Java objects as the primary storage container, which are then stored by a Java-controlled Oracle8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
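The retrieval idea above, a structured vocabulary enhanced by a semantic network, can be sketched as expanding a query term to its related terms before matching image metadata; the terms, links, and image records below are invented, not the system's actual vocabulary:

```python
# Vocabulary-driven image search: expand the query through a small semantic
# network of related terms, then match against image keyword sets.
# All content is illustrative.

SEMANTIC_NET = {
    "myocardial infarction": {"heart attack", "cardiac infarction"},
    "fracture": {"broken bone"},
}

IMAGES = [
    {"id": "img1", "keywords": {"heart attack", "ecg"}},
    {"id": "img2", "keywords": {"broken bone", "x-ray"}},
]

def search_images(term):
    expanded = {term} | SEMANTIC_NET.get(term, set())
    return [img["id"] for img in IMAGES if img["keywords"] & expanded]

print(search_images("myocardial infarction"))  # ['img1']
```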
Crowson, Matthew G; Schulz, Kristine; Parham, Kourosh; Vambutas, Andrea; Witsell, David; Lee, Walter T; Shin, Jennifer J; Pynnonen, Melissa A; Nguyen-Huynh, Anh; Ryan, Sheila E; Langman, Alan
2016-07-01
(1) Integrate practice-based patient encounters using the Dartmouth Atlas Medicare database to understand practice treatments for Ménière's disease (MD). (2) Describe differences in the practice patterns between academic and community providers for MD. Practice-based research database review. CHEER (Creating Healthcare Excellence through Education and Research) network academic and community providers. MD patient data were identified with ICD-9 and CPT codes. Demographics, unique visits, and procedures per patient were tabulated. The Dartmouth Atlas of Health Care was used to reference regional health care utilization. Statistical analysis included 1-way analyses of variance, bivariate linear regression, and Student's t tests, with significance set at P < .05. A total of 2071 unique patients with MD were identified from 8 academic and 10 community otolaryngology-head and neck surgery provider centers nationally. Average age was 56.5 years; 63.9% were female; and 91.4% self-reported white ethnicity. There was an average of 3.2 visits per patient. Western providers had the highest average visits per patient. Midwest providers had the highest average procedures per patient. Community providers had more visits per site and per patient than did academic providers. Academic providers had significantly more operative procedures per site (P = .0002) when compared with community providers. Health care service areas with higher total Medicare reimbursements per enrollee did not report significantly more operative procedures being performed. This is the first practice-based clinical research database study to describe MD practice patterns. We demonstrate that academic otolaryngology-head and neck surgery providers perform significantly more operative procedures than do community providers for MD, and we validate these data with an independent Medicare spending database. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
Filling the gap in functional trait databases: use of ecological hypotheses to replace missing data.
Taugourdeau, Simon; Villerd, Jean; Plantureux, Sylvain; Huguenin-Elie, Olivier; Amiaud, Bernard
2014-04-01
Functional trait databases are powerful tools in ecology, though most of them contain large amounts of missing values. The goal of this study was to test the effect of imputation methods on the evaluation of trait values at species level and on the subsequent calculation of functional diversity indices at community level using functional trait databases. Two simple imputation methods (average and median), two methods based on ecological hypotheses, and one multiple imputation method were tested using a large plant trait database, together with the influence of the percentage of missing data and differences between functional traits. At community level, the complete-case approach and three functional diversity indices calculated from grassland plant communities were included. At the species level, one of the methods based on ecological hypotheses was more accurate for all traits than imputation with average or median values, but the multiple imputation method was superior for most of the traits. The method based on functional proximity between species was the best method for traits with an unbalanced distribution, while the method based on the existence of relationships between traits was the best for traits with a balanced distribution. The ranking of the grassland communities for their functional diversity indices was not robust with the complete-case approach, even for low percentages of missing data. With the imputation methods based on ecological hypotheses, functional diversity indices could be computed with a maximum of 30% of missing data without affecting the ranking between grassland communities. The multiple imputation method performed well, but not better than single imputation based on ecological hypotheses and adapted to the distribution of the trait values for the functional identity and range of the communities.
Ecological studies using functional trait databases have to deal with missing data, using imputation methods that correspond to their specific needs and make the most of the information available in the databases. Within this framework, this study indicates the possibilities and limits of single imputation methods based on ecological hypotheses and concludes that they could be useful when studying the ranking of communities by their functional diversity indices.
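Two of the strategies contrasted above can be sketched side by side: filling a missing trait with the average of the known values versus predicting it from a correlated trait via simple linear regression (one form of exploiting relationships between traits); the trait values below are invented:

```python
# Compare mean imputation with regression-based imputation of a missing
# trait value. Data are invented for illustration.

def mean_impute(values):
    known = [v for v in values if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in values]

def regression_impute(x, y):
    """Impute missing y from x using least squares on complete pairs."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    slope = (sum((a - mx) * (b - my) for a, b in pairs) /
             sum((a - mx) ** 2 for a, _ in pairs))
    intercept = my - slope * mx
    return [intercept + slope * a if b is None else b for a, b in zip(x, y)]

height = [10.0, 20.0, 30.0, 40.0]      # complete trait
leaf_area = [1.0, 2.0, None, 4.0]      # trait with a gap
print(mean_impute(leaf_area))          # gap filled with ~2.33 (the average)
print(regression_impute(height, leaf_area))  # gap filled with 3.0
```

When the two traits are strongly related, the regression fill recovers the missing value far better than the average, which is the study's core point.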
Women Engineering Transfer Students: The Community College Experience
ERIC Educational Resources Information Center
Patterson, Susan J.
2011-01-01
An interpretative philosophical framework was applied to a case study to document the particular experiences and perspectives of ten women engineering transfer students who once attended a community college and are currently enrolled in one of two university professional engineering programs. This study is important because women still do not earn…
Engineering in Communities: Learning by Doing
ERIC Educational Resources Information Center
Goggins, J.
2012-01-01
Purpose: The purpose of this paper is to focus on a number of initiatives in civil engineering undergraduate programmes at the National University of Ireland, Galway (NUIG) that allow students to complete engineering projects in the community, enabling them to learn by doing. Design/methodology/approach: A formal commitment to civic engagement was…
A WorkFlow Engine Oriented Modeling System for Hydrologic Sciences
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2009-12-01
In recent years the use of workflow engines for carrying out modeling and data analysis tasks has gained increased attention in the science and engineering communities. Tasks like processing raw data coming from sensors and passing these raw data streams to filters for QA/QC procedures may require multiple, complicated steps that need to be repeated over and over again. A workflow sequence that carries out a number of steps of varying complexity is an ideal approach to such tasks because the sequence can be stored, called up and repeated again and again. This has several advantages: for one, it ensures repeatability of processing steps and with that provenance, an issue that is increasingly important in the science and engineering communities. It also permits handing off lengthy, time-consuming and error-prone tasks to a chain of processing actions that are carried out automatically, reducing the chance for error on the one hand and freeing up time for other tasks on the other. This paper presents the development of a workflow-engine-embedded modeling system that allows users to build working sequences for carrying out numerical modeling tasks in the hydrologic sciences. Trident, which facilitates creating, running and sharing scientific data analysis workflows, is taken as the central working engine of the modeling system. Currently existing functionalities of the modeling system include digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. They are stored as sequences or modules, respectively. The sequences can be invoked to carry out their preset tasks in order, for example triangulating a watershed from a raw DEM, whereas the modules, each encapsulating certain functions, can be selected and connected through a GUI workboard to form new sequences.
The modeling system is demonstrated by setting up a new sequence for simulating rainfall-runoff processes, which involves an embedded Penn State Integrated Hydrologic Model (PIHM) module as the hydrologic simulation kernel, a DEM-processing sub-sequence that prepares geospatial data for PIHM, a data retrieval module that accesses time-series data from online repositories via web services or from a local database, and a post-processing data management module that stores, visualizes and analyzes model outputs.
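The sequence-of-modules idea described above can be illustrated with a minimal sketch; the module names and their internal logic are hypothetical stand-ins, not Trident or PIHM code.

```python
# A "sequence" is an ordered, reusable chain of modules; each module is a
# function that takes the working data and returns the transformed data.
def retrieve(data):
    # stand-in for the online data retrieval module
    return data + [4.0]

def qa_qc(data):
    # stand-in QA/QC filter: drop obviously bad sensor readings
    return [x for x in data if x >= 0]

def simulate(data):
    # stand-in for the hydrologic simulation kernel
    return sum(data) / len(data)

def run(sequence, data):
    # invoke the stored sequence: each module runs in order
    for module in sequence:
        data = module(data)
    return data

rainfall_runoff = [retrieve, qa_qc, simulate]  # stored and repeatable
print(run(rainfall_runoff, [1.0, -9.0, 2.0]))
```

Because the sequence object itself is stored, rerunning it on the same inputs reproduces the same result, which is the provenance benefit the abstract emphasizes.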
Ocean Drilling Program: Science Operator Search Engine
NASA Technical Reports Server (NTRS)
Mortlock, Alan; VanAlstyne, Richard
1998-01-01
The report describes the development of databases estimating aircraft engine exhaust emissions for the years 1976 and 1984 from global operations of Military, Charter, historic Soviet and Chinese, Unreported Domestic traffic, and General Aviation (GA). These databases were developed under the National Aeronautics and Space Administration's (NASA) Advanced Subsonic Assessment (AST). McDonnell Douglas Corporation (MDC), now part of the Boeing Company, previously estimated engine exhaust emissions databases for the baseline year of 1992 and a 2015 forecast-year scenario. Since their original creation (Ward, 1994 and Metwally, 1995), revised technology algorithms have been developed. Additionally, GA databases have been created and all past NIDC emission inventories have been updated to reflect the new technology algorithms. Revised data (Baughcum, 1996 and Baughcum, 1997) for the scheduled inventories have been used in this report to provide a comparison of the total aviation emission forecasts from the various components. Global results for two historic years (1976 and 1984), a baseline year (1992) and a forecast year (2015) are presented. Since engine emissions are directly related to fuel usage, an overview of individual annual global aviation fuel use for each inventory component is also given in this report.
Database Resources of the BIG Data Center in 2018.
2018-01-04
The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
TWRS technical baseline database manager definition document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acree, C.D.
1997-08-13
This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.
NASA Technical Reports Server (NTRS)
VanDalsem, William R.; Livingston, Mary E.; Melton, John E.; Torres, Francisco J.; Stremel, Paul M.
1999-01-01
Continuous improvement of aerospace product development processes is a driving requirement across much of the aerospace community. As up to 90% of the cost of an aerospace product is committed during the first 10% of the development cycle, there is a strong emphasis on capturing, creating, and communicating better information (both requirements and performance) early in the product development process. The community has responded by pursuing the development of computer-based systems designed to enhance the decision-making capabilities of product development individuals and teams. Recently, the historical foci on sharing the geometrical representation and on configuration management are being augmented: Physics-based analysis tools for filling the design space database; Distributed computational resources to reduce response time and cost; Web-based technologies to relieve machine-dependence; and Artificial intelligence technologies to accelerate processes and reduce process variability. Activities such as the Advanced Design Technologies Testbed (ADTT) project at NASA Ames Research Center study the strengths and weaknesses of the technologies supporting each of these trends, as well as the overall impact of the combination of these trends on a product development event. Lessons learned and recommendations for future activities will be reported.
Moro, Teresa T; Savage, Teresa A; Gehlert, Sarah
2017-11-01
The nature and quality of end-of-life care received by adults with intellectual disabilities in out-of-home, non-institutional community agency residences in Western nations is not well understood. A range of databases and search engines were used to locate conceptual, clinical and research articles from relevant peer-reviewed journals. The authors present a literature review of the agency, social and healthcare supports that impact end-of-life care for adults with intellectual disabilities. More information is needed about where people with intellectual disabilities are living at the very end of life and where they die. The support needs of adults with intellectual disabilities will change over time, particularly at the end of life. There are some areas, such as removing barriers to providing services, staff training, partnerships between agencies and palliative care providers, and advocacy, where further research may help to improve end-of-life care for adults with intellectual disabilities. © 2017 John Wiley & Sons Ltd.
A bibliography of literature pertaining to plague (Yersinia pestis)
Ellison, Laura E.; Frank, Megan K. Eberhardt
2011-01-01
Plague is an acute and often fatal zoonotic disease caused by the bacterium Yersinia pestis. Y. pestis mainly cycles between small mammals and their fleas; however, it has the potential to infect humans and frequently causes fatalities if left untreated. It is often considered a disease of the past; however, since the late 1800s, plague's geographic range has expanded greatly, posing new threats in previously unaffected regions of the world, including the Western United States. A literature search was conducted using Internet resources and databases. The keywords chosen for the searches included plague, Yersinia pestis, management, control, wildlife, prairie dogs, fleas, North America, and mammals. Keywords were used alone or in combination with the other terms. Although this search pertains mostly to North America, citations were included from the international research community as well. Databases and search engines used included Google (http://www.google.com), Google Scholar (http://scholar.google.com), SciVerse Scopus (http://www.scopus.com), ISI Web of Knowledge (http://apps.isiknowledge.com), and the USGS Library's Digital Desktop (http://library.usgs.gov). The literature-cited sections of manuscripts obtained from keyword searches were cross-referenced to identify additional citations or gray literature missed by the Internet search engines. This Open-File Report, published as an Internet-accessible bibliography, is intended to be periodically updated with new citations or older references that may have been missed during this compilation. Hence, the authors would be grateful to receive notice of any new or old papers that users think should be included.
Gilmore, Brynne; McAuliffe, Eilish
2013-09-13
Community Health Workers are widely utilised in low- and middle-income countries and may be an important tool in reducing maternal and child mortality; however, evidence is lacking on their effectiveness for specific types of programmes, particularly programmes of a preventive nature. This review reports findings of a systematic review analysing the effectiveness of preventive interventions delivered by Community Health Workers for Maternal and Child Health in low- and middle-income countries. A search strategy was developed according to the Evidence for Policy and Practice Information and Co-ordinating Centre's (EPPI-Centre) guidelines, and systematic searching of the following databases occurred between June 8-11, 2012: CINAHL, Embase, Ovid Nursing Database, PubMed, Scopus, Web of Science and POPLINE. Google, Google Scholar and WHO search engines, as well as relevant systematic reviews and reference lists from included articles, were also searched. Inclusion criteria were: i) target beneficiaries should be pregnant or recently pregnant women and/or children under 5 and/or caregivers of children under 5; ii) interventions were required to be preventive and delivered by Community Health Workers at the household level. No exclusion criteria were stipulated for comparisons/controls or outcomes. Study characteristics of included articles were extracted using a data sheet and a peer-tested quality assessment. A narrative synthesis of included studies was compiled, with articles coded descriptively to synthesise results and draw conclusions. A total of 10,281 studies were initially identified, and through the screening process 17 articles detailing 19 studies were included in the review. Studies came from ten different countries and consisted of randomized controlled trials, cluster randomized controlled trials, before-and-after, case-control and cross-sectional studies. Overall quality of evidence was found to be moderate.
Five main preventive intervention categories emerged: malaria prevention, health education, breastfeeding promotion, essential newborn care and psychosocial support. All categories showed some evidence for the effectiveness of Community Health Workers; however, they were found to be especially effective in promoting mother-performed strategies (skin-to-skin care and exclusive breastfeeding). Community Health Workers were shown to provide a range of preventive interventions for Maternal and Child Health in low- and middle-income countries with some evidence of effective strategies, though insufficient evidence is available to draw conclusions for most interventions, and further research is needed.
Environmental Engineering Teaching Reference Community.
ERIC Educational Resources Information Center
Bell, John M.; Brenchley, David L.
Dawson, Fairfax County/U.S.A. is a hypothetical community developed by the authors as a teaching aid for undergraduate and graduate courses in environmental engineering, providing a context for problem solving and role playing. It was contrived to provide students opportunities to: (1) identify important community relationships, (2) appreciate the…
Mathematics, Engineering Science Achievement (MESA). Washington's Community and Technical Colleges
ERIC Educational Resources Information Center
Washington State Board for Community and Technical Colleges, 2014
2014-01-01
Growing Science, Technology, Education, and Mathematics (STEM) talent Washington MESA--Mathematics Engineering Science Achievement--helps under-represented community college students excel in school and ultimately earn STEM bachelor's degrees. MESA has two key programs: one for K-12 students, and the other for community and technical college…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, P.K.
A total of 48 papers were presented at the Engineering Geology and Geotechnical Engineering 30th Symposium. These papers are presented in this proceedings under the following headings: site characterization--Pocatello area; site characterization--Boise Area; site assessment; Idaho National Engineering Laboratory; geophysical methods; remediation; geotechnical engineering; and hydrogeology, northern and western Idaho. Individual papers have been processed separately for inclusion in the Energy Science and Technology Database.
Probabilistic simulation of concurrent engineering of propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Technology readiness and the available infrastructure is assessed for timely computational simulation of concurrent engineering for propulsion systems. Results for initial coupled multidisciplinary, fabrication-process, and system simulators are presented including uncertainties inherent in various facets of engineering processes. An approach is outlined for computationally formalizing the concurrent engineering process from cradle-to-grave via discipline dedicated workstations linked with a common database.
Hadadi, Noushin; Hafner, Jasmin; Shajkofci, Adrian; Zisaki, Aikaterini; Hatzimanikatis, Vassily
2016-10-21
Because the complexity of metabolism cannot be intuitively understood or analyzed, computational methods are indispensable for studying biochemistry and deepening our understanding of cellular metabolism to promote new discoveries. We used the computational framework BNICE.ch along with cheminformatic tools to assemble the whole theoretical reactome from the known metabolome through expansion of the known biochemistry presented in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We constructed the ATLAS of Biochemistry, a database of all theoretical biochemical reactions based on known biochemical principles and compounds. ATLAS includes more than 130 000 hypothetical enzymatic reactions that connect two or more KEGG metabolites through novel enzymatic reactions that have never been reported to occur in living organisms. Moreover, ATLAS reactions integrate 42% of KEGG metabolites that are not currently present in any KEGG reaction into one or more novel enzymatic reactions. The generated repository of information is organized in a Web-based database ( http://lcsb-databases.epfl.ch/atlas/ ) that allows the user to search for all possible routes from any substrate compound to any product. The resulting pathways involve known and novel enzymatic steps that may indicate unidentified enzymatic activities and provide potential targets for protein engineering. Our approach of introducing novel biochemistry into pathway design and associated databases will be important for synthetic biology and metabolic engineering.
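A route search of the kind ATLAS offers, from any substrate compound to any product, reduces to path-finding over a reaction graph. The sketch below uses a tiny invented network (the compound abbreviations and edges are illustrative, not ATLAS data):

```python
from collections import deque

# Toy reaction network: substrate -> products reachable in one
# (known or hypothetical) enzymatic step.
reactions = {
    "glucose": ["g6p"],
    "g6p": ["f6p", "6pg"],
    "f6p": ["fbp"],
    "fbp": ["pyruvate"],
}

def find_route(start, goal):
    """Breadth-first search for a shortest substrate-to-product route."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for product in reactions.get(path[-1], []):
            if product not in seen:
                seen.add(product)
                queue.append(path + [product])
    return None  # no route exists

print(find_route("glucose", "pyruvate"))
# → ['glucose', 'g6p', 'f6p', 'fbp', 'pyruvate']
```

Each edge on a returned path corresponds to an enzymatic step, so a route through a "novel" edge points at an enzymatic activity that has not yet been observed, the kind of protein-engineering target the abstract mentions.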
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high-resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100KB for browser-viewable rendered images to 1GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch mode.
Community health workers and mobile technology: a systematic review of the literature.
Braun, Rebecca; Catalani, Caricia; Wimbush, Julian; Israelski, Dennis
2013-01-01
In low-resource settings, community health workers are frontline providers who shoulder the health service delivery burden. Increasingly, mobile technologies are developed, tested, and deployed with community health workers to facilitate tasks and improve outcomes. We reviewed the evidence for the use of mobile technology by community health workers to identify opportunities and challenges for strengthening health systems in resource-constrained settings. We conducted a systematic review of peer-reviewed literature from health, medical, social science, and engineering databases, using PRISMA guidelines. We identified a total of 25 unique full-text research articles on community health workers and their use of mobile technology for the delivery of health services. Community health workers have used mobile tools to advance a broad range of health aims throughout the globe, particularly maternal and child health, HIV/AIDS, and sexual and reproductive health. Most commonly, community health workers use mobile technology to collect field-based health data, receive alerts and reminders, facilitate health education sessions, and conduct person-to-person communication. Programmatic efforts to strengthen health service delivery focus on improving adherence to standards and guidelines, community education and training, and programmatic leadership and management practices. Those studies that evaluated program outcomes provided some evidence that mobile tools help community health workers to improve the quality of care provided, efficiency of services, and capacity for program monitoring. Evidence suggests mobile technology presents promising opportunities to improve the range and quality of services provided by community health workers. 
Small-scale efforts, pilot projects, and preliminary descriptive studies are increasing, and there is a trend toward using feasible and acceptable interventions that lead to positive program outcomes through operational improvements and rigorous study designs. Programmatic and scientific gaps will need to be addressed by global leaders as they advance the use and assessment of mobile technology tools for community health workers.
Sandia Engineering Analysis Code Access System v. 2.0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjaardema, Gregory D.
The Sandia Engineering Analysis Code Access System (SEACAS) is a suite of preprocessing, post processing, translation, visualization, and utility applications supporting finite element analysis software using the Exodus database file format.
An Improved Forensic Science Information Search.
Teitelbaum, J
2015-01-01
Although thousands of search engines and databases are available online, finding answers to specific forensic science questions can be a challenge even to experienced Internet users. Because there is no central repository for forensic science information, and because of the sheer number of disciplines under the forensic science umbrella, forensic scientists are often unable to locate material that is relevant to their needs. The author contends that using six publicly accessible search engines and databases can produce high-quality search results. The six resources are Google, PubMed, Google Scholar, Google Books, WorldCat, and the National Criminal Justice Reference Service. Carefully selected keywords and keyword combinations, designating a keyword phrase so that the search engine will search on the phrase and not individual keywords, and prompting search engines to retrieve PDF files are among the techniques discussed. Copyright © 2015 Central Police University.
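The keyword techniques described, quoting a phrase so the engine searches it as a unit and prompting for PDF files, can be captured in a small helper; the function and the example query are illustrative, and the filetype: operator is Google-style syntax that other engines may not share.

```python
# Build a search-engine query string from a phrase plus extra keywords.
def build_query(phrase, extra_keywords=(), pdf_only=False):
    parts = ['"%s"' % phrase]          # quoted phrase, searched as a unit
    parts.extend(extra_keywords)       # individual keywords
    if pdf_only:
        parts.append("filetype:pdf")   # restrict results to PDF documents
    return " ".join(parts)

q = build_query("gunshot residue", ["analysis"], pdf_only=True)
print(q)  # "gunshot residue" analysis filetype:pdf
```

The same string can then be pasted into each of the six resources in turn, since they all accept plain keyword queries even though their advanced operators differ.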
A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Piascik, Robert S.; Newman, James C., Jr.
1999-01-01
An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.
A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints
NASA Technical Reports Server (NTRS)
Harris, C. E.; Piascik, R. S.; Newman, J. C., Jr.
2000-01-01
An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.
Lenz, Bernard N.
1997-01-01
An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. This database contains data dating back to 1984, including data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report examines the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.
NASA Astrophysics Data System (ADS)
Deshotel, M.; Habib, E. H.
2016-12-01
There is an increasing desire by the water education community to use emerging research resources and technological advances in order to reform current educational practices. Recent years have witnessed some exemplary developments that tap into emerging hydrologic modeling and data-sharing resources, innovative digital and visualization technologies, and field experiences. However, such attempts remain largely at the scale of individual efforts and fall short of providing scalable and sustainable solutions. This can be attributed to a number of reasons, such as inadequate experience with modeling and data-based educational developments, lack of faculty time to invest in further development, and lack of resources to further support the projects. Another important but often-overlooked reason is the lack of adequate insight into the actual needs of the end-users of such developments. Such insight is critical to informing how to scale and sustain educational innovations. In this presentation, we share with the hydrologic community experiences gathered from an ongoing experiment in which the authors engaged in a hypothesis-driven, customer-discovery process to inform the scalability and sustainability of educational innovations in the field of hydrology and water resources education. The experiment is part of a program called Innovation Corps for Learning (I-Corps L). This program follows a business-model approach in which a value proposition is initially formulated for the educational innovation. The authors then engaged in a hypothesis-validation process through an intense series of customer interviews with different segments of potential end users, including junior/senior students, student interns, and hydrology professors. The authors also sought insight from engineering firms by interviewing junior engineers and their supervisors to gather feedback on the preparedness of graduating engineers entering the workforce in the area of water resources.
Exploring the large landscape of potential users is critical in formulating a user-driven approach that can inform the innovation development. The presentation shares the results of this experiment and the insight gained and discusses how such information can inform the community on sustaining and scaling hydrology educational developments.
2014-06-01
...and Coastal Data Information Program (CDIP). This User's Guide includes step-by-step instructions for accessing the GLOS/GLCFS database via WaveNet...access, processing and analysis tool; part 3 - CDIP database. ERDC/CHL CHETN-xx-14. Vicksburg, MS: U.S. Army Engineer Research and Development Center
The Wannabee Culture: Why No-One Does What They Used To.
ERIC Educational Resources Information Center
Dixon, Anne
1998-01-01
Electronic publishing has been an agent for change not just in how one publishes but in what one publishes. Describes HyperCite, a joint project with the Institution of Electrical Engineers (IEE) to create the INSPEC database. Highlights include the database; the research phase (cross-database searching and new interface); and what and how much was…
NASA Technical Reports Server (NTRS)
Cooper, Beth A.
1997-01-01
Workplace and environmental noise issues at NASA Lewis Research Center are effectively managed via a three-part program that addresses hearing conservation, community noise control, and noise control engineering. The Lewis Research Center Noise Exposure Management Program seeks to limit employee noise exposure and maintain community acceptance for critical research while actively pursuing engineered controls for noise generated by more than 100 separate research facilities and the associated services required for their operation.
Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn
2016-04-01
Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of the palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, a working group within the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention, and subsequent funding, when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties, all described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all the possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates, and benefits from, a database. We conclude that funding sources should consider addressing not only the creation of original data in specific research-question-oriented projects, but also the possibility of using part of the funding for IT-related and database-creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks, ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and ways to provide adequate security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database, used for on-line monitoring of a threshold strain value over an exemplary area of interest on the engineering structure, are presented and discussed.
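The operator-ordered threshold checks described above amount to scanning stored readings and raising an alert when a limit is exceeded. A minimal server-side sketch; the sensor IDs, threshold value, and record layout are all invented for illustration:

```python
from datetime import datetime, timezone

STRAIN_THRESHOLD = 1.5e-3  # hypothetical alert threshold (dimensionless strain)

def check_readings(readings, threshold=STRAIN_THRESHOLD):
    """Return alert records for readings that exceed the threshold.

    `readings` is a list of (timestamp, sensor_id, strain) tuples, as the
    controlling server might fetch them from the centralized database.
    """
    alerts = []
    for ts, sensor_id, strain in readings:
        if strain > threshold:
            alerts.append({"time": ts, "sensor": sensor_id, "strain": strain})
    return alerts

readings = [
    (datetime(2009, 6, 1, tzinfo=timezone.utc), "DIC-01", 0.8e-3),
    (datetime(2009, 6, 2, tzinfo=timezone.utc), "DIC-01", 1.9e-3),  # exceeds threshold
]
alerts = check_readings(readings)
```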
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.
Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John
2012-12-05
For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore, solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
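The map/reduce decomposition that makes this kind of search scale can be sketched in plain Python: the map step emits candidate (spectrum, peptide) scores, the reduce step keeps the best candidate per spectrum. The toy fragment model and scorer below merely stand in for the real K-score algorithm, and nothing here reflects Hydra's actual code:

```python
def fragment_masses(peptide):
    # Toy fragment model: cumulative residue counts stand in for b-ion masses.
    return set(range(1, len(peptide) + 1))

def score(peaks, peptide):
    # Placeholder scorer standing in for the real K-score algorithm.
    return len(set(peaks) & fragment_masses(peptide))

def map_phase(spectra, peptide_db, tolerance=0.5):
    """Map step: for each spectrum, emit (spectrum_id, (peptide, score))
    for every peptide whose mass lies within the precursor tolerance."""
    for spec_id, precursor_mass, peaks in spectra:
        for peptide, mass in peptide_db:
            if abs(mass - precursor_mass) <= tolerance:
                yield spec_id, (peptide, score(peaks, peptide))

def reduce_phase(pairs):
    """Reduce step: keep the best-scoring candidate per spectrum."""
    best = {}
    for spec_id, (peptide, s) in pairs:
        if spec_id not in best or s > best[spec_id][1]:
            best[spec_id] = (peptide, s)
    return best

spectra = [("s1", 800.2, {1, 2, 3, 9})]            # (id, precursor mass, peak set)
peptide_db = [("PEPTIDE", 800.0), ("SEQ", 350.0)]  # (sequence, mass)
best = reduce_phase(map_phase(spectra, peptide_db))
```

Because the map step is independent per spectrum and the reduce step groups by spectrum ID, the workload partitions naturally across a Hadoop cluster, which is what lets throughput grow with the number of processors.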
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karp, Peter D.
Pathway Tools is a systems-biology software package written by SRI International (SRI) that produces Pathway/Genome Databases (PGDBs) for organisms with a sequenced genome. Pathway Tools also provides a wide range of capabilities for analyzing predicted metabolic networks and user-generated omics data. More than 5,000 academic, industrial, and government groups have licensed Pathway Tools. This user community includes researchers at all three DOE bioenergy centers, as well as academic and industrial metabolic engineering (ME) groups. An integral part of the Pathway Tools software is MetaCyc, a large, multiorganism database of metabolic pathways and enzymes that SRI and its academic collaborators manually curate. This project included two main goals: I. Enhance the MetaCyc content of bioenergy-related enzymes and pathways. II. Develop computational tools for engineering metabolic pathways that satisfy specified design goals, in particular for bioenergy-related pathways. In part I, SRI proposed to significantly expand the coverage of bioenergy-related metabolic information in MetaCyc, followed by the generation of organism-specific PGDBs for all energy-relevant organisms sequenced at the DOE Joint Genome Institute (JGI). Part I objectives included: 1: Expand the content of MetaCyc to include bioenergy-related enzymes and pathways. 2: Enhance the Pathway Tools software to enable display of complex polymer degradation processes. 3: Create new PGDBs for the energy-related organisms sequenced by JGI, update existing PGDBs with new MetaCyc content, and make these data available to JBEI via the BioCyc website. In part II, SRI proposed to develop an efficient computational tool for the engineering of metabolic pathways.
Part II objectives included: 4: Develop computational tools for generating metabolic pathways that satisfy specified design goals, enabling users to specify parameters such as starting and ending compounds, and preferred or disallowed intermediate compounds. The pathways were to be generated using metabolic reactions from a reference database (DB). 5: Develop computational tools for ranking the pathways generated in objective (4) according to their optimality. The ranking criteria include stoichiometric yield, the number and cost of additional inputs and the cofactor compounds required by the pathway, pathway length, and pathway energetics. 6: Develop tools for visualizing generated pathways to facilitate the evaluation of a large space of generated pathways.
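Objective 4, generating pathways between user-specified compounds from a reference reaction database while honoring disallowed intermediates, is essentially a graph search. A minimal breadth-first sketch over an invented reaction table, ranking only by pathway length (a crude stand-in for the richer criteria in objective 5):

```python
from collections import deque

# Hypothetical reference reaction DB: substrate -> list of (product, reaction_id).
REACTIONS = {
    "glucose": [("pyruvate", "R1"), ("glucose-6-P", "R2")],
    "pyruvate": [("ethanol", "R3"), ("acetyl-CoA", "R4")],
    "glucose-6-P": [("pyruvate", "R5")],
}

def find_pathways(start, goal, disallowed=frozenset(), max_len=6):
    """Enumerate reaction pathways from `start` to `goal` by breadth-first
    search, skipping disallowed intermediate compounds."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        compound, path = queue.popleft()
        if compound == goal:
            paths.append(path)
            continue
        if len(path) >= max_len or compound in disallowed:
            continue
        for product, rxn in REACTIONS.get(compound, []):
            queue.append((product, path + [rxn]))
    return sorted(paths, key=len)  # shortest pathways first

pathways = find_pathways("glucose", "ethanol")
```

A real tool would rank candidates by stoichiometric yield, cofactor cost, and energetics rather than length alone, but the enumeration skeleton is the same.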
Efficient GO2/GH2 Injector Design: A NASA, Industry and University Cooperative Effort
NASA Technical Reports Server (NTRS)
Tucker, P. K.; Klem, M. D.; Fisher, S. C.; Santoro, R. J.
1997-01-01
Developing new propulsion components in the face of shrinking budgets presents a significant challenge. The technical, schedule and funding issues common to any design/development program are complicated by the ramifications of the continuing decrease in funding for the aerospace industry. As a result, new working arrangements are evolving in the rocket industry. This paper documents a successful NASA, industry, and university cooperative effort to design efficient high performance GO2/GH2 rocket injector elements in the current budget environment. The NASA Reusable Launch Vehicle (RLV) Program initially consisted of three vehicle/engine concepts targeted at achieving single stage to orbit. One of the Rocketdyne propulsion concepts, the RS 2100 engine, used a full-flow staged-combustion cycle. Therefore, the RS 2100 main injector would combust GO2/GH2 propellants. Early in the design phase, but after budget levels and contractual arrangements had been set, the limitations of the current gas/gas injector database were identified. Most of the relevant information was at least twenty years old. Designing high performance injectors to meet the RS 2100 requirements would require the database to be updated and significantly enhanced. However, there was no funding available to address the need for more data. NASA proposed a teaming arrangement to acquire the updated information without additional funds from the RLV Program. A determination was made of the types and amounts of data needed, along with the test facilities capable of meeting the data requirements, budget constraints, and schedule. After several iterations, a program was finalized and a team established to satisfy the program goals. The Gas/Gas Injector Technology (GGIT) Program had the overall goal of increasing the ability of the rocket engine community to design efficient, high-performance, durable gas/gas injectors relevant to RLV requirements.
First, the program would provide Rocketdyne with data on preliminary gas/gas injector designs that would enable discrimination among candidate injector designs. Second, the program would enhance the national gas/gas database by obtaining high-quality data that increase the understanding of gas/gas injector physics and are suitable for computational fluid dynamics (CFD) code validation. Third, the program would validate CFD codes for future gas/gas injector design in the RLV program.
The Impact of a Cohort Model Learning Community on First-Year Engineering Student Success
ERIC Educational Resources Information Center
Doolen, Toni L.; Biddlecombe, Erin
2014-01-01
This study investigated the effect of cohort participation in a learning community and collaborative learning techniques on the success of first-year engineering students. Student success was measured as gains in knowledge, skills, and attitudes, student engagement, and persistence in engineering. The study group was composed of students…
A Survey of Computer Use in Associate Degree Programs in Engineering Technology.
ERIC Educational Resources Information Center
Cunningham, Pearley
As part of its annual program review process, the Department of Engineering Technology at the Community College of Allegheny County, in Pennsylvania, conducted a study of computer usage in community college engineering technology programs across the nation. Specifically, the study sought to determine the types of software, Internet access, average…
Pathways to Engineering: The Validation Experiences of Transfer Students
ERIC Educational Resources Information Center
Zhang, Yi; Ozuna, Taryn
2015-01-01
Community college engineering transfer students are a critical student population of engineering degree recipients and technical workforce in the United States. Focusing on this group of students, we adopted Rendón's (1994) validation theory to explore the students' experiences in community colleges prior to transferring to a four-year…
NASA Technical Reports Server (NTRS)
Topousis, Daria E.; Murphy, Keri; Robinson, Greg
2008-01-01
In 2004, NASA faced major knowledge sharing challenges due to geographically isolated field centers that inhibited personnel from sharing experiences and ideas. Mission failures and new directions for the agency demanded better collaborative tools. In addition, with the push to send astronauts back to the moon and to Mars, NASA recognized that systems engineering would have to improve across the agency. Of the ten field centers, seven had not built a spacecraft in over 30 years, and had lost systems engineering expertise. The Systems Engineering Community of Practice came together to capture the knowledge of its members using the suite of collaborative tools provided by the NASA Engineering Network (NEN). The NEN provided a secure collaboration space for over 60 practitioners across the agency to assemble and review a NASA systems engineering handbook. Once the handbook was complete, they used the open community area to disseminate it. This case study explores both the technology and the social networking that made the community possible, describes technological approaches that facilitated rapid setup and low maintenance, provides best practices that other organizations could adopt, and discusses the vision for how this community will continue to collaborate across the field centers to benefit the agency as it continues exploring the solar system.
Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.
2017-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application to considerably reduce the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical, and the community's evolving approaches to this scientific workflow, from sampling to taking measurements to multiple levels of interpretations, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time consuming and often resulted in publishing only the highly derived interpretations. The new open-source (https://github.com/earthref/MagIC) application provides a flexible upload tool integrated with the data model to easily create a validated contribution and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. A FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external services integrations. The container hosts an isomorphic Meteor JavaScript application, MongoDB database and ElasticSearch search engine. Multiple containers can be configured as microservices to serve portions of the application or rely on externally hosted MongoDB, ElasticSearch, or third-party services to efficiently scale computational demands. 
FIESTA is particularly well suited for many Earth Science disciplines with its flexible data model, mapping, account management, upload tool to private workspaces, reference metadata, image galleries, full text searches and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.
2017-12-01
Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost of collecting or creating physical samples and the often destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. 
Visitors can explore the MagIC Database through maps and image or plot galleries or search and filter the raw measurements and their derived hierarchy of analytical interpretations.
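The combination of full-text search and range filters described above can be illustrated with a tiny in-memory stand-in for the ElasticSearch-backed interface. The record fields (and the records themselves) are invented for illustration:

```python
def search(contributions, text=None, min_age_ma=None, max_age_ma=None):
    """Toy full-text + range-filter search: case-insensitive substring
    match on the abstract plus an age filter in millions of years."""
    hits = []
    for c in contributions:
        if text and text.lower() not in c["abstract"].lower():
            continue
        if min_age_ma is not None and c["age_ma"] < min_age_ma:
            continue
        if max_age_ma is not None and c["age_ma"] > max_age_ma:
            continue
        hits.append(c["id"])
    return hits

contributions = [
    {"id": 101, "abstract": "Paleointensity of Cretaceous basalts", "age_ma": 80.0},
    {"id": 102, "abstract": "Holocene lake sediment record", "age_ma": 0.005},
]
```

In the real application, the text match is handled by an ElasticSearch index and the filters are query clauses, but the composition of "match this text AND fall in this range" is the same idea.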
Lynx: a database and knowledge extraction engine for integrative medicine.
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.
ERIC Educational Resources Information Center
Gilbert, Dorie J.; Held, Mary Lehman; Ellzey, Janet L.; Bailey, William T.; Young, Laurie B.
2015-01-01
This article reviews the literature on challenges faced by engineering faculty in educating their students on community-engaged, sustainable technical solutions in developing countries. We review a number of approaches to increasing teaching modules on social and community components of international development education, from adding capstone…
The BioMart community portal: an innovative alternative to large, centralized data repositories
Smedley, Damian; Haider, Syed; Durinck, Steffen; Pandini, Luca; Provero, Paolo; Allen, James; Arnaiz, Olivier; Awedh, Mohammad Hamza; Baldock, Richard; Barbiera, Giulia; Bardou, Philippe; Beck, Tim; Blake, Andrew; Bonierbale, Merideth; Brookes, Anthony J.; Bucci, Gabriele; Buetti, Iwan; Burge, Sarah; Cabau, Cédric; Carlson, Joseph W.; Chelala, Claude; Chrysostomou, Charalambos; Cittaro, Davide; Collin, Olivier; Cordova, Raul; Cutts, Rosalind J.; Dassi, Erik; Genova, Alex Di; Djari, Anis; Esposito, Anthony; Estrella, Heather; Eyras, Eduardo; Fernandez-Banet, Julio; Forbes, Simon; Free, Robert C.; Fujisawa, Takatomo; Gadaleta, Emanuela; Garcia-Manteiga, Jose M.; Goodstein, David; Gray, Kristian; Guerra-Assunção, José Afonso; Haggarty, Bernard; Han, Dong-Jin; Han, Byung Woo; Harris, Todd; Harshbarger, Jayson; Hastings, Robert K.; Hayes, Richard D.; Hoede, Claire; Hu, Shen; Hu, Zhi-Liang; Hutchins, Lucie; Kan, Zhengyan; Kawaji, Hideya; Keliet, Aminah; Kerhornou, Arnaud; Kim, Sunghoon; Kinsella, Rhoda; Klopp, Christophe; Kong, Lei; Lawson, Daniel; Lazarevic, Dejan; Lee, Ji-Hyun; Letellier, Thomas; Li, Chuan-Yun; Lio, Pietro; Liu, Chu-Jun; Luo, Jie; Maass, Alejandro; Mariette, Jerome; Maurel, Thomas; Merella, Stefania; Mohamed, Azza Mostafa; Moreews, Francois; Nabihoudine, Ibounyamine; Ndegwa, Nelson; Noirot, Céline; Perez-Llamas, Cristian; Primig, Michael; Quattrone, Alessandro; Quesneville, Hadi; Rambaldi, Davide; Reecy, James; Riba, Michela; Rosanoff, Steven; Saddiq, Amna Ali; Salas, Elisa; Sallou, Olivier; Shepherd, Rebecca; Simon, Reinhard; Sperling, Linda; Spooner, William; Staines, Daniel M.; Steinbach, Delphine; Stone, Kevin; Stupka, Elia; Teague, Jon W.; Dayem Ullah, Abu Z.; Wang, Jun; Ware, Doreen; Wong-Erasmus, Marie; Youens-Clark, Ken; Zadissa, Amonida; Zhang, Shi-Jian; Kasprzyk, Arek
2015-01-01
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biological datasets spanning genomics, proteomics, model organisms, cancer data, ontology information and more. All resources available through the portal are independently administered and funded by their host organizations. The BioMart data federation technology provides a unified interface to all the available data. The latest version of the portal comes with many new databases that have been created by our ever-growing community. It also comes with better support and extensibility for data analysis and visualization tools. A new addition to our toolbox, the enrichment analysis tool, is now accessible through graphical and web service interfaces. The BioMart community portal averages over one million requests per day. Building on this level of service and the wealth of information that has become available, the BioMart Community Portal has introduced a new, more scalable and cheaper alternative to the large data stores maintained by specialized organizations. PMID:25897122
BioMart Central Portal: an open database network for the biological community.
Guberman, Jonathan M; Ai, J; Arnaiz, O; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J; Di Génova, A; Forbes, Simon; Fujisawa, T; Gadaleta, E; Goodstein, D M; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D T; Wong-Erasmus, Marie; Yao, L; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek
2011-01-01
BioMart Central Portal is a first of its kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities.
An ontology-based search engine for protein-protein interactions
2010-01-01
Background Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. Results We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Conclusion Representing the biological relations of proteins and their GO annotations by modified Gödel numbers enables a search engine to efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology. PMID:20122195
An ontology-based search engine for protein-protein interactions.
Park, Byungkyu; Han, Kyungsook
2010-01-18
Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers enables a search engine to efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
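The core idea, assigning each GO term a distinct prime so that "does this protein satisfy the query?" reduces to a divisibility test on products of primes, can be demonstrated in a few lines. This sketch uses invented GO term IDs and omits the ontology hierarchy handling (where a more specific term satisfies a query for a general one) that the paper's modified Gödel numbers support:

```python
from itertools import count

def primes():
    """Yield primes by trial division (sufficient for a small demo)."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# Assign each GO term a distinct prime (a hypothetical mini-ontology).
go_terms = ["GO:0003824", "GO:0005515", "GO:0016301"]
gen = primes()
term_prime = {t: next(gen) for t in go_terms}  # maps to 2, 3, 5

def encode(annotations):
    """Gödel-style encoding: the product of the primes of the GO terms."""
    code = 1
    for t in annotations:
        code *= term_prime[t]
    return code

def satisfies(code, required_terms):
    """A protein satisfies a query iff the product of the required term
    primes divides its code, i.e. its factorization contains them all."""
    return code % encode(required_terms) == 0

protein_code = encode(["GO:0003824", "GO:0016301"])  # 2 * 5 = 10
```

By unique prime factorization, the divisibility check is exact: no false matches, and conjunctive queries cost one modulo operation instead of set intersections.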
Decision making in family medicine
Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France
2013-01-01
Objective To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Design Randomized trial. Setting Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Participants Fifteen second-year family medicine residents. Intervention Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. Main outcome measures The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine’s effect on the decision-making process in clinical practice. Results Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). 
For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7.6) minutes with InfoClinique and 22.3 (7.8) minutes with the Trip database (P = .30). Participants’ perceptions of each engine’s effect on the decision-making process were very positive and similar for both search engines. Conclusion Family medicine residents’ ability to provide correct answers to clinical questions increased dramatically and similarly with the use of both InfoClinique and the Trip database. These tools have strong potential to increase the quality of medical care. PMID:24130286
Tags Extraction from Spatial Documents in Search Engines
NASA Astrophysics Data System (ADS)
Borhaninejad, S.; Hakimpour, F.; Hamzei, E.
2015-12-01
Nowadays the selective access to information on the Web is provided by search engines, but in cases where the data include spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between the system and the user. We have implemented this system as a pilot on an application server as a simulation of the Web. As a spatial search engine, our system provides search capability across GML documents and thus takes an important step toward improving the efficiency of search engines.
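The crawler's extraction step, pulling indexable tags and coordinates out of a GML document, can be sketched with Python's standard XML parser. The sample GML fragment and field names are invented for illustration:

```python
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"

sample = """<Feature xmlns:gml="http://www.opengis.net/gml">
  <name>City Park</name>
  <gml:Point><gml:pos>51.0 -0.1</gml:pos></gml:Point>
</Feature>"""

def extract_tags(gml_text):
    """Crawler-side sketch: pull the feature name and coordinates out of a
    GML fragment, ready for indexing in the database component."""
    root = ET.fromstring(gml_text)
    name = root.findtext("name")
    pos = root.findtext(f"{{{GML_NS}}}Point/{{{GML_NS}}}pos")
    lat, lon = (float(x) for x in pos.split())
    return {"name": name, "lat": lat, "lon": lon}

record = extract_tags(sample)
```

A real crawler would also handle multiple geometry types and coordinate reference systems, but the name/coordinate pairs extracted here are exactly what the database component would index for spatial queries.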
Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research
ERIC Educational Resources Information Center
Chanson, Hubert
2007-01-01
For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution" that has included the introduction of personal computer, the development of email and world wide web, and broadband Internet connections at home. Herein the writer compares…
ERIC Educational Resources Information Center
Tolley, Patricia Ann Separ
2009-01-01
The purpose of this correlational study was to examine the effects of a residential learning community and enrollment in an introductory engineering course on engineering students' perceptions of the freshman year experience, academic performance, and persistence. The sample included students enrolled in a large, urban, public, research university…
Tu, Shin-Ping; Feng, Sherry; Storch, Richard; Yip, Mei-Po; Sohng, HeeYon; Fu, Mingang; Chun, Alan
2012-11-01
Impressive results in patient care and cost reduction have increased the demand for systems-engineering methodologies in large health care systems. This Report from the Field describes the feasibility of applying systems-engineering techniques at a community health center currently lacking the dedicated expertise and resources to perform these activities.
EarthRef.org: Exploring aspects of a Cyber Infrastructure in Earth Science and Education
NASA Astrophysics Data System (ADS)
Staudigel, H.; Koppers, A.; Tauxe, L.; Constable, C.; Helly, J.
2004-12-01
EarthRef.org is the common host and (co-)developer of a range of earth science databases and IT resources providing a test bed for a Cyberinfrastructure in Earth Science and Education (CIESE). EarthRef.org database efforts include in particular the Geochemical Earth Reference Model (GERM), the Magnetics Information Consortium (MagIC), the Educational Resources for Earth Science Education (ERESE) project, the Seamount Catalog, the Mid-Ocean Ridge Catalog, the Radio-Isotope Geochronology (RiG) initiative for CHRONOS, and the Microbial Observatory for Fe oxidizing microbes on Loihi Seamount (FeMO; the most recent development). These diverse databases are developed under a single database umbrella and webserver at the San Diego Supercomputing Center. All the databases have similar structures, with consistent metadata concepts, a common database layout, and automated upload wizards. Shared resources include supporting databases like an address book, a reference/publication catalog, and a common digital archive, making database development and maintenance cost-effective while guaranteeing interoperability. The EarthRef.org CIESE provides a common umbrella for synthesis information as well as sample-based data, and it bridges the gap between science and science education in middle and high schools, validating the potential for a system-wide data infrastructure in a CIESE. EarthRef.org experiences have shown that effective communication with the respective communities is a key part of a successful CIESE, facilitating both utility and community buy-in. GERM has been particularly successful at developing a metadata scheme for geochemistry and in the development of a new electronic journal (G-cubed) that has made much progress in data publication and linkages between journals and community databases.
GERM has also worked, through editors and publishers, toward interfacing databases with the publication process, to accomplish a more scholarly and database-friendly data publication environment, and to interface with the respective science communities. MagIC has held several workshops that have resulted in an integrated data archival environment using metadata that are interchangeable with the geochemical metadata. MagIC archives a wide array of paleomagnetic and rock magnetic directional, intensity and magnetic property data, as well as integrating computational tools. ERESE brought together librarians, teachers, and scientists to create an educational environment that supports inquiry-driven education and the use of science data. Experiences in EarthRef.org demonstrate the feasibility of an effective, community-wide CIESE for data publication, archival and modeling, as well as outreach to the educational community.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
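The target-decoy logic behind the false-discovery-rate claim can be illustrated with a toy sketch. This is not MSblender's actual probability model (which jointly models correlated engine scores); it only shows how an FDR is read off a pooled, score-sorted PSM list:

```python
def combined_fdr(psms):
    """Estimate FDR by the target-decoy approach over a pooled PSM list.

    psms: iterable of (combined_score, is_decoy) pairs, where combined_score
    is any score already merged across search engines.
    Returns (score, fdr) pairs in descending score order; the FDR at each
    rank is the decoy count divided by the target count so far.
    """
    targets = decoys = 0
    out = []
    for score, is_decoy in sorted(psms, key=lambda p: -p[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        out.append((score, decoys / max(targets, 1)))
    return out
```

A PSM list is then thresholded at the score where the estimated FDR first exceeds the chosen level (e.g. 1%), which is how "more PSMs at the same FDR" comparisons between engines are made.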
WaveNet: A Web-Based Metocean Data Access, Processing, and Analysis Tool. Part 3 - CDIP Database
2014-06-01
By Zeki Demirbilek, Lihwa Lin, and Derek Wilson. PURPOSE: This Coastal and Hydraulics Engineering...Technical Note (CHETN) describes coupling of the Coastal Data Information Program (CDIP) database to WaveNet, the first module of MetOcnDat (Meteorological...provides a step-by-step procedure to access, process, and analyze wave and wind data from the CDIP database. BACKGROUND: WaveNet addresses a basic
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide quick visual feedback, detailed inspection data, and determinations to the mission management team. The system consists of a visual Web page interface, an SQL database, and a graphical image generator. These combine to allow a user to quickly ascertain the status of the inspection process and the current determination for any problem zones. TIIMS allows inspection engineers to enter their determinations into a database and to link pertinent images and video to those database entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, the graphical image generation program creates images of the current status of the orbiter, coloring zones and tiles based on a predefined key code. These images are then displayed on a Web page using customized JavaScript to display the appropriate zone of the orbiter based on the location of the user's cursor. The close-up graphic and database entry for a particular zone can then be seen by selecting that zone. This page contains links into the database to access the images the inspection engineers used when making the determinations entered into the database. Status for the inspection zones changes as determinations are refined, shown by the appropriate color code.
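The zone-coloring step might look like the following minimal sketch. The status codes and colors here are hypothetical, since the real TIPS/TIIMS key code is not described in the abstract:

```python
# Hypothetical determination codes and display colors; the actual TIIMS
# key code is predefined by the inspection team and not public.
STATUS_COLORS = {
    "cleared": "green",
    "under_review": "yellow",
    "problem": "red",
    "no_data": "gray",
}

def zone_colors(determinations):
    """Map each zone's inspection determination (as queried from the SQL
    database) to a display color, treating unknown statuses as 'no_data'."""
    return {
        zone: STATUS_COLORS.get(status, STATUS_COLORS["no_data"])
        for zone, status in determinations.items()
    }
```

The image generator would then fill each zone polygon with its assigned color before the Web page overlays the cursor-driven zoom behavior.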
Making your database available through Wikipedia: the pros and cons.
Finn, Robert D; Gardner, Paul P; Bateman, Alex
2012-01-01
Wikipedia, the online encyclopedia, is the most famous wiki in use today. It contains over 3.7 million pages of content, with many pages written on scientific subjects that include peer-reviewed citations, yet are written in an accessible manner and generally reflect the consensus opinion of the community. In this, the 19th Annual Database Issue of Nucleic Acids Research, there are 11 articles that describe the use of a wiki in relation to a biological database. In this commentary, we discuss how biological databases can be integrated with Wikipedia, thereby utilising the pre-existing infrastructure, tools and, above all, large community of authors (or Wikipedians). The limitations to the content that can be included in Wikipedia are highlighted, with examples drawn from articles found in this issue and other wiki-based resources, indicating why other wiki solutions are necessary. We discuss the merits of using open wikis, like Wikipedia, versus other models, with particular reference to potential vandalism. Finally, we raise the question of the future role of dedicated database biocurators in the context of the thousands of crowdsourced community annotations that are now being stored in wikis.
Teaching English Engineering Terminology in a Hypermedia Environment.
ERIC Educational Resources Information Center
Stamison-Atmatzidi, M.; And Others
1995-01-01
Discusses a hypermedia prototype system constituting a hypermedia dictionary environment and a database of field-specific reading passages with related exercises, for utilization in the teaching of English engineering terminology in foreign language environments. (eight references) (CK)
NASA Technical Reports Server (NTRS)
Cooper, Beth A.
1995-01-01
NASA Lewis Research Center is home to more than 100 experimental research testing facilities and laboratories, including large wind tunnels and engine test cells, which in combination create a varied and complex noise environment. Much of the equipment was manufactured prior to the enactment of legislation limiting product noise emissions or occupational noise exposure. Routine facility maintenance and associated construction also contribute to a noise exposure management responsibility equal in magnitude and scope to that of several small industrial companies. The Noise Program, centrally managed within the Office of Environmental Programs at LRC, maintains overall responsibility for hearing conservation, community noise control, and acoustical and noise control engineering. Centralized management of the LRC Noise Program facilitates the timely development and implementation of engineered noise control solutions for problems identified via either the Hearing Conservation or Community Noise Program. The key element of the Lewis Research Center Noise Program, Acoustical and Noise Control Engineering Services, is focused on developing solutions that permanently reduce employee and community noise exposure and maximize research productivity by reducing or eliminating administrative and operational controls and by improving the safety and comfort of the work environment. The Hearing Conservation Program provides noise exposure assessment, medical monitoring, and training for civil servant and contractor employees. The Community Noise Program aims to maintain the support of LRC's neighboring communities while enabling necessary research operations to accomplish their programmatic goals. Noise control engineering capability resides within the Noise Program, and noise control engineering, based on specific exposure limits, is a fundamental consideration throughout the design phase of new test facilities, labs, and office buildings.
In summary, the Noise Program addresses hearing conservation, community noise control, and acoustical and noise control engineering.
The radiopurity.org material database
NASA Astrophysics Data System (ADS)
Cooley, J.; Loach, J. C.; Poon, A. W. P.
2018-01-01
The database at http://www.radiopurity.org is the world's largest public database of material radiopurity measurements. These measurements are used by members of the low-background physics community to build experiments that search for neutrinos, neutrinoless double-beta decay, WIMP dark matter, and other exciting physics. This paper summarizes the current status and future plans of this database.
THE DRINKING WATER TREATABILITY DATABASE (Slides)
The Drinking Water Treatability Database (TDB) assembles referenced data on the control of contaminants in drinking water, housed on an interactive, publicly-available, USEPA web site (www.epa.gov/tdb). The TDB is of use to drinking water utilities, treatment process design engin...
Moran, W. P.; Messick, C.; Guerette, P.; Anderson, R.; Bradham, D.; Wofford, J. L.; Velez, R.
1994-01-01
Primary care physicians provide longitudinal care for chronically ill individuals in concert with many other community-based disciplines. The care management of these individuals requires data not traditionally collected during the care of well, or acutely ill individuals. These data not only concern the patient, in the form of patient functional status, mental status and affect, but also pertain to the caregiver, home environment, and the formal community health and social service system. The goal of the Community Care Coordination Network is to build a primary care-based information system to share patient data and communicate patient related information among the community-based multi-disciplinary teams. One objective of the Community Care Coordination Network is to create a Community Care Database for chronically ill individuals by identifying those data elements necessary for efficient multi-disciplinary care. PMID:7949995
Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo
2016-02-16
Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry.
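The OTU-sample structure underlying a network like FoodMicrobionet can be sketched as a bipartite edge list, with "core" OTUs defined as those detected in every sample. The data layout below is an assumption for illustration, not the database's actual schema:

```python
def build_network(abundance):
    """Build a bipartite OTU-sample network from an abundance table.

    abundance: {sample: {otu: relative_abundance}}.
    Returns (edges, core): edges are (sample, otu, weight) triples for
    detected OTUs, and core is the set of OTUs present in every sample.
    """
    edges = [
        (sample, otu, w)
        for sample, otus in abundance.items()
        for otu, w in otus.items()
        if w > 0
    ]
    otu_sets = [
        {otu for otu, w in otus.items() if w > 0}
        for otus in abundance.values()
    ]
    core = set.intersection(*otu_sets) if otu_sets else set()
    return edges, core

# A toy table: two fermented-food samples with overlapping communities.
table = {
    "cheese": {"Lactococcus": 0.6, "Lactobacillus": 0.3},
    "salami": {"Lactobacillus": 0.5, "Staphylococcus": 0.4},
}
edges, core = build_network(table)
```

A tool like Gephi consumes exactly this kind of weighted edge list; sample-specific communities fall out as OTUs with edges to a single sample node.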
Theater as a Community-Building Strategy for Women in Engineering: Theory and Practice
NASA Astrophysics Data System (ADS)
Chesler, Naomi C.; Chesler, Mark A.
Previously, the authors have suggested that peer mentoring through a caring community would improve the quality of life for female faculty members in engineering and could have a positive effect on retention and career advancement. Here, the authors present the background psychosocial literature for choosing participatory theater as a strategy to develop a caring community and report on a pilot study in which participatory theater activities were used within a workshop format for untenured female faculty members in engineering. The authors identify the key differences between participatory theater and other strategies for community building that may enhance participants' sense of commonality and the strength and utility of their community as a mentoring and support mechanism and discuss the ways in which these efforts could have a broader, longer term impact.
NASA Astrophysics Data System (ADS)
Single, Peg Boyle; Muller, Carol B.; Cunningham, Christine M.; Single, Richard M.
In this article, we report on electronic discussion lists (e-lists) sponsored by MentorNet, the National Electronic Industrial Mentoring Network for Women in Engineering and Science. Using the Internet, the MentorNet program connects students in engineering and science with mentors working in industry. These e-lists are a feature of MentorNet's larger electronic mentoring program and were sponsored to foster the establishment of community among women engineering and science students and men and women professionals in those fields. This research supports the hypothesis that electronic communications can be used to develop community among engineering and science students and professionals and identifies factors influencing the emergence of electronic communities (e-communities). The e-lists that emerged into self-sustaining e-communities were focused on topic-based themes, such as balancing personal and work life, issues pertaining to women in engineering and science, and job searching. These e-communities were perceived to be safe places, embraced a diversity of opinions and experiences, and sanctioned personal and meaningful postings on the part of the participants. The e-communities maintained three to four simultaneous threaded discussions and were sustained by professionals who served as facilitators by seeding the e-lists with discussion topics. The e-lists were sponsored to provide women students participating in MentorNet with access to groups of technical and scientific professionals. In addition to providing benefits to the students, the e-lists also provided the professionals with opportunities to engage in peer mentoring with other, mostly female, technical and scientific professionals. We discuss the implications of our findings for developing e-communities and for serving the needs of women in technical and scientific fields.
ERIC Educational Resources Information Center
Pezzoli, Jean A.
In June 1992, Maui Community College (MCC), in Hawaii, conducted a survey of the communities of Maui, Molokai, Lanai, and Hana to determine perceived needs for an associate degree and certificate program in electronics and computer engineering. Questionnaires were mailed to 500 firms utilizing electronic or computer services, seeking information…
OrChem - An open source chemistry search engine for Oracle(R).
Rijnbeek, Mark; Steinbeck, Christoph
2009-10-22
Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
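Fingerprint-based similarity searching with a cut-off, as OrChem provides, rests on the Tanimoto coefficient. The sketch below illustrates the principle on toy bit-sets; OrChem itself computes CDK fingerprints inside Oracle, which is not reproduced here:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints, each given as the
    set of 'on' bit positions: |A ∩ B| / |A ∪ B|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def similarity_search(query_fp, database, cutoff=0.7):
    """Return (compound_id, score) pairs at or above the similarity
    cut-off, best match first - the screening step a chemistry
    search engine performs over millions of stored fingerprints."""
    hits = [(cid, tanimoto(query_fp, fp)) for cid, fp in database.items()]
    return sorted((h for h in hits if h[1] >= cutoff), key=lambda h: -h[1])
```

Raising the cut-off shrinks the candidate list and is the main reason response times depend on the chosen similarity threshold.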
Quigley, Dianne
2015-02-01
A collaborative team of environmental sociologists, community psychologists, religious studies scholars, environmental studies/science researchers and engineers has been working together to design and implement new training in research ethics, culture and community-based approaches for place-based communities and cultural groups. The training is designed for short and semester-long graduate courses at several universities in the northeastern US. The team received a 3 year grant from the US National Science Foundation's Ethics Education in Science and Engineering in 2010. This manuscript details the curriculum topics developed that incorporate ethical principles, particularly for group protections/benefits within the field practices of environmental/engineering researchers.
Relational Database Design in Information Science Education.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1985-01-01
Reports on database management system (dbms) applications designed by library school students for university community at University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…
An open-source, mobile-friendly search engine for public medical knowledge.
Samwald, Matthias; Hanbury, Allan
2014-01-01
The World Wide Web has become an important source of information for medical practitioners. To complement the capabilities of currently available web search engines we developed FindMeEvidence, an open-source, mobile-friendly medical search engine. In a preliminary evaluation, the quality of results from FindMeEvidence proved to be competitive with those from TRIP Database, an established, closed-source search engine for evidence-based medicine.
The Impact of a Living Learning Community on First-Year Engineering Students
ERIC Educational Resources Information Center
Flynn, Margaret A.; Everett, Jess W.; Whittinghill, Dex
2016-01-01
The purpose of this study was to investigate the impact of an engineering living and learning community (ELC) on first-year engineering students. A control group of non-ELC students was used to compare the experiences of the ELC participants. Analysis of survey data showed that there were significant differences between the ELC students and the…
ERIC Educational Resources Information Center
Gambino, Christine; Gryn, Thomas
2011-01-01
This brief will discuss patterns of science and engineering educational attainment within the foreign-born population living in the United States, using data from the 2010 American Community Survey (ACS). The analysis is restricted to the population aged 25 and older, and the results are presented on science and engineering degree attainment by…
Burns, Jacky; Conway, David I; Gnich, Wendy; Macpherson, Lorna M D
2017-03-08
Poor health and health inequalities persist despite increasing investment in health improvement programmes across high-income countries. Evidence suggests that to reduce health inequalities, a range of activities targeted at different levels within society and throughout the life course should be employed. There is a particular focus on addressing inequalities in early years as this may influence the experience of health in adulthood. To address the wider determinants of health at a community level, a key intervention which can be considered is supporting patients to access wider community resources. This can include processes such as signposting, referral and facilitation. There is a lack of evidence synthesis in relation to the most effective methods for linking individuals from health services to other services within communities, especially when considering interventions aimed at families with young children. The aim of this study is to understand the way health services can best help parents, carers and families with pre-school children to engage with local services, groups and agencies to address their wider health and social needs. The review may inform future guidance to support families to address wider determinants of health. The study is a systematic review, and papers will be identified from the following electronic databases: Web of Science, Embase, MEDLINE and CINAHL. A grey literature search will be conducted using an internet search engine and specific grey literature databases (TRiP, EThOS and Open Grey). Reference lists/bibliographies of selected papers will be searched. Quality will be assessed using the Effective Public Health Practice Project Quality Assessment Tool for quantitative studies and the CASP tool for qualitative studies. Data will be synthesised in a narrative form and weighted by study quality. 
It is important to understand how health services can facilitate access to wider services for their patients to address the wider determinants of health. This may impact on the experience of health inequalities. This review focuses on how this can be achieved for families with pre-school children, and the evidence obtained will be useful for informing future guidance on this topic. PROSPERO CRD42016034066.
Integrated Risk Information System (IRIS)
Diesel engine exhaust; CASRN N.A. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Ef
THE DRINKING WATER TREATABILITY DATABASE (Conference Paper)
The Drinking Water Treatability Database (TDB) assembles referenced data on the control of contaminants in drinking water, housed on an interactive, publicly-available, USEPA web site (www.epa.gov/tdb). The TDB is of use to drinking water utilities, treatment process design engin...
Four Current Awareness Databases: Coverage and Currency Compared.
ERIC Educational Resources Information Center
Jaguszewski, Janice M.; Kempf, Jody L.
1995-01-01
Discusses the usability and content of the following table of contents (TOC) databases selected by science and engineering librarians at the University of Minnesota Twin Cities: Current Contents on Diskette (CCoD), CARL Uncover2, Inside Information, and Contents1st. (AEF)
COMPUTER-AIDED SCIENCE POLICY ANALYSIS AND RESEARCH (WEBCASPAR)
WebCASPAR is a database system containing information about academic science and engineering resources and is available on the World Wide Web. Included in the database is information from several of SRS's academic surveys plus information from a variety of other sources, includin...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Hyun -Seob; Renslow, Ryan S.; Fredrickson, Jim K.
We note that many definitions of resilience have been proffered for natural and engineered ecosystems, but a conceptual consensus on resilience in microbial communities is still lacking. Here, we argue that the disconnect largely results from the wide variance in microbial community complexity, which ranges from simple synthetic consortia to complex natural communities, and divergence between the typical practical outcomes emphasized by ecologists and engineers. Viewing microbial communities as elasto-plastic systems, we argue that this gap between the engineering and ecological definitions of resilience stems from their respective emphases on elastic and plastic deformation. We propose that the two concepts may be fundamentally united around the resilience of function rather than state in microbial communities and the regularity in the relationship between environmental variation and a community's functional response. Furthermore, we posit that functional resilience is an intrinsic property of microbial communities, suggesting that state changes in response to environmental variation may be a key mechanism driving resilience in microbial communities.
Constructing a Geology Ontology Using a Relational Database
NASA Astrophysics Data System (ADS)
Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.
2013-12-01
In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multi-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to human-computer interaction methods, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritance and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process.
In the geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in geochronology and the multiple inheritance relationships. Based on a Quaternary database of the downtown area of Foshan City, Guangdong Province, in southern China, a geological ontology was constructed using the proposed method. To measure how well semantics were maintained in the conversion process and its results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversion rule. The comparison of schemas and entities, and the reduction of tables, between the inverse database and the original database illustrated that the proposed method retains the semantic information well during the conversion process. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL. Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207) and the Fundamental Research Funds for the Central Universities (12lgpy19).
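The table-to-class conversion described in the abstract above can be illustrated with a minimal sketch. The schema below is hypothetical and the rule set is reduced to two mappings (tables become OWL classes, with foreign-key inheritance becoming rdfs:subClassOf; columns become datatype properties); it is not the authors' actual rule set, only an assumption about its general shape.

```python
# Hypothetical geological schema: table name -> parent table + columns.
schema = {
    "RockUnit":  {"parent": None,       "columns": ["name", "age"]},
    "Sandstone": {"parent": "RockUnit", "columns": ["grain_size"]},
}

def schema_to_turtle(schema, prefix="geo", base="http://example.org/geo#"):
    """Apply two reduced conversion rules and emit OWL in Turtle syntax."""
    lines = [
        f"@prefix {prefix}: <{base}> .",
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for table, info in schema.items():
        # Rule 1: every table becomes an OWL class; a foreign-key link to a
        # parent table becomes an rdfs:subClassOf (inheritance) axiom.
        lines.append(f"{prefix}:{table} a owl:Class .")
        if info["parent"]:
            lines.append(f"{prefix}:{table} rdfs:subClassOf {prefix}:{info['parent']} .")
        # Rule 2: every column becomes a datatype property on that class.
        for col in info["columns"]:
            lines.append(f"{prefix}:{table}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain {prefix}:{table} .")
    return "\n".join(lines)

turtle = schema_to_turtle(schema)
```

An inverse mapping, as the abstract describes, would walk the generated triples back into tables and compare the recovered schema with the original.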
Lynx: a database and knowledge extraction engine for integrative medicine
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788
Application of materials database (MAT.DB.) to materials education
NASA Technical Reports Server (NTRS)
Liu, Ping; Waskom, Tommy L.
1994-01-01
Finding the right material for the job is an important aspect of engineering. Sometimes the choice is as fundamental as selecting between steel and aluminum; other times, the choice may be between different compositions of an alloy. Discovering and compiling materials data is a demanding task, but it leads to accurate models for analysis and successful materials applications. Mat.DB is a database management system designed for maintaining information on the properties and processing of engineered materials, including metals, plastics, composites, and ceramics. It was developed by the Center for Materials Data of ASM (American Society for Metals) International, which collects and reviews material property data for publication in books, reports, and electronic databases. Mat.DB was developed to aid data management and materials applications.
Mobile Source Observation Database (MSOD)
The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental Protection Agency Office of Transportation and Air Quality (formerly the Office of Mobile Sources). The MSOD contains emission test data from in-use mobile air-pollution sources such as cars, trucks, and engines from trucks and nonroad vehicles. The data have been collected from 1982 to the present and are intended to be representative of in-use vehicle emissions in the United States.
SACD's Support of the Hyper-X Program
NASA Technical Reports Server (NTRS)
Robinson, Jeffrey S.; Martin, John G.
2006-01-01
NASA's highly successful Hyper-X program demonstrated numerous hypersonic air-breathing vehicle technologies, including scramjet performance, advanced materials and hot structures, GN&C, and integrated vehicle performance, resulting in, for the first time ever, acceleration of a vehicle powered by a scramjet engine. The Systems Analysis and Concepts Directorate (SACD) at NASA's Langley Research Center played a major role in the integrated team, providing critical support, analysis, and leadership to the Hyper-X Program throughout the program's entire life, and was key to its ultimate success. Engineers in SACD's Vehicle Analysis Branch (VAB) were involved in all stages and aspects of the program, from conceptual design prior to contract award, through preliminary design and hardware development, and into, during, and after each of the three flights. Working closely with other engineers at Langley and Dryden, as well as industry partners, roughly 20 members of SACD were involved throughout the evolution of the Hyper-X program in nearly all disciplines, including lead roles in several areas. Engineers from VAB led the aerodynamic database development, the propulsion database development, and the stage separation analysis and database development effort. Others played major roles in structures, aerothermal analysis, GN&C, trajectory analysis and flight simulation, as well as providing CFD support for aerodynamic, propulsion, and aerothermal analysis.
NASA Astrophysics Data System (ADS)
Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.
2013-12-01
The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required the use of an Excel spreadsheet on either a Mac or PC. The new method of uploading data utilizes an HTML5 web interface whose only requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, instead of the iterative error-checking process of the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution are displayed in tables similar to how the data are uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired; the saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, Arai, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium" (VAMDC Consortium, http://www.vamdc.eu) is a consortium bound by a Memorandum of Understanding aiming to ensure the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure inter-connects about 30 atomic and molecular databases, with the number of connected databases increasing every year: some are well-known databases such as CDMS, JPL, HITRAN and VALD, while others have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved and visualized in a single format from a general portal (http://portal.vamdc.eu), and VAMDC is also developing standalone tools to retrieve and handle the data. VAMDC provides software and support for including databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environment for describing data, which ensures a higher quality of data distribution; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e-infrastructure with its underlying technology, its services, its science use cases and its extension towards communities beyond academic research.
Indian Health Service: Community Health
ERIC Educational Resources Information Center
Jones, Enid B., Ed.
Background papers and recommendations from the American Association of Community Colleges' (AACC's) 1992 roundtable on issues facing minority students in mathematics, science, and engineering (MSE) education are presented. The first paper, "Community College Networks," by Wm. Carroll Marsalis and Glenna A. Mosby, describes the Tennessee Valley…
NASA Technical Reports Server (NTRS)
Pinelli, Thomas E.; Barclay, Rebecca O.; Keene, Michael L.; Kennedy, John M.; Hecht, Laura F.
1995-01-01
When students graduate and enter the world of work, they must make the transition from an academic to a professional knowledge community. Kenneth Bruffee's model of the social construction of knowledge suggests that language and written communication play a critical role in the reacculturation process that enables successful movement from one knowledge community to another. We present the results of a national (mail) survey that examined the technical communications abilities, skills, and competencies of 1,673 aerospace engineering students, who represent an academic knowledge community. These results are examined within the context of the technical communications behaviors and practices reported by 2,355 aerospace engineers and scientists employed in government and industry, who represent a professional knowledge community that the students expect to join. Bruffee's claim of the importance of language and written communication in the successful transition from an academic to a professional knowledge community is supported by the responses from the two communities we surveyed. Implications are offered for facilitating the reacculturation process of students to entry-level engineering professionals.
The MAO NASU Plate Archive Database. Current Status and Perspectives
NASA Astrophysics Data System (ADS)
Pakuliak, L. K.; Sergeeva, T. P.
2006-04-01
The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries for data search optimization, PHP Basic-Authorization-protected access to the administrative interface, and a wide range of search parameters. The current status of the database will be reported, along with a brief description of the search engine and the means of supporting database integrity. Methods and means of data verification, and tasks for further development, will be discussed.
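A sketch of the kind of parameterized SQL search such a plate-archive interface might issue. The table schema, column names, and plate records below are invented for illustration, and SQLite stands in for the MySQL back-end described in the abstract.

```python
import sqlite3

# Invented schema standing in for the archive's plate catalogue.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE plates (
    plate_id TEXT, collection TEXT, ra_deg REAL, dec_deg REAL, obs_date TEXT)""")
conn.executemany("INSERT INTO plates VALUES (?, ?, ?, ?, ?)", [
    ("A123", "astrograph", 120.5,  45.2, "1964-03-01"),
    ("B007", "astrograph", 121.0,  44.9, "1971-10-12"),
    ("C042", "meridian",   300.0, -10.0, "1958-06-30"),
])

# One parameterized query combining a sky region with a date cut -- the kind
# of flexible, multi-parameter search the abstract describes.
query = """SELECT plate_id, obs_date FROM plates
           WHERE ra_deg BETWEEN ? AND ? AND dec_deg BETWEEN ? AND ?
             AND obs_date >= ?
           ORDER BY obs_date"""
rows = conn.execute(query, (119.0, 122.0, 44.0, 46.0, "1960-01-01")).fetchall()
# rows -> [('A123', '1964-03-01'), ('B007', '1971-10-12')]
```

Keeping the search parameterized (placeholders rather than string concatenation) is also what makes a web front end over such a database safe against SQL injection.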
Databases of Conformations and NMR Structures of Glycan Determinants.
Sarkar, Anita; Drouillard, Sophie; Rivet, Alain; Perez, Serge
2015-12-01
The present study reports a comprehensive nuclear magnetic resonance (NMR) characterization and a systematic sampling of the conformational preferences of 170 glycan moieties of glycosphingolipids, as produced in large-scale quantities by bacterial fermentation. These glycans span a variety of families including the blood group antigens (A, B and O), core structures (Types 1, 2 and 4), fucosylated oligosaccharides (core and lacto-series), sialylated oligosaccharides (Types 1 and 2), Lewis antigens, GPI-anchors and globosides. A complementary set of about 100 glycan determinants occurring in glycoproteins and glycosaminoglycans has also been structurally characterized using molecular mechanics-based computation. The experimental and computational data generated are organized in two relational databases that can be queried through a user-friendly search engine. The NMR ((1)H and (13)C, COSY, TOCSY, HMQC, HMBC correlation) spectra and 3D structures are available for visualization and download in commonly used structure formats. Emphasis has been given to the use of a common nomenclature for the structural encoding of the carbohydrates, and each glycan molecule is described by four different types of representations in order to cope with the different usages in chemistry and biology. These web-based databases were developed with non-proprietary software and are open access for the scientific community at http://glyco3d.cermav.cnrs.fr. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
The BioMart community portal: an innovative alternative to large, centralized data repositories.
Smedley, Damian; Haider, Syed; Durinck, Steffen; Pandini, Luca; Provero, Paolo; Allen, James; Arnaiz, Olivier; Awedh, Mohammad Hamza; Baldock, Richard; Barbiera, Giulia; Bardou, Philippe; Beck, Tim; Blake, Andrew; Bonierbale, Merideth; Brookes, Anthony J; Bucci, Gabriele; Buetti, Iwan; Burge, Sarah; Cabau, Cédric; Carlson, Joseph W; Chelala, Claude; Chrysostomou, Charalambos; Cittaro, Davide; Collin, Olivier; Cordova, Raul; Cutts, Rosalind J; Dassi, Erik; Di Genova, Alex; Djari, Anis; Esposito, Anthony; Estrella, Heather; Eyras, Eduardo; Fernandez-Banet, Julio; Forbes, Simon; Free, Robert C; Fujisawa, Takatomo; Gadaleta, Emanuela; Garcia-Manteiga, Jose M; Goodstein, David; Gray, Kristian; Guerra-Assunção, José Afonso; Haggarty, Bernard; Han, Dong-Jin; Han, Byung Woo; Harris, Todd; Harshbarger, Jayson; Hastings, Robert K; Hayes, Richard D; Hoede, Claire; Hu, Shen; Hu, Zhi-Liang; Hutchins, Lucie; Kan, Zhengyan; Kawaji, Hideya; Keliet, Aminah; Kerhornou, Arnaud; Kim, Sunghoon; Kinsella, Rhoda; Klopp, Christophe; Kong, Lei; Lawson, Daniel; Lazarevic, Dejan; Lee, Ji-Hyun; Letellier, Thomas; Li, Chuan-Yun; Lio, Pietro; Liu, Chu-Jun; Luo, Jie; Maass, Alejandro; Mariette, Jerome; Maurel, Thomas; Merella, Stefania; Mohamed, Azza Mostafa; Moreews, Francois; Nabihoudine, Ibounyamine; Ndegwa, Nelson; Noirot, Céline; Perez-Llamas, Cristian; Primig, Michael; Quattrone, Alessandro; Quesneville, Hadi; Rambaldi, Davide; Reecy, James; Riba, Michela; Rosanoff, Steven; Saddiq, Amna Ali; Salas, Elisa; Sallou, Olivier; Shepherd, Rebecca; Simon, Reinhard; Sperling, Linda; Spooner, William; Staines, Daniel M; Steinbach, Delphine; Stone, Kevin; Stupka, Elia; Teague, Jon W; Dayem Ullah, Abu Z; Wang, Jun; Ware, Doreen; Wong-Erasmus, Marie; Youens-Clark, Ken; Zadissa, Amonida; Zhang, Shi-Jian; Kasprzyk, Arek
2015-07-01
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biological datasets spanning genomics, proteomics, model organisms, cancer data, ontology information and more. All resources available through the portal are independently administered and funded by their host organizations. The BioMart data federation technology provides a unified interface to all the available data. The latest version of the portal comes with many new databases that have been created by our ever-growing community. It also comes with better support and extensibility for data analysis and visualization tools. A new addition to our toolbox, the enrichment analysis tool, is now accessible through graphical and web service interfaces. The BioMart community portal averages over one million requests per day. Building on this level of service and the wealth of information that has become available, the BioMart Community Portal has introduced a new, more scalable and cheaper alternative to the large data stores maintained by specialized organizations. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
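BioMart's web service interface accepts queries expressed as small XML documents. The sketch below builds one such query string with the standard library; the dataset, filter, and attribute names are illustrative assumptions about a typical mart's vocabulary, not a guarantee of any specific mart's schema.

```python
import xml.etree.ElementTree as ET

def biomart_query(dataset, filters, attributes):
    """Assemble a BioMart-style XML query string."""
    query = ET.Element("Query", virtualSchemaName="default", formatter="TSV")
    ds = ET.SubElement(query, "Dataset", name=dataset)
    for name, value in filters.items():      # filters restrict the rows
        ET.SubElement(ds, "Filter", name=name, value=value)
    for attr in attributes:                   # attributes pick the columns
        ET.SubElement(ds, "Attribute", name=attr)
    return ET.tostring(query, encoding="unicode")

# Dataset/filter/attribute names here are illustrative, not a specific mart's.
xml_query = biomart_query("hsapiens_gene_ensembl",
                          {"chromosome_name": "7"},
                          ["ensembl_gene_id", "external_gene_name"])
```

In practice such a document would be submitted to a mart's query endpoint over HTTP, with the federation layer dispatching it to the hosting organization.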
Aerodynamic Database Development for the Hyper-X Airframe Integrated Scramjet Propulsion Experiments
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Holland, Scott D.; Cockrell, Charles E., Jr.; Bittner, Robert D.
2000-01-01
This paper provides an overview of the activities associated with the aerodynamic database which is being developed in support of NASA's Hyper-X scramjet flight experiments. Three flight tests are planned as part of the Hyper-X program. Each will utilize a small, nonrecoverable research vehicle with an airframe integrated scramjet propulsion engine. The research vehicles will be individually rocket boosted to the scramjet engine test points at Mach 7 and Mach 10. The research vehicles will then separate from the first stage booster vehicle and the scramjet engine test will be conducted prior to the terminal descent phase of the flight. An overview is provided of the activities associated with the development of the Hyper-X aerodynamic database, including wind tunnel test activities and parallel CFD analysis efforts for all phases of the Hyper-X flight tests. A brief summary of the Hyper-X research vehicle aerodynamic characteristics is provided, including the direct and indirect effects of the airframe integrated scramjet propulsion system operation on the basic airframe stability and control characteristics. Brief comments on the planned post-flight data analysis efforts are also included.
The Better Mousetrap...Can Be Built by Engineers.
ERIC Educational Resources Information Center
McBride, Matthew
2003-01-01
Describes the growth of the INSPEC database developed by the Institution of Electrical Engineers. Highlights include an historical background of its growth from "Science Abstracts"; production methods, including computerization; indexing, including controlled (thesaurus-based), uncontrolled, chemical, and numerical indexing; and the…
Electronic journals: Their use by teachers/researchers of engineering and social sciences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martins, Fernanda, E-mail: mmartins@letras.up.pt; Machado, Diana; Fernandes, Alberto
Libraries must attend to the needs of their different users, and academics are a particular kind of user with specific needs. Universities are environments where scientific communication is essential and where the electronic format of journals is becoming more and more frequently used. It thus becomes increasingly important to understand how academics from different scientific areas use the available electronic resources. The aim of this study is to better understand the existing differences among users of electronic journals in Engineering and the Social Sciences. The research undertaken was mainly focused on the use of electronic journals by teachers/researchers from the Faculties of Engineering and of Arts of the University of Porto, Portugal. In this study an international survey was used to characterize the levels of use and access of electronic journals by these communities. The ways of seeking and using scientific information, namely frequency of access, the number of articles consulted, the use of databases and the preference for publishing in electronic journals, were analyzed. A set of comparisons was established, and the results indicate an extensive use of the electronic format regardless of faculty. However, some differences emerge in the details, such as the usage rate of reference management software, which is considerably higher among Engineering academics than Social Science ones. Generally, electronic journals meet the information needs of their users and are increasingly used as a preferred means of research, though some differences in their use emerged when comparing academics from these two faculties.
Evolution of the NASA/IPAC Extragalactic Database (NED) into a Data Mining Discovery Engine
NASA Astrophysics Data System (ADS)
Mazzarella, Joseph M.; NED Team
2017-06-01
We review recent advances and ongoing work in evolving the NASA/IPAC Extragalactic Database (NED) beyond an object reference database into a data mining discovery engine. Updates to the infrastructure and data integration techniques are enabling more than a 10-fold expansion; NED will soon contain over a billion objects with their fundamental attributes fused across the spectrum via cross-identifications among the largest sky surveys (e.g., GALEX, SDSS, 2MASS, AllWISE, EMU), and over 100,000 smaller but scientifically important catalogs and journal articles. The recent discovery of super-luminous spiral galaxies exemplifies the opportunities for data mining and science discovery directly from NED's rich data synthesis. Enhancements to the user interface, including new APIs, VO protocols, and queries involving derived physical quantities, are opening new pathways for panchromatic studies of large galaxy samples. Examples are shown of graphics characterizing the content of NED, as well as initial steps in exploring the database via interactive statistical visualizations.
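Among the access routes mentioned above, the VO protocols include the simple cone search, which encodes a sky position and search radius as standard RA/DEC/SR URL parameters. A minimal sketch of composing such a request, using a placeholder endpoint rather than NED's actual service URL:

```python
from urllib.parse import urlencode

def cone_search_url(base_url, ra_deg, dec_deg, radius_deg):
    # RA/DEC/SR are the parameters defined by the VO Simple Cone Search
    # standard: position in decimal degrees (J2000) and radius in degrees.
    return f"{base_url}?{urlencode({'RA': ra_deg, 'DEC': dec_deg, 'SR': radius_deg})}"

# Placeholder endpoint, not NED's actual service URL.
url = cone_search_url("https://example.org/ned/cone", 187.7, 12.39, 0.1)
# url -> "https://example.org/ned/cone?RA=187.7&DEC=12.39&SR=0.1"
```

A compliant service answers such a request with a VOTable listing the objects inside the cone, which a client can then cross-match against its own sample.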
Humanitarian engineering in the engineering curriculum
NASA Astrophysics Data System (ADS)
Vandersteen, Jonathan Daniel James
There are many opportunities to use engineering skills to improve the conditions for marginalized communities, but our current engineering education praxis does not instruct on how engineering can be a force for human development. In a time of great inequality and exploitation, the desire to work with the impoverished is prevalent, and it has been proposed to adjust the engineering curriculum to include a larger focus on human needs. This proposed curriculum philosophy is called humanitarian engineering. Professional engineers have played an important role in the modern history of power, wealth, economic development, war, and industrialization; they have also contributed to infrastructure, sanitation, and energy sources necessary to meet human need. Engineers are currently at an important point in time when they must look back on their history in order to be more clear about how to move forward. The changing role of the engineer in history puts into context the call for a more balanced, community-centred engineering curriculum. Qualitative, phenomenographic research was conducted in order to understand the need, opportunity, benefits, and limitations of a proposed humanitarian engineering curriculum. The potential role of the engineer in marginalized communities and details regarding what a humanitarian engineering program could look like were also investigated. Thirty-two semi-structured research interviews were conducted in Canada and Ghana in order to collect a pool of understanding before a phenomenographic analysis resulted in five distinct outcome spaces. The data suggests that an effective curriculum design will include teaching technical skills in conjunction with instructing about issues of social justice, social location, cultural awareness, root causes of marginalization, a broader understanding of technology, and unlearning many elements about the role of the engineer and the dominant economic/political ideology. 
Cross-cultural engineering development placements are a valuable pedagogical experience but risk benefiting the student disproportionately more than the receiving community. Local development placements offer different rewards and liabilities. To conclude, a major adjustment in engineering curriculum to address human development is appropriate and this new curriculum should include both local and international placements. However, the great force of altruism must be directed towards creating meaningful and lasting change.
Aeronautical Engineering: A Continuing Bibliography. Supplement 421
NASA Technical Reports Server (NTRS)
2000-01-01
This supplemental issue of Aeronautical Engineering, A Continuing Bibliography with Indexes (NASA/SP#2000-7037) lists reports, articles, and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles.
Life-Cycle Cost Database. Volume II. Appendices E, F, and G. Sample Data Development.
1983-01-01
Bendix Field Engineering Corporation, Columbia, Maryland 21045. This volume presents sample life-cycle cost data for a typical administrative-type building over a 25-year period, based on an on-site engineering survey conducted by Bendix Field Engineering. The sample maintenance tasks covered include routine and heavy-duty vacuuming, damp mopping and buffing, stripping and refinishing, machine scrubbing, carpet shampooing and extraction cleaning, pick-up, and repair by location.
Wisconsin Regional Primate Resource Center. The PrimateLit database provides bibliographic access to the primate research literature for the research and educational communities. Coverage of the database spans 1940 to the present and includes all publication categories; articles in review copies of books received will also be found in a search of the whole database.
ERIC Educational Resources Information Center
Wimberly, Charles A.; Wynne, Lewis N.
1974-01-01
The future of many post-secondary institutions may rest with their ability to shift from a strict engineering format to one incorporating community service programs. Retraining competent unemployed technicians and engineers from aerospace and military sectors for construction, community service, and environmental protection can be an important…
Engel, Stacia R.; Cherry, J. Michael
2013-01-01
The first completed eukaryotic genome sequence was that of the yeast Saccharomyces cerevisiae, and the Saccharomyces Genome Database (SGD; http://www.yeastgenome.org/) is the original model organism database. SGD remains the authoritative community resource for the S. cerevisiae reference genome sequence and its annotation, and continues to provide comprehensive biological information correlated with S. cerevisiae genes and their products. A diverse set of yeast strains have been sequenced to explore commercial and laboratory applications, and a brief history of those strains is provided. The publication of these new genomes has motivated the creation of new tools, and SGD will annotate and provide comparative analyses of these sequences, correlating changes with variations in strain phenotypes and protein function. We are entering a new era at SGD, as we incorporate these new sequences and make them accessible to the scientific community, all in an effort to continue in our mission of educating researchers and facilitating discovery. Database URL: http://www.yeastgenome.org/ PMID:23487186
Aeronautical Engineering: A Continuing Bibliography with Indexes. SUPPL-422
NASA Technical Reports Server (NTRS)
2000-01-01
This report lists reports, articles and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles.
Aeronautical Engineering: A Continuing Bibliography with Indexes. Supplement 405
NASA Technical Reports Server (NTRS)
1999-01-01
This report lists reports, articles and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles.
Aeronautical Engineering: A Continuing Bibliography With Indexes. Supplement 392
NASA Technical Reports Server (NTRS)
1999-01-01
This report lists reports, articles and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles.
Aeronautical engineering: A continuing bibliography with indexes (supplement 319)
NASA Technical Reports Server (NTRS)
1995-01-01
This report lists 349 reports, articles and other documents recently announced in the NASA STI Database. The coverage includes documents on the engineering and theoretical aspects of design, construction, evaluation, testing, operation, and performance of aircraft (including aircraft engines) and associated components, equipment, and systems. It also includes research and development in aerodynamics, aeronautics, and ground support equipment for aeronautical vehicles.
Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James
2018-01-01
Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. 
This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus to the ISE literature. The literature survey follows the evolution of ISE's understanding of how to formalize the classification pattern. The various proposals are assessed using the classical example of classification: the Linnaean taxonomy, formalized using powersets, as a benchmark for formal expressiveness. The broad conclusion of the survey is (1) that the ISE community is currently in the early stages of understanding how to formalize the classification pattern, particularly with respect to the requirements for expressiveness exemplified by powersets, and (2) that there is an opportunity to intervene and speed up the process of adoption by clarifying this expressiveness. Given the central place that the classification pattern has in domain modeling, this intervention has the potential to lead to significant improvements.
Benchmarking distributed data warehouse solutions for storing genomic variant information
Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.
2017-01-01
Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not yet been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked the performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto, and Parquet paired with Spark 2, provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for running analytical queries. 
In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
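The denormalization effect described in this abstract can be sketched with an in-memory SQLite database: copying the phenotype column into the variant table removes the join from a genome range query. The table and column names below are invented for illustration and do not reproduce the variantsdwh schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized layout: a genome range query must join variants to samples.
cur.execute("CREATE TABLE samples (sample_id INTEGER PRIMARY KEY, phenotype TEXT)")
cur.execute("CREATE TABLE variants (sample_id INTEGER, chrom TEXT, pos INTEGER, ref TEXT, alt TEXT)")
cur.executemany("INSERT INTO samples VALUES (?, ?)", [(1, "case"), (2, "control")])
cur.executemany("INSERT INTO variants VALUES (?, ?, ?, ?, ?)",
                [(1, "chr1", 100, "A", "G"),
                 (2, "chr1", 150, "C", "T"),
                 (1, "chr2", 900, "G", "A")])

# Denormalized layout: phenotype is copied into the fact table, removing the join.
cur.execute("""CREATE TABLE variants_flat AS
               SELECT v.chrom, v.pos, v.ref, v.alt, s.phenotype
               FROM variants v JOIN samples s USING (sample_id)""")

# The same genome range query, with and without a join:
joined = cur.execute("""SELECT COUNT(*) FROM variants v JOIN samples s USING (sample_id)
                        WHERE v.chrom = 'chr1' AND v.pos BETWEEN 50 AND 200
                          AND s.phenotype = 'case'""").fetchone()[0]
flat = cur.execute("""SELECT COUNT(*) FROM variants_flat
                      WHERE chrom = 'chr1' AND pos BETWEEN 50 AND 200
                        AND phenotype = 'case'""").fetchone()[0]
assert joined == flat == 1
```

In a distributed engine the flat query additionally avoids shuffling rows between nodes, which is where the order-of-magnitude gains reported above come from.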
Analysis and Development of a Web-Enabled Planning and Scheduling Database Application
2013-09-01
Establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of the database. Keywords: development, design, process reengineering, MySQL, structured query language (SQL), phpMyAdmin.
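As a hedged illustration of the workflow this record describes (an entity-relationship diagram translated into an operable schema), the following sketch renders a hypothetical planning-and-scheduling ER model as SQL DDL, executed here against SQLite rather than MySQL; none of the table names are taken from the thesis.

```python
import sqlite3

# Hypothetical ER translation: projects contain tasks; tasks and resources
# are in a many-to-many relationship resolved with an associative table.
ddl = """
CREATE TABLE project  (project_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE task     (task_id INTEGER PRIMARY KEY,
                       project_id INTEGER NOT NULL REFERENCES project(project_id),
                       title TEXT NOT NULL,
                       due_date TEXT);
CREATE TABLE resource (resource_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE assignment (task_id INTEGER REFERENCES task(task_id),
                         resource_id INTEGER REFERENCES resource(resource_id),
                         PRIMARY KEY (task_id, resource_id));
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")}
assert {"project", "task", "resource", "assignment"} <= tables
```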
NASA Technical Reports Server (NTRS)
Knighton, Donna L.
1992-01-01
A Flight Test Engineering Database Management System (FTE DBMS) was designed and implemented at the NASA Dryden Flight Research Facility. The X-29 Forward Swept Wing Advanced Technology Demonstrator flight research program was chosen for the initial system development and implementation. The FTE DBMS greatly assisted in planning and 'mass production' card preparation for an accelerated X-29 research program. Improved Test Plan tracking and maneuver management for a high flight-rate program were proven, and flight rates of up to three flights per day, two times per week were maintained.
More Databases Searched by a Business Generalist--Part 2: A Veritable Cornucopia of Sources.
ERIC Educational Resources Information Center
Meredith, Meri
1986-01-01
This second installment describes databases irregularly searched in the Business Information Center, Cummins Engine Company (Columbus, Indiana). Highlights include typical research topics (happenings among similar manufacturers); government topics (Department of Defense contracts); market and industry topics; corporate intelligence; and personnel,…
Scoping of flood hazard mapping needs for Kennebec County, Maine
Dudley, Robert W.; Schalk, Charles W.
2006-01-01
This report was prepared by the U.S. Geological Survey (USGS) Maine Water Science Center as the deliverable for scoping of flood hazard mapping needs for Kennebec County, Maine, under Federal Emergency Management Agency (FEMA) Inter-Agency Agreement Number HSFE01-05-X-0018. This section of the report explains the objective of the task and the purpose of the report. The Federal Emergency Management Agency (FEMA) developed a plan in 1997 to modernize the FEMA flood mapping program. FEMA flood maps delineate flood hazard areas in support of the National Flood Insurance Program (NFIP). FEMA's plan outlined the steps necessary to update FEMA's flood maps for the nation to a seamless digital format and streamline FEMA's operations in raising public awareness of the importance of the maps and responding to requests to revise them. The modernization of flood maps involves conversion of existing information to digital format and integration of improved flood hazard data as needed. To determine flood mapping modernization needs, FEMA has established specific scoping activities to be done on a county-by-county basis for identifying and prioritizing requisite flood-mapping activities for map modernization. The U.S. Geological Survey (USGS), in cooperation with FEMA and the Maine State Planning Office Floodplain Management Program, began scoping work in 2005 for Kennebec County. Scoping activities included assembling existing data and map needs information for communities in Kennebec County (efforts were made to not duplicate those of pre-scoping completed in March 2005), documentation of data, contacts, community meetings, and prioritized mapping needs in a final scoping report (this document), and updating the Mapping Needs Update Support System (MNUSS) Database or its successor with information gathered during the scoping process. The average age of the FEMA floodplain maps in Kennebec County, Maine, is 16 years. Most of these studies were in the late 1970s to the mid 1980s. 
However, in the ensuing 20-30 years, development has occurred in many of the watersheds, and the characteristics of the watersheds have changed with time. Therefore, many of the older studies may not depict current conditions nor accurately estimate risk in terms of flood heights. The following is the scope of work as defined in the FEMA/USGS Statement of Work: Task 1: Collect data from a variety of sources including community surveys, other Federal and State Agencies, National Flood Insurance Program (NFIP) State Coordinators, Community Assistance Visits (CAVs) and FEMA archives. Lists of mapping needs will be obtained from the MNUSS database, community surveys, and CAVs, if available. FEMA archives will be inventoried for effective FIRM panels, FIS reports, and other flood-hazard data or existing study data. Best available base map information, topographic data, flood-hazard data, and hydrologic and hydraulic data will be identified. Data from the Maine Floodplain Management Program database also will be utilized. Task 2: Contact communities in Kennebec County to notify them that FEMA and the State have selected them for a map update, and that a project scope will be developed with their input. Topics to be reviewed with the communities include (1) Purpose of the Flood Map Project (for example, the update needs that have prompted the map update); (2) The community's mapping needs; (3) The community's available mapping, hydrologic, hydraulic, and flooding information; (4) target schedule for completing the project; and (5) The community's engineering, planning, and geographic information system (GIS) capabilities. On the basis of the collected information from Task 1 and community contacts/meetings in Task 2, the USGS will develop a Draft Project Scope for the identified mapping needs of the communities in Kennebec County. The following items will be addressed in the Draft Project Scope: review of available information, determine if and how e
Scoping of flood hazard mapping needs for Somerset County, Maine
Dudley, Robert W.; Schalk, Charles W.
2006-01-01
This report was prepared by the U.S. Geological Survey (USGS) Maine Water Science Center as the deliverable for scoping of flood hazard mapping needs for Somerset County, Maine, under Federal Emergency Management Agency (FEMA) Inter-Agency Agreement Number HSFE01-05-X-0018. This section of the report explains the objective of the task and the purpose of the report. The Federal Emergency Management Agency (FEMA) developed a plan in 1997 to modernize the FEMA flood mapping program. FEMA flood maps delineate flood hazard areas in support of the National Flood Insurance Program (NFIP). FEMA's plan outlined the steps necessary to update FEMA's flood maps for the nation to a seamless digital format and streamline FEMA's operations in raising public awareness of the importance of the maps and responding to requests to revise them. The modernization of flood maps involves conversion of existing information to digital format and integration of improved flood hazard data as needed. To determine flood mapping modernization needs, FEMA has established specific scoping activities to be done on a county-by-county basis for identifying and prioritizing requisite flood-mapping activities for map modernization. The U.S. Geological Survey (USGS), in cooperation with FEMA and the Maine State Planning Office Floodplain Management Program, began scoping work in 2005 for Somerset County. Scoping activities included assembling existing data and map needs information for communities in Somerset County (efforts were made to not duplicate those of pre-scoping completed in March 2005), documentation of data, contacts, community meetings, and prioritized mapping needs in a final scoping report (this document), and updating the Mapping Needs Update Support System (MNUSS) Database or its successor with information gathered during the scoping process. The average age of the FEMA floodplain maps in Somerset County, Maine is 18.1 years. 
Most of these studies were in the late 1970s to the mid 1980s. However, in the ensuing 20-30 years, development has occurred in many of the watersheds, and the characteristics of the watersheds have changed with time. Therefore, many of the older studies may not depict current conditions nor accurately estimate risk in terms of flood heights. The following is the scope of work as defined in the FEMA/USGS Statement of Work: Task 1: Collect data from a variety of sources including community surveys, other Federal and State Agencies, National Flood Insurance Program (NFIP) State Coordinators, Community Assistance Visits (CAVs) and FEMA archives. Lists of mapping needs will be obtained from the MNUSS database, community surveys, and CAVs, if available. FEMA archives will be inventoried for effective FIRM panels, FIS reports, and other flood-hazard data or existing study data. Best available base map information, topographic data, flood-hazard data, and hydrologic and hydraulic data will be identified. Data from the Maine Floodplain Management Program database also will be utilized. Task 2: Contact communities in Somerset County to notify them that FEMA and the State have selected them for a map update, and that a project scope will be developed with their input. Topics to be reviewed with the communities include (1) Purpose of the Flood Map Project (for example, the update needs that have prompted the map update); (2) The community's mapping needs; (3) The community's available mapping, hydrologic, hydraulic, and flooding information; (4) target schedule for completing the project; and (5) The community's engineering, planning, and geographic information system (GIS) capabilities. On the basis of the collected information from Task 1 and community contacts/meetings in Task 2, the USGS will develop a Draft Project Scope for the identified mapping needs of the communities in Somerset County. 
The following items will be addressed in the Draft Project Scope: review of available information, determine if and ho
Scoping of flood hazard mapping needs for Cumberland County, Maine
Dudley, Robert W.; Schalk, Charles W.
2006-01-01
This report was prepared by the U.S. Geological Survey (USGS) Maine Water Science Center as the deliverable for scoping of flood hazard mapping needs for Cumberland County, Maine, under Federal Emergency Management Agency (FEMA) Inter-Agency Agreement Number HSFE01-05-X-0018. This section of the report explains the objective of the task and the purpose of the report. The Federal Emergency Management Agency (FEMA) developed a plan in 1997 to modernize the FEMA flood mapping program. FEMA flood maps delineate flood hazard areas in support of the National Flood Insurance Program (NFIP). FEMA's plan outlined the steps necessary to update FEMA's flood maps for the nation to a seamless digital format and streamline FEMA's operations in raising public awareness of the importance of the maps and responding to requests to revise them. The modernization of flood maps involves conversion of existing information to digital format and integration of improved flood hazard data as needed. To determine flood mapping modernization needs, FEMA has established specific scoping activities to be done on a county-by-county basis for identifying and prioritizing requisite flood-mapping activities for map modernization. The U.S. Geological Survey (USGS), in cooperation with FEMA and the Maine State Planning Office Floodplain Management Program, began scoping work in 2005 for Cumberland County. Scoping activities included assembling existing data and map needs information for communities in Cumberland County, documentation of data, contacts, community meetings, and prioritized mapping needs in a final scoping report (this document), and updating the Mapping Needs Update Support System (MNUSS) Database or its successor with information gathered during the scoping process. The average age of the FEMA floodplain maps in Cumberland County, Maine is 21 years. Most of these studies were in the early to mid 1980s. 
However, in the ensuing 20-25 years, development has occurred in many of the watersheds, and the characteristics of the watersheds have changed with time. Therefore, many of the older studies may not depict current conditions nor accurately estimate risk in terms of flood heights. The following is the scope of work as defined in the FEMA/USGS Statement of Work: Task 1: Collect data from a variety of sources including community surveys, other Federal and State Agencies, National Flood Insurance Program (NFIP) State Coordinators, Community Assistance Visits (CAVs) and FEMA archives. Lists of mapping needs will be obtained from the MNUSS database, community surveys, and CAVs, if available. FEMA archives will be inventoried for effective FIRM panels, FIS reports, and other flood-hazard data or existing study data. Best available base map information, topographic data, flood-hazard data, and hydrologic and hydraulic data will be identified. Data from the Maine Floodplain Management Program database also will be utilized. Task 2: Contact communities in Cumberland County to notify them that FEMA and the State have selected them for a map update, and that a project scope will be developed with their input. Topics to be reviewed with the communities include (1) Purpose of the Flood Map Project (for example, the update needs that have prompted the map update); (2) The community's mapping needs; (3) The community's available mapping, hydrologic, hydraulic, and flooding information; (4) target schedule for completing the project; and (5) The community's engineering, planning, and geographic information system (GIS) capabilities. On the basis of the collected information from Task 1 and community contacts/meetings in Task 2, the USGS will develop a Draft Project Scope for the identified mapping needs of the communities in Cumberland County. 
The following items will be addressed in the Draft Project Scope: review of available information, determine if and how effective FIS data can be used in new project, and identify other data needed to
SAMSON Technology Demonstrator
2014-06-01
requested. The SAMSON TD was tested with two different policy engines: 1. A custom XACML-based element matching engine using a MySQL database for...performed during the course of the event. Full information protection across the sphere of access management, information protection and auditing was in...
ERIC Educational Resources Information Center
Dalvi, Tejaswini; Wendell, Kristen
2015-01-01
A team of science teacher educators working in collaboration with local elementary schools explored opportunities for science and engineering "learning by doing" in the particular context of urban elementary school communities. In this article, the authors present a design task that helps students identify and find solutions to a…
Gene regulation knowledge commons: community action takes care of DNA binding transcription factors
Tripathi, Sushil; Vercruysse, Steven; Chawla, Konika; Christie, Karen R.; Blake, Judith A.; Huntley, Rachael P.; Orchard, Sandra; Hermjakob, Henning; Thommesen, Liv; Lægreid, Astrid; Kuiper, Martin
2016-01-01
A large gap remains between the amount of knowledge in the scientific literature and the fraction that gets curated into standardized databases, despite many curation initiatives. Yet the availability of comprehensive knowledge in databases is crucial for exploiting existing background knowledge, both for designing follow-up experiments and for interpreting new experimental data. Structured resources also underpin the computational integration and modeling of regulatory pathways, which further aids our understanding of regulatory dynamics. We argue that cooperation between the scientific community and professional curators can increase the capacity for capturing precise knowledge from the literature. We demonstrate this with a project in which we mobilize biological domain experts to curate a large number of DNA-binding transcription factors, and show that they, although new to the field of curation, can make valuable contributions by harvesting reported knowledge from scientific papers. Such community curation can enhance the scientific epistemic process. Database URL: http://www.tfcheckpoint.org PMID:27270715
Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessm...
Global Dynamic Exposure and the OpenBuildingMap
NASA Astrophysics Data System (ADS)
Schorlemmer, D.; Beutin, T.; Hirata, N.; Hao, K. X.; Wyss, M.; Cotton, F.; Prehn, K.
2015-12-01
Detailed understanding of local risk factors regarding natural catastrophes requires in-depth characterization of the local exposure. Current exposure capture techniques have to find a balance between resolution and coverage. We aim at bridging this gap by employing a crowd-sourced approach to exposure capturing, focusing on risk related to earthquake hazard. OpenStreetMap (OSM), the rich and constantly growing geographical database, is an ideal foundation for us. More than 2.5 billion geographical nodes, more than 150 million building footprints (growing by ~100,000 per day), and a plethora of information about school, hospital, and other critical facility locations allow us to exploit this dataset for risk-related computations. We will harvest this dataset by collecting exposure and vulnerability indicators from explicitly provided data (e.g. hospital locations), implicitly provided data (e.g. building shapes and positions), and semantically derived data, i.e. interpretation applying expert knowledge. With this approach, we can increase the resolution of existing exposure models from fragility-class distributions via block-by-block specifications to building-by-building vulnerability. To increase coverage, we will provide a framework for collecting building data by any person or community. We will implement a double crowd-sourced approach to bring together the interest and enthusiasm of communities with the knowledge of earthquake and engineering experts. The first crowd-sourced approach aims at collecting building properties in a community by local people and activists. This will be supported by tailored building capture tools for mobile devices for simple and fast building property capturing. 
The second crowd-sourced approach involves local experts in estimating building vulnerability that will provide building classification rules that translate building properties into vulnerability and exposure indicators as defined in the Building Taxonomy 2.0 developed by the Global Earthquake Model (GEM). These indicators will then be combined with a hazard model using the GEM OpenQuake engine to compute a risk model. The free/open framework we will provide can be used on commodity hardware for local to regional exposure capturing and for communities to understand their earthquake risk.
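A building classification rule of the kind described above, translating building properties into a vulnerability indicator, might look as follows. The tag keys mimic OpenStreetMap conventions, but the thresholds and class labels are invented for illustration and are not taken from the GEM Building Taxonomy 2.0.

```python
# Illustrative expert rule mapping OSM-style building tags to a coarse
# vulnerability class. Thresholds and classes are hypothetical.
def vulnerability_class(tags: dict) -> str:
    material = tags.get("building:material", "unknown")
    levels = int(tags.get("building:levels", 1))  # default: single storey
    if material == "reinforced_concrete":
        return "low" if levels <= 8 else "moderate"
    if material in ("brick", "masonry"):
        return "moderate" if levels <= 3 else "high"
    return "unknown"

assert vulnerability_class({"building:material": "brick",
                            "building:levels": "5"}) == "high"
assert vulnerability_class({"building:material": "reinforced_concrete"}) == "low"
```

Rules of this shape can be evaluated building by building over a footprint database, yielding the per-building exposure indicators that feed a risk engine such as OpenQuake.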
Development of a prototype system for statewide asthma surveillance.
Deprez, Ronald D; Asdigian, Nancy L; Oliver, L Christine; Anderson, Norman; Caldwell, Edgar; Baggott, Lee Ann
2002-12-01
We developed and evaluated a statewide and community-level asthma surveillance system. Databases and measures included a community prevalence survey, hospital admissions data, emergency department/outpatient clinic visit records, and a physician survey of diagnosis and treatment practices. We evaluated the system in 5 Maine communities varying in population and income. Asthma hospitalizations were high in the rural/low-socioeconomic-status communities studied, although diagnosed asthma was low. Males were more likely than females to experience asthma symptoms, although they were less likely to have been diagnosed with asthma or to have used hospital-based asthma care. Databases were useful for estimating asthma burden and identifying service needs as well as high-risk groups. They were less useful in estimating severity or in identifying environmental risks.
Natural shorelines promote the stability of fish communities in an urbanized coastal system.
Scyphers, Steven B; Gouhier, Tarik C; Grabowski, Jonathan H; Beck, Michael W; Mareska, John; Powers, Sean P
2015-01-01
Habitat loss and fragmentation are leading causes of species extinctions in terrestrial, aquatic and marine systems. Along coastlines, natural habitats support high biodiversity and valuable ecosystem services but are often replaced with engineered structures for coastal protection or erosion control. We coupled high-resolution shoreline condition data with an eleven-year time series of fish community structure to examine how coastal protection structures impact community stability. Our analyses revealed that the most stable fish communities were nearest natural shorelines. Structurally complex engineered shorelines appeared to promote greater stability than simpler alternatives as communities nearest vertical walls, which are among the most prevalent structures, were most dissimilar from natural shorelines and had the lowest stability. We conclude that conserving and restoring natural habitats is essential for promoting ecological stability. However, in scenarios when natural habitats are not viable, engineered landscapes designed to mimic the complexity of natural habitats may provide similar ecological functions.
Natural Shorelines Promote the Stability of Fish Communities in an Urbanized Coastal System
Scyphers, Steven B.; Gouhier, Tarik C.; Grabowski, Jonathan H.; Beck, Michael W.; Mareska, John; Powers, Sean P.
2015-01-01
Habitat loss and fragmentation are leading causes of species extinctions in terrestrial, aquatic and marine systems. Along coastlines, natural habitats support high biodiversity and valuable ecosystem services but are often replaced with engineered structures for coastal protection or erosion control. We coupled high-resolution shoreline condition data with an eleven-year time series of fish community structure to examine how coastal protection structures impact community stability. Our analyses revealed that the most stable fish communities were nearest natural shorelines. Structurally complex engineered shorelines appeared to promote greater stability than simpler alternatives as communities nearest vertical walls, which are among the most prevalent structures, were most dissimilar from natural shorelines and had the lowest stability. We conclude that conserving and restoring natural habitats is essential for promoting ecological stability. However, in scenarios when natural habitats are not viable, engineered landscapes designed to mimic the complexity of natural habitats may provide similar ecological functions. PMID:26039407
Engineering knowledge requirements for sand and dust on Mars
NASA Technical Reports Server (NTRS)
Kaplan, D. I.
1991-01-01
The successful landing of human beings on Mars and the establishment of a permanent outpost there will require an understanding of the Martian environment by the engineers. A key feature of the Martian environment is the nearly ubiquitous presence of sand and dust. The process which the engineering community will undertake to determine the sensitivities of their designs to the current level of knowledge about Mars sand and dust is emphasized. The interaction of the engineering community with the space exploration initiative (SEI) mission planners and management is described.
Assessment of community noise for a medium-range airplane with open-rotor engines
NASA Astrophysics Data System (ADS)
Kopiev, V. F.; Shur, M. L.; Travin, A. K.; Belyaev, I. V.; Zamtfort, B. S.; Medvedev, Yu. V.
2017-11-01
Community noise of a hypothetical medium-range airplane equipped with open-rotor engines is assessed by numerical modeling of the aeroacoustic characteristics of an isolated open rotor with the simplest blade geometry. Various open-rotor configurations are considered at constant thrust, and the lowest-noise configuration is selected. A two-engine medium-range airplane at known thrust of bypass turbofan engines at different segments of the takeoff-landing trajectory is considered, after the replacement of those engines by the open-rotor engines. It is established that a medium-range airplane with two open-rotor engines meets the requirements of Chapter 4 of the ICAO standard with a significant margin. It is shown that airframe noise makes a significant contribution to the total noise of an airplane with open-rotor engines at landing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawlowski, Alexander; Splitter, Derek A
It is well known that spark-ignited engine performance and efficiency are closely coupled to fuel octane number. The present work combines historical and recent trends in spark-ignition engines to build a database of engine design, performance, and fuel octane requirements over the past 80 years. The database consists of engine compression ratio, required fuel octane number, peak mean effective pressure, specific output, and combined unadjusted fuel economy for passenger vehicles and light trucks. Recent trends in engine performance, efficiency, and fuel octane number requirement were used to develop correlations of fuel octane number utilization, performance, and specific output. The results show that historically, engine compression ratio and specific output have been strongly coupled to fuel octane number. However, over the last 15 years the sales-weighted averages of compression ratio, specific output, and fuel economy have increased, while the fuel octane number requirement has remained largely unchanged. Using the developed correlations, 10-year-out projections of engine performance, design, and fuel economy are estimated for various fuel octane numbers, both with and without turbocharging. The 10-year-out projection shows that only a power-neutral fleet using 105 RON fuel will meet CAFE targets if the engine alone is relied upon to decrease fuel consumption. If 98 RON fuel is used, a power-neutral fleet will also have to reduce vehicle weight by 5%.
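A correlation of the kind this abstract describes, compression ratio against required octane number, can be sketched with an ordinary least-squares fit; the data points below are synthetic, not values from the paper's database.

```python
# Least-squares fit of compression ratio (CR) against research octane
# number (RON). The data are synthetic, chosen only to show the method.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

ron = [91, 95, 98, 105]        # fuel octane number (synthetic)
cr = [9.5, 10.5, 11.2, 13.0]   # engine compression ratio (synthetic)
slope, intercept = fit_line(ron, cr)
# A positive slope reflects the historical coupling the abstract reports:
# higher-octane fuel tolerates higher compression ratios before knock.
assert slope > 0
```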
de Vries, Rob B M; Leenaars, Marlies; Tra, Joppe; Huijbregtse, Robbertjan; Bongers, Erik; Jansen, John A; Gordijn, Bert; Ritskes-Hoitinga, Merel
2015-07-01
An underexposed ethical issue raised by tissue engineering is the use of laboratory animals in tissue engineering research. Even though this research results in suffering and loss of life in animals, tissue engineering also has great potential for the development of alternatives to animal experiments. With the objective of promoting a joint effort of tissue engineers and alternative experts to fully realise this potential, this study provides the first comprehensive overview of the possibilities of using tissue-engineered constructs as a replacement of laboratory animals. Through searches in two large biomedical databases (PubMed, Embase) and several specialised 3R databases, 244 relevant primary scientific articles, published between 1991 and 2011, were identified. By far most articles reviewed related to the use of tissue-engineered skin/epidermis for toxicological applications such as testing for skin irritation. This review article demonstrates, however, that the potential for the development of alternatives also extends to other tissues such as other epithelia and the liver, as well as to other fields of application such as drug screening and basic physiology. This review discusses which impediments need to be overcome to maximise the contributions that the field of tissue engineering can make, through the development of alternative methods, to the reduction of the use and suffering of laboratory animals. Copyright © 2013 John Wiley & Sons, Ltd.
Making open data work for plant scientists.
Leonelli, Sabina; Smirnoff, Nicholas; Moore, Jonathan; Cook, Charis; Bastow, Ruth
2013-11-01
Despite the clear demand for open data sharing, its implementation within plant science is still limited. This is, at least in part, because open data-sharing raises several unanswered questions and challenges to current research practices. In this commentary, some of the challenges encountered by plant researchers at the bench when generating, interpreting, and attempting to disseminate their data have been highlighted. The difficulties involved in sharing sequencing, transcriptomics, proteomics, and metabolomics data are reviewed. The benefits and drawbacks of three data-sharing venues currently available to plant scientists are identified and assessed: (i) journal publication; (ii) university repositories; and (iii) community and project-specific databases. It is concluded that community and project-specific databases are the most useful to researchers interested in effective data sharing, since these databases are explicitly created to meet the researchers' needs, support extensive curation, and embody a heightened awareness of what it takes to make data reuseable by others. Such bottom-up and community-driven approaches need to be valued by the research community, supported by publishers, and provided with long-term sustainable support by funding bodies and government. At the same time, these databases need to be linked to generic databases where possible, in order to be discoverable to the majority of researchers and thus promote effective and efficient data sharing. As we look forward to a future that embraces open access to data and publications, it is essential that data policies, data curation, data integration, data infrastructure, and data funding are linked together so as to foster data access and research productivity.
Public health preparedness for the impact of global warming on human health.
Wassel, John J
2009-01-01
To assess the changes in weather and weather-associated disturbances related to global warming; the impact on human health of these changes; and the public health preparedness mandated by this impact. Qualitative review of the literature. Articles will be obtained by searching PubMed database, Google, and Google Scholar search engines using terms such as "global warming," "climate change," "human health," "public health," and "preparedness." Sixty-seven journal articles were reviewed. The projections and signs of global environmental changes are worrisome, and there are reasons to believe that related information may have been conservatively interpreted and presented in the recent past. Although the challenges are great, there are many opportunities for devising beneficial solutions at individual, community, and global levels. It is essential for public health professionals to become involved in advocating for change at all of these levels, as well as through professional organizations. We must begin "greening" our own lives and clinical practice, and start talking about these issues with patients. As we build walkable neighborhoods, change methods of energy production, and make water use and food production and distribution more sustainable, the benefits to improved air quality, a stabilized climate, social support, and individual and community health will be dramatic.
Aerospace Engineering Systems and the Advanced Design Technologies Testbed Experience
NASA Technical Reports Server (NTRS)
VanDalsem, William R.; Livingston, Mary E.; Melton, John E.; Torres, Francisco J.; Stremel, Paul M.
1999-01-01
Continuous improvement of aerospace product development processes is a driving requirement across much of the aerospace community. As up to 90% of the cost of an aerospace product is committed during the first 10% of the development cycle, there is a strong emphasis on capturing, creating, and communicating better information (both requirements and performance) early in the product development process. The community has responded by pursuing the development of computer-based systems designed to enhance the decision-making capabilities of product development individuals and teams. Recently, the historical foci on sharing the geometrical representation and on configuration management are being augmented by: 1) physics-based analysis tools for filling the design space database; 2) distributed computational resources to reduce response time and cost; 3) web-based technologies to relieve machine-dependence; and 4) artificial intelligence technologies to accelerate processes and reduce process variability. The Advanced Design Technologies Testbed (ADTT) activity at NASA Ames Research Center was initiated to study the strengths and weaknesses of the technologies supporting each of these trends, as well as the overall impact of the combination of these trends on a product development event. Lessons learned and recommendations for future activities are reported.
Where Is "Community"?: Engineering Education and Sustainable Community Development
ERIC Educational Resources Information Center
Schneider, J.; Leydens, J. A.; Lucena, J.
2008-01-01
Sustainable development initiatives are proliferating in the US and Europe as engineering educators seek to provide students with knowledge and skills to design technologies that are environmentally sustainable. Many such initiatives involve students from the "North," or "developed" world building projects for villages or…
Production of ecosystem services depends on the ecological community structure at a given location. Ecosystem engineering species (EES) can strongly determine community structure, but do they consequently determine the production of ecosystem services? We explore this question ...
OrChem - An open source chemistry search engine for Oracle®
2009-01-01
Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
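The fingerprint-based similarity search that OrChem performs inside Oracle can be illustrated in miniature. The sketch below is not OrChem code (OrChem is Java/PL-SQL built on the Chemistry Development Kit); it is a hypothetical pure-Python stand-in showing the underlying idea of screening precomputed fingerprints against a Tanimoto cut-off:

```python
# Illustrative sketch only: fingerprints are represented as sets of
# set-bit indices; real systems use packed bit vectors in the database.

def tanimoto(fp_a: frozenset, fp_b: frozenset) -> float:
    """Tanimoto coefficient of two fingerprints (sets of set-bit indices)."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def similarity_search(query_fp, db, cutoff=0.7):
    """Return (id, score) pairs at or above the cut-off, best match first."""
    hits = [(mol_id, tanimoto(query_fp, fp)) for mol_id, fp in db.items()]
    return sorted(((m, s) for m, s in hits if s >= cutoff),
                  key=lambda pair: -pair[1])

db = {
    "aspirin-like": frozenset({1, 4, 7, 9}),
    "unrelated":    frozenset({20, 21}),
}
print(similarity_search(frozenset({1, 4, 7}), db, cutoff=0.5))
# → [('aspirin-like', 0.75)]
```

The similarity cut-off is what makes large-scale screening tractable: raising it prunes candidates earlier, which is why OrChem's reported response times depend on the chosen threshold.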
García-Sancho, Miguel
2011-01-01
This paper explores the introduction of professional systems engineers and information management practices into the first centralized DNA sequence database, developed at the European Molecular Biology Laboratory (EMBL) during the 1980s. In so doing, it complements the literature on the emergence of an information discourse after World War II and its subsequent influence in biological research. By analyzing the careers of the database creators and the computer algorithms they designed, I show that from the mid-1960s onwards information in biology gradually shifted from being a pervasive metaphor to being embodied in practices and professionals such as those incorporated at the EMBL. I then investigate the reception of these database professionals by the EMBL biological staff, which evolved from initial disregard to necessary collaboration as the relationship between DNA, genes, and proteins turned out to be more complex than expected. The trajectories of the database professionals at the EMBL suggest that the initial subject matter of the historiography of genomics should be the long-standing practices that emerged after World War II and, to a large extent, originated outside biomedicine and academia. Only after addressing these practices may historians turn to their further disciplinary assemblage in fields such as bioinformatics or biotechnology.
An Interactive Online Database for Potato Varieties Evaluated in the Eastern U.S.
USDA-ARS?s Scientific Manuscript database
Online databases are no longer a novelty. However, for the potato growing and research community little effort has been put into collecting data from multiple states and provinces, and presenting it in a web-based database format for researchers and end users to utilize. The NE1031 regional potato v...
Knowledge Based Engineering for Spatial Database Management and Use
NASA Technical Reports Server (NTRS)
Peuquet, D. (Principal Investigator)
1984-01-01
The use of artificial intelligence techniques applicable to Geographic Information Systems (GIS) is examined. Questions involving performance and modification of the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.
Hubble Space Telescope: cost reduction by re-engineering telemetry processing and archiving
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
1998-05-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system are planned to be in place for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center System (CCS)', are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs will be reduced by providing a modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Operating costs will be reduced by eliminating redundant legacy systems and processes and by providing an integrated ground system geared toward autonomous operation. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current Data Warehouse technology. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will include a queryable database for the user to analyze HST telemetry. Access to the engineering data in the Data Warehouse is platform-independent from an office environment using commercial standards. The latest Internet technology is used to reach the HST engineering community. A Web-based user interface allows easy access to the data archives. This paper will provide a high-level overview of the CCS system and will illustrate some of the CCS telemetry capabilities. Samples of CCS user interface pages will be given. Vision 2000 is an ambitious project, but one that is well under way.
It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
Hubble Space Telescope: the new telemetry archiving system
NASA Astrophysics Data System (ADS)
Miebach, Manfred P.
2000-07-01
The Hubble Space Telescope (HST), the first of NASA's Great Observatories, was launched on April 24, 1990. The HST was designed for a minimum fifteen-year mission with on-orbit servicing by the Space Shuttle System planned at approximately three-year intervals. Major changes to the HST ground system have been implemented for the third servicing mission in December 1999. The primary objectives of the ground system re-engineering effort, a project called 'Vision 2000 Control Center System (CCS),' are to reduce both development and operating costs significantly for the remaining years of HST's lifetime. Development costs are reduced by providing a more modern hardware and software architecture and utilizing commercial off-the-shelf (COTS) products wherever possible. Part of CCS is a Space Telescope Engineering Data Store, the design of which is based on current Data Warehouse technology. The Data Warehouse (Red Brick), as implemented in the CCS Ground System that operates and monitors the Hubble Space Telescope, represents the first use of a commercial Data Warehouse to manage engineering data. The purpose of this data store is to provide a common data source of telemetry data for all HST subsystems. This data store will become the engineering data archive and will provide a queryable database for the user to analyze HST telemetry. Access to the engineering data in the Data Warehouse is platform-independent from an office environment using commercial standards (Unix, Windows 98/NT). The latest Internet technology is used to reach the HST engineering community. A Web-based user interface allows easy access to the data archives. This paper will provide a CCS system overview and will illustrate some of the CCS telemetry capabilities, in particular the use of the new Telemetry Archiving System. Vision 2000 is an ambitious project, but one that is well under way.
It will allow the HST program to realize reduced operations costs for the Third Servicing Mission and beyond.
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
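The core idea behind DAVID — one uniform query interface routed to heterogeneous underlying stores, so users need not learn each system's native data manipulation language — can be sketched as follows. All class and method names here are hypothetical illustrations, not the DAVID API:

```python
class FlatFileBackend:
    """Toy stand-in for a file management system."""
    def __init__(self, rows):
        self.rows = rows

    def select(self, field, value):
        return [r for r in self.rows if r.get(field) == value]


class SqlLikeBackend:
    """Toy stand-in for a relational DBMS; a real adapter would translate
    the uniform query into that system's own query language."""
    def __init__(self, table):
        self.table = table

    def select(self, field, value):
        return [r for r in self.table if r.get(field) == value]


class UniformAccess:
    """Single query interface that fans out to every registered backend."""
    def __init__(self):
        self.backends = []

    def register(self, backend):
        self.backends.append(backend)

    def query(self, field, value):
        results = []
        for backend in self.backends:
            results.extend(backend.select(field, value))
        return results


david = UniformAccess()
david.register(FlatFileBackend([{"mission": "HST", "year": 1990}]))
david.register(SqlLikeBackend([{"mission": "HST", "orbit": "LEO"},
                               {"mission": "Voyager"}]))
print(david.query("mission", "HST"))  # one matching record from each backend
```

The point of the layered design is that adding a new data source means writing one adapter, while every existing user query continues to work unchanged.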
ClusterMine360: a database of microbial PKS/NRPS biosynthesis
Conway, Kyle R.; Boddy, Christopher N.
2013-01-01
ClusterMine360 (http://www.clustermine360.ca/) is a database of microbial polyketide and non-ribosomal peptide gene clusters. It takes advantage of crowd-sourcing by allowing members of the community to make contributions while automation is used to help achieve high data consistency and quality. The database currently has >200 gene clusters from >185 compound families. It also features a unique sequence repository containing >10 000 polyketide synthase/non-ribosomal peptide synthetase domains. The sequences are filterable and downloadable as individual or multiple sequence FASTA files. We are confident that this database will be a useful resource for members of the polyketide synthases/non-ribosomal peptide synthetases research community, enabling them to keep up with the growing number of sequenced gene clusters and rapidly mine these clusters for functional information. PMID:23104377
Implementing model-based system engineering for the whole lifecycle of a spacecraft
NASA Astrophysics Data System (ADS)
Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.
2017-09-01
Design information of a spacecraft is collected over all phases in the lifecycle of a project. Much of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and to keep it consistent for everyone. Nevertheless, none of the existing databases has yet addressed the whole lifecycle. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently being extended for use in the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database will need to support across all these different projects, the underlying data model has to provide tailoring and extension mechanisms for its conceptual data model (CDM). This paper explains these mechanisms as they are implemented in Virtual Satellite, which enables extending the CDM along the project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as the MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software to collect the first design information from concurrent engineering studies, then makes use of the extension mechanisms of the CDM to introduce further information artefacts such as the functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.
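A tailoring and extension mechanism of the kind described — extending the data model in later project phases without corrupting information stored earlier — might look like this in outline. The class and field names below are hypothetical, not Virtual Satellite's actual conceptual data model:

```python
class Element:
    """A CDM record that survives later extensions of the data model:
    properties outside the current core set are preserved, not rejected."""
    CORE_FIELDS = {"name", "mass_kg"}

    def __init__(self, **props):
        self.core = {k: v for k, v in props.items() if k in self.CORE_FIELDS}
        # Properties unknown to the current CDM version (e.g. added in a
        # later lifecycle phase) are kept so stored data is never lost.
        self.extensions = {k: v for k, v in props.items()
                           if k not in self.CORE_FIELDS}

    def to_record(self):
        """Flatten core and extension properties for storage."""
        return {**self.core, **self.extensions}


# Phase A stores only core fields; a later phase adds a harness property.
e = Element(name="battery", mass_kg=4.2, harness_id="H-07")
print(e.to_record())
```

Keeping unknown properties in a separate, schema-free bucket is one common way to let a conceptual data model grow over a project's lifetime while older tools keep reading the core fields they understand.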
Examples of finite element mesh generation using SDRC IDEAS
NASA Technical Reports Server (NTRS)
Zapp, John; Volakis, John L.
1990-01-01
IDEAS (Integrated Design Engineering Analysis Software) offers a comprehensive package for mechanical design engineers. Due to its multifaceted capabilities, however, it can also be adapted to serve the needs of electrical engineers. IDEAS can be used to perform the following tasks: system modeling, system assembly, kinematics, finite element pre/post processing, finite element solution, system dynamics, drafting, test data analysis, and project relational database.
Data and Analysis Center for Software: An IAC in Transition.
1983-06-01
Approved for publication by John J. Marciniak, Colonel, USAF, Chief, Command and Control Division. RADC Project Engineer: John Palaimo (COEE). Keywords: Software Engineering; Software Technology; Information Analysis Center; Database; Scientific and Technical Information.
2016-05-04
IMESA) Access to Criminal Justice Information (CJI) and Terrorist Screening Databases (TSDB). References: See Enclosure 1. 1. PURPOSE. … CJI database mirror image files. (3) Memorandums of understanding with the FBI CJIS as the data broker for DoD organizations that need access … not for access determinations. (3) Legal restrictions established by the Sex Offender Registration and Notification Act (SORNA) jurisdictions on …
Voice-enabled Knowledge Engine using Flood Ontology and Natural Language Processing
NASA Astrophysics Data System (ADS)
Sermet, M. Y.; Demir, I.; Krajewski, W. F.
2015-12-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood-related issues in IFIS. Instead of requiring users to navigate the many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, and analysis and visualization tools to answer natural language questions. Our goal is the systematization of data and modeling results on flood-related issues in Iowa, and to provide an interface for definitive answers to factual queries. The knowledge engine aims to make all flood-related knowledge in Iowa easily accessible to everyone, and to support voice-enabled natural language input. We aim to integrate and curate all flood-related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models, as algorithms, and curates all flood-related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server side, and returns outputs in various formats.
This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses future plans for providing knowledge on flood-related issues and resources. The IFIS Knowledge Engine provides an alternative access method to the comprehensive set of tools and data resources available in IFIS. The current implementation of the system accepts free-form input and offers voice recognition capabilities within browser and mobile applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojick, D E; Warnick, W L; Carroll, B C
With the United States federal government spending billions annually for research and development, ways to increase the productivity of that research can have a significant return on investment. The process by which science knowledge is spread is called diffusion. It is therefore important to better understand and measure the benefits of this diffusion of knowledge. In particular, it is important to understand whether advances in Internet searching can speed up the diffusion of scientific knowledge and accelerate scientific progress despite the fact that the vast majority of scientific information resources continue to be held in deep web databases that many search engines cannot fully access. To address the complexity of the search issue, the term global discovery is used for the act of searching across heterogeneous environments and distant communities. This article discusses these issues and describes research being conducted by the Office of Scientific and Technical Information (OSTI).
ERIC Educational Resources Information Center
Hawaii Univ., Honolulu. Institutional Research Office.
This report details graduation and persistence rates for degree-seeking students at the seven University of Hawaii Community Colleges (UHCC) from Fall 1987-Fall 1995. The data are from the National Center for Higher Education Management Systems/University of Hawaii System Longitudinal Database Project. The report focuses on full-time and part-time…
Humanitarian Engineering Placements in Our Own Communities
ERIC Educational Resources Information Center
VanderSteen, J. D. J.; Hall, K. R.; Baillie, C. A.
2010-01-01
There is an increasing interest in the humanitarian engineering curriculum, and a service-learning placement could be an important component of such a curriculum. International placements offer some important pedagogical advantages, but also have some practical and ethical limitations. Local community-based placements have the potential to be…
Metabolic Network Modeling of Microbial Communities
Biggs, Matthew B.; Medlock, Gregory L.; Kolling, Glynis L.
2015-01-01
Genome-scale metabolic network reconstructions and constraint-based analysis are powerful methods that have the potential to make functional predictions about microbial communities. Current use of genome-scale metabolic networks to characterize the metabolic functions of microbial communities includes species compartmentalization, separating species-level and community-level objectives, dynamic analysis, the “enzyme-soup” approach, multi-scale modeling, and others. There are many challenges inherent to the field, including a need for tools that accurately assign high-level omics signals to individual community members, new automated reconstruction methods that rival manual curation, and novel algorithms for integrating omics data and engineering communities. As technologies and modeling frameworks improve, we expect that there will be proportional advances in the fields of ecology, health science, and microbial community engineering. PMID:26109480
Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D
2016-01-04
The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A
2012-04-01
A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
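The "parsing pyramid" pattern described above — format-specific parsers registered with a loader, all emitting uniform metadata ready for storage in a relational table — might look like this in outline. This is a pure-Python sketch with an invented format name and fields, not CCDBT code:

```python
# Registry populated by the loader; each entry is a format-specific parser.
PARSERS = {}

def parser(fmt):
    """Decorator that registers a parser engine for one output format."""
    def register(fn):
        PARSERS[fmt] = fn
        return fn
    return register

@parser("nwchem-like")
def parse_nwchem_like(text):
    # Toy extraction: pull a final energy line into uniform metadata.
    for line in text.splitlines():
        if line.startswith("Total energy"):
            return {"program": "nwchem-like",
                    "energy_hartree": float(line.split("=")[1])}
    return {"program": "nwchem-like", "energy_hartree": None}

def extract_metadata(fmt, text):
    """Dispatch raw output to the right parser; every parser returns the
    same uniform metadata shape, ready for insertion into a MySQL table."""
    return PARSERS[fmt](text)

meta = extract_metadata("nwchem-like", "Total energy = -76.026")
print(meta)  # → {'program': 'nwchem-like', 'energy_hartree': -76.026}
```

Because every parser emits the same dictionary shape, the storage layer never needs to know which computational chemistry package produced the raw data.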
Bull, Janet; Zafar, S Yousuf; Wheeler, Jane L; Harker, Matthew; Gblokpor, Agbessi; Hanson, Laura; Hulihan, Deirdre; Nugent, Rikki; Morris, John; Abernethy, Amy P
2010-08-01
Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacities for data collection and quantitative research in HPM have impeded assessment of the impact of outpatient palliative care. In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between a HPM organization and academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect data; data are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on data they provide, create a better understanding of local care quality. The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined. The database continues to grow. Analyses presented herein examine data from one site and encompass 2572 visits from 970 new patients, characterizing the population, symptom profiles, and change in symptoms after intervention. A collaborative regional approach to HPM data can support evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.
Tracking Community College Transfers Using National Student Clearinghouse Data.
ERIC Educational Resources Information Center
Romano, Richard M.; Wisniewski, Martin
This study shows how community colleges can track almost all of their own students who transfer into both public and private colleges and across state lines using the National Student Clearinghouse (NSC) database. It utilizes data from the student information systems of Broome Community College, New York; Cayuga Community College, New York; the…
Röling, Wilfred F. M.; van Bodegom, Peter M.
2014-01-01
Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches. PMID:24723922
Bridging the Engineering and Medicine Gap
NASA Technical Reports Server (NTRS)
Walton, M.; Antonsen, E.
2018-01-01
A primary challenge NASA faces is communication between the disparate entities of engineers and human system experts in life sciences. Clear communication is critical for exploration mission success from the perspective of both risk analysis and data handling. The engineering community uses probabilistic risk assessment (PRA) models to inform their own risk analysis and has extensive experience managing mission data, but does not always fully consider human systems integration (HSI). The medical community, as a part of HSI, has been working 1) to develop a suite of tools to express medical risk in quantitative terms that are relatable to the engineering approaches commonly in use, and 2) to manage and integrate HSI data with engineering data. This talk will review the development of the Integrated Medical Model as an early attempt to bridge the communication gap between the medical and engineering communities in the language of PRA. This will also address data communication between the two entities in the context of data management considerations of the Medical Data Architecture. Lessons learned from these processes will help identify important elements to consider in future communication and integration of these two groups.
Materials And Processes Technical Information System (MAPTIS) LDEF materials database
NASA Technical Reports Server (NTRS)
Davis, John M.; Strickland, John W.
1992-01-01
The Materials and Processes Technical Information System (MAPTIS) is a computerized collection of materials data available to engineers in the aerospace community involved in the design and development of spacecraft and related hardware. Consisting of various database segments, MAPTIS provides the user with information such as material properties, test data derived from tests specifically conducted to qualify materials for use in space, verification and control, project management, material information, and various administrative requirements. A recent addition to the project management segment consists of materials data derived from the LDEF flight. This large quantity of data comprises both pre-flight and post-flight measurements in such diverse areas as optical/thermal, mechanical and electrical properties, and atomic concentration surface analysis data, as well as general data such as sample placement on the satellite, A-O flux, equivalent sun hours, etc. Each data point is referenced to the primary investigator(s) and the published paper from which the data were taken. The MAPTIS system is envisioned to become the central location for all LDEF materials data. This paper comprises a general overview of the MAPTIS system and the types of data contained within it, followed by a description of the specific LDEF data element and the data contained in that segment.
Integrating the IA2 Astronomical Archive in the VO: The VO-Dance Engine
NASA Astrophysics Data System (ADS)
Molinaro, M.; Laurino, O.; Smareglia, R.
2012-09-01
Virtual Observatory (VO) protocols and standards are maturing, and the astronomical community asks for astrophysical data to be easily reachable. This means data centers have to intensify their efforts to provide the data they manage not only through proprietary portals and services but also through interoperable resources developed on the basis of the IVOA (International Virtual Observatory Alliance) recommendations. Here we present the work and ideas developed at the IA2 (Italian Astronomical Archive) data center hosted by the INAF-OATs (Italian Institute for Astrophysics - Trieste Astronomical Observatory) to reach this goal. The core component is VO-Dance (written in Java), an application that translates the content of existing databases and archive structures into VO-compliant resources. This application, in turn, relies on a database (potentially DBMS independent) to store the translation-layer information for each resource and auxiliary content (UCDs, field names, authorizations, policies, etc.). The last component is an administrative interface (currently developed using the Django Python framework) that allows data center administrators to set up and maintain resources. Because this deployment is platform independent, with a highly customizable database and administrative interface, the package, once stable and easily distributable, can also be used by individual astronomers or groups to set up their own resources from their public datasets.
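The translation layer the abstract describes can be pictured as a per-resource mapping table from native archive columns to VO-style field metadata. The following is a minimal sketch; the column names, UCDs, and helper function are illustrative assumptions, not VO-Dance's actual schema.

```python
# Hypothetical translation layer for one archive resource: each native DB
# column maps to a VO-style field name, UCD, and unit (all illustrative).
translation_layer = {
    "ra_deg":  {"vo_name": "RA",  "ucd": "pos.eq.ra",  "unit": "deg"},
    "dec_deg": {"vo_name": "DEC", "ucd": "pos.eq.dec", "unit": "deg"},
    "exp_s":   {"vo_name": "t_exptime", "ucd": "time.duration;obs.exposure", "unit": "s"},
}


def translate_row(row, layer):
    """Rename a native database row's columns according to the translation
    layer, dropping columns the layer does not describe."""
    return {layer[col]["vo_name"]: value
            for col, value in row.items() if col in layer}
```

Storing such tables in a database, as the abstract describes, is what lets new resources be published without writing code for each one.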
Shuttle Entry Imaging Using Infrared Thermography
NASA Technical Reports Server (NTRS)
Horvath, Thomas; Berry, Scott; Alter, Stephen; Blanchard, Robert; Schwartz, Richard; Ross, Martin; Tack, Steve
2007-01-01
During the Columbia Accident Investigation, imaging teams supporting debris shedding analysis were hampered by poor entry image quality and the general lack of information on optical signatures associated with a nominal Shuttle entry. After the accident, recommendations were made to NASA management to develop and maintain a state-of-the-art imagery database for Shuttle engineering performance assessments and to improve entry imaging capability to support anomaly and contingency analysis during a mission. As a result, the Space Shuttle Program sponsored an observation campaign to qualitatively characterize a nominal Shuttle entry over the widest possible Mach number range. The initial objectives focused on an assessment of the capability to identify/resolve debris liberated from the Shuttle during entry, characterization of potential anomalous events associated with RCS jet firings, and unusual phenomena associated with the plasma trail. The aeroheating technical community viewed the Space Shuttle Program sponsored activity as an opportunity to influence the observation objectives and incrementally demonstrate key elements of a quantitative, spatially resolved temperature measurement capability over a series of flights. One long-term desire of the Shuttle engineering community is to calibrate boundary layer transition prediction methodologies that are presently part of the Shuttle damage assessment process using flight data provided by a controlled Shuttle flight experiment. Quantitative global imaging may offer a complementary method of data collection to more traditional methods such as surface thermocouples. This paper reviews the process used by the engineering community to influence data collection methods and the analysis of global infrared images of the Shuttle obtained during hypersonic entry. Emphasis is placed upon airborne imaging assets sponsored by the Shuttle program during Return to Flight.
Visual and IR entry imagery were obtained with available airborne imaging platforms used within the DoD, along with agency assets developed and optimized for use during Shuttle ascent, to demonstrate capability (i.e., tracking, acquisition of multispectral data, spatial resolution) and identify system limitations (i.e., radiance modeling, saturation) using state-of-the-art imaging instrumentation and communication systems. Global infrared intensity data have been transformed to temperature by comparison to Shuttle flight thermocouple data. Reasonable agreement is found between the flight thermography images and numerical prediction. A discussion of lessons learned and their potential application to a Shuttle boundary layer transition flight test is presented.
Jones, Andrew R.; Siepen, Jennifer A.; Hubbard, Simon J.; Paton, Norman W.
2010-01-01
Tandem mass spectrometry, run in combination with liquid chromatography (LC-MS/MS), can generate large numbers of peptide and protein identifications, for which a variety of database search engines are available. Distinguishing correct identifications from false positives is far from trivial because all data sets are noisy and tend to be too large for manual inspection; probabilistic methods must therefore be employed to balance the trade-off between sensitivity and specificity. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. It has previously been demonstrated that different MS search engines produce different peptide identification sets, and as such, employing more than one search engine could result in an increased number of peptides being identified. However, such efforts are hindered by the lack of a single scoring framework employed by all search engines. We have developed a search-engine-independent scoring framework based on the FDR, called the FDRScore, which allows peptide identifications from different search engines to be combined. We observe that peptide identifications made by all three search engines are infrequently false positives, while identifications made by only a single search engine, even with a strong score from the source search engine, are significantly more likely to be false positives. We have developed a second score, the combined FDRScore, based on the FDR within peptide identifications grouped according to the set of search engines that have made the identification. We demonstrate by searching large publicly available data sets that the combined FDRScore can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than using a single search engine. PMID:19253293
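The decoy-database FDR estimate that underlies scores like the FDRScore can be sketched with a simple target-decoy count: rank all peptide-spectrum matches by score, then estimate the FDR at each threshold as the ratio of decoy to target hits above it. This is a minimal illustration under that common convention, not the published algorithm.

```python
def decoy_fdr(psms):
    """Estimate the FDR at each score threshold from target/decoy labels.

    psms: iterable of (score, is_decoy) pairs; higher score = better match.
    Returns a list of (score, fdr) pairs in descending score order, where
    fdr = decoys / targets among all matches at or above that score.
    """
    ranked = sorted(psms, key=lambda p: p[0], reverse=True)
    targets = decoys = 0
    curve = []
    for score, is_decoy in ranked:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        curve.append((score, decoys / max(targets, 1)))
    return curve
```

Combining engines, as in the combined FDRScore, would then group identifications by which set of engines reported them before estimating FDRs within each group.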
Systems Engineering | Photovoltaic Research | NREL
We provide engineering testing and evaluation of photovoltaic (PV) systems. Related Reliability & Engineering pages cover Real-Time PV & Solar Resource Testing and Accelerated Testing. Each year, NREL researchers work with the solar community toward developing comprehensive PV standards.
ERIC Educational Resources Information Center
Maltby, Jennifer L.; Brooks, Christopher; Horton, Marjorie; Morgan, Helen
2016-01-01
Science, technology, engineering and math (STEM) degrees provide opportunities for economic mobility. Yet women, underrepresented minority (URM), and first-generation college students remain disproportionately underrepresented in STEM fields. This study examined the effectiveness of a living-learning community (LLC) for URM and first-generation…
Navigating Community College Transfer in Science, Technical, Engineering, and Mathematics Fields
ERIC Educational Resources Information Center
Packard, Becky Wai-Ling; Gagnon, Janelle L.; Senas, Arleen J.
2012-01-01
Given financial barriers facing community college students today, and workforce projections in science, technical, engineering, and math (STEM) fields, the costs of unnecessary delays while navigating transfer pathways are high. In this phenomenological study, we analyzed the delay experiences of 172 students (65% female) navigating community…
Imprinting Community College Computer Science Education with Software Engineering Principles
ERIC Educational Resources Information Center
Hundley, Jacqueline Holliday
2012-01-01
Although the two-year curriculum guide includes coverage of all eight software engineering core topics, the computer science courses taught in Alabama community colleges limit student exposure to the programming, or coding, phase of the software development lifecycle and offer little experience in requirements analysis, design, testing, and…
Prengaman, M P; Bigbee, J L; Baker, E; Schmitz, D F
2014-01-01
Health professional shortages are a significant issue throughout the USA, particularly in rural communities. Filling nurse vacancies is a costly concern for many critical access hospitals (CAH), which serve as the primary source of health care for rural communities. CAHs and rural communities have strengths and weaknesses that affect their recruitment and retention of rural nurses. The purpose of this study was to develop a tool that rural communities and CAHs can utilize to assess their strengths and weaknesses related to nurse recruitment and retention. The Nursing Community Apgar Questionnaire (NCAQ) was developed based on an extensive literature review, visits to multiple rural sites, and consultations with rural nurses, rural nurse administrators and content experts. A quantitative interview tool consisting of 50 factors that affect rural nurse recruitment and retention was developed. The tool allows participants to rate each factor in terms of advantage and importance level. The tool also includes three open-ended questions for qualitative analysis. The NCAQ was designed to identify rural communities' and CAHs' strengths and challenges related to rural nurse recruitment and retention. The NCAQ will be piloted and a database developed for CAHs to compare their results with those in the database. Furthermore, the NCAQ results may be utilized to prioritize resource allocation and tailor rural nurse recruitment and retention efforts to highlight a community's strengths. The NCAQ will function as a useful real-time tool for CAHs looking to assess and improve their rural nurse recruitment and retention practices and compare their results with those of their peers. Longitudinal results will allow CAHs and their communities to evaluate their progress over time. As the database grows in size, state, regional, and national results can be compared, trends may be discovered and best practices identified.
Detection of alternative splice variants at the proteome level in Aspergillus flavus.
Chang, Kung-Yen; Georgianna, D Ryan; Heber, Steffen; Payne, Gary A; Muddiman, David C
2010-03-05
Identification of proteins from proteolytic peptides or intact proteins plays an essential role in proteomics. Researchers use search engines to match the acquired peptide sequences to the target proteins. However, search engines depend on protein databases to provide candidates for consideration. Alternative splicing (AS), the mechanism by which the exons of pre-mRNAs can be spliced and rearranged to generate distinct mRNAs and therefore protein variants, enables higher eukaryotic organisms, with only a limited number of genes, to have the requisite complexity and diversity at the proteome level. Multiple alternative isoforms from one gene often share common segments of sequence. However, many protein databases include only a limited number of isoforms to keep redundancy minimal. As a result, a database search might not identify a target protein even with high-quality tandem MS data and an accurate intact precursor ion mass. We computationally predicted an exhaustive list of putative isoforms of Aspergillus flavus proteins from 20,371 expressed sequence tags to investigate whether an alternative splicing protein database can assign a greater proportion of mass spectrometry data. The newly constructed AS database provided 9,807 new alternatively spliced variants in addition to 12,832 previously annotated proteins. Searches of the existing tandem MS spectra data set using the AS database identified 29 new proteins encoded by 26 genes. Nine fungal genes appeared to have multiple protein isoforms. In addition to the discovery of splice variants, the AS database also showed potential to improve genome annotation. In summary, the introduction of an alternative splicing database helps identify more proteins and unveils more information about a proteome.
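Building such a search database amounts to merging predicted isoforms into the annotated protein set while keeping redundancy minimal, since isoforms sharing an identical sequence with an existing entry add nothing for the search engine. A sketch, with hypothetical identifiers:

```python
def build_search_database(annotated, predicted_isoforms):
    """Merge annotated proteins with predicted splice isoforms.

    annotated, predicted_isoforms: dicts mapping protein id -> sequence.
    Isoforms whose sequence duplicates an entry already in the database
    are dropped, so the result stays minimally redundant.
    """
    db = dict(annotated)
    seen = set(db.values())
    for iso_id, seq in predicted_isoforms.items():
        if seq not in seen:
            db[iso_id] = seq
            seen.add(seq)
    return db
```

The same idea scales from this toy dict to FASTA files of thousands of entries, which is how an AS-augmented database like the one described could be assembled.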
Zooplankton community analysis in the Changjiang River estuary by single-gene-targeted metagenomics
NASA Astrophysics Data System (ADS)
Cheng, Fangping; Wang, Minxiao; Li, Chaolun; Sun, Song
2014-07-01
DNA barcoding provides accurate identification of zooplankton species through all life stages. Single-gene-targeted metagenomic analysis based on DNA barcode databases can facilitate long-term monitoring of zooplankton communities. With the help of the available zooplankton databases, the zooplankton community of the Changjiang (Yangtze) River estuary was studied using a single-gene-targeted metagenomic method to estimate the species richness of this community. A total of 856 mitochondrial cytochrome oxidase subunit 1 (cox1) gene sequences were determined. The environmental barcodes were clustered into 70 molecular operational taxonomic units (MOTUs). Forty-two MOTUs matched barcoded marine organisms with more than 90% similarity and were assigned to either the species level (similarity > 96%) or the genus level (90-96% similarity). Sibling species could also be distinguished. Many species that were overlooked by morphological methods were identified by molecular methods, especially gelatinous zooplankton and merozooplankton that were likely sampled at different life history phases. Zooplankton community structures differed significantly among all of the samples. The MOTU spatial distributions were influenced by the ecological habits of the corresponding species. In conclusion, single-gene-targeted metagenomic analysis is a useful tool for zooplankton studies, with which specimens from all life history stages can be identified quickly and effectively given a comprehensive database.
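The similarity thresholds described above reduce to a small assignment rule for each MOTU's best barcode match. The cutoffs below follow the abstract; the function itself is an illustrative sketch, not the authors' pipeline.

```python
def assign_taxonomy(best_hit_similarity, species_cutoff=0.96, genus_cutoff=0.90):
    """Assign a MOTU to a taxonomic rank from the similarity of its best
    barcode-database hit: >= 96% -> species, 90-96% -> genus, else unassigned.
    Cutoffs are the ones quoted in the abstract."""
    if best_hit_similarity >= species_cutoff:
        return "species"
    if best_hit_similarity >= genus_cutoff:
        return "genus"
    return "unassigned"
```

Applied to the 70 MOTUs in the study, a rule like this yields the 42 species- or genus-level assignments reported.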
We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...
EarthChem: International Collaboration for Solid Earth Geochemistry in Geoinformatics
NASA Astrophysics Data System (ADS)
Walker, J. D.; Lehnert, K. A.; Hofmann, A. W.; Sarbas, B.; Carlson, R. W.
2005-12-01
The current on-line information systems for igneous rock geochemistry - PetDB, GEOROC, and NAVDAT - convincingly demonstrate the value of rigorous scientific data management of geochemical data for research and education. The next generation of hypothesis formulation and testing can be vastly facilitated by enhancing these electronic resources through integration of available datasets, expansion of data coverage in location, time, and tectonic setting, timely updates with new data, and through intuitive and efficient access and data analysis tools for the broader geosciences community. PetDB, GEOROC, and NAVDAT have therefore formed the EarthChem consortium (www.earthchem.org) as an international collaborative effort to address these needs and serve the larger earth science community by facilitating the compilation, communication, serving, and visualization of geochemical data, and their integration with other geological, geochronological, geophysical, and geodetic information to maximize their scientific application. We report on the status of and future plans for EarthChem activities. EarthChem's development plan includes: (1) expanding the functionality of the web portal to become a `one-stop shop for geochemical data' with search capability across databases, standardized and integrated data output, generally applicable tools for data quality assessment, and data analysis/visualization including plotting methods and an information-rich map interface; and (2) expanding data holdings by generating new datasets as identified and prioritized through community outreach, and facilitating data contributions from the community by offering web-based data submission capability and technical assistance for design, implementation, and population of new databases and their integration with all EarthChem data holdings. Such federated databases and datasets will retain their identity within the EarthChem system.
We also plan on working with publishers to ease the assimilation of geochemical data into the EarthChem database. As a community resource, EarthChem will address user concerns and respond to broad scientific and educational needs. EarthChem will hold yearly workshops, town hall meetings, and/or exhibits at major meetings. The group has established a two-tier committee structure to help ease the communication and coordination of database and IT issues between existing data management projects, and to receive feedback and support from individuals and groups from the larger geosciences community.
NASA Technical Reports Server (NTRS)
Murphy, Kelly J.; Bunning, Pieter G.; Pamadi, Bandu N.; Scallion, William I.; Jones, Kenneth M.
2004-01-01
An overview of research efforts at NASA in support of the stage separation and ascent aerothermodynamics research program is presented. The objective of this work is to develop a synergistic suite of experimental, computational, and engineering tools and methods to apply to vehicle separation across the transonic to hypersonic speed regimes. Proximity testing of a generic bimese wing-body configuration is on-going in the transonic (Mach numbers 0.6, 1.05, and 1.1), supersonic (Mach numbers 2.3, 3.0, and 4.5) and hypersonic (Mach numbers 6 and 10) speed regimes in four wind tunnel facilities at the NASA Langley Research Center. An overset grid, Navier-Stokes flow solver has been enhanced and demonstrated on a matrix of proximity cases and on a dynamic separation simulation of the bimese configuration. Steady-state predictions with this solver were in excellent agreement with wind tunnel data at Mach 3 as were predictions via a Cartesian-grid Euler solver. Experimental and computational data have been used to evaluate multi-body enhancements to the widely-used Aerodynamic Preliminary Analysis System, an engineering methodology, and to develop a new software package, SepSim, for the simulation and visualization of vehicle motions in a stage separation scenario. Web-based software will be used for archiving information generated from this research program into a database accessible to the user community. Thus, a framework has been established to study stage separation problems using coordinated experimental, computational, and engineering tools.
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service, based upon the man-machine service and the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, users can build and operate their own graphical user interfaces for a given database according to their needs, without writing a single line of code.
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
2005-01-01
This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
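DB90's addressing scheme (a relation name plus up to 5 integer key values uniquely identifying each record) can be mimicked in a few lines. This Python analog is a sketch of the idea only, not the Fortran 90/95 interface the report describes.

```python
class KeyedStore:
    """Toy analog of DB90's record addressing: each record is identified
    by a relation name plus a tuple of up to 5 integer key values."""

    MAX_KEYS = 5

    def __init__(self):
        self._data = {}

    def put(self, relation, keys, record):
        """Store a record under (relation, keys)."""
        if len(keys) > self.MAX_KEYS:
            raise ValueError("at most 5 integer keys per record")
        self._data[(relation, tuple(keys))] = record

    def get(self, relation, keys):
        """Retrieve the record stored under (relation, keys)."""
        return self._data[(relation, tuple(keys))]
```

Because the keys are explicit, an application can iterate over them in any order it likes, which is the flexibility the report highlights.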
Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)
2013-01-01
Background Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods Data from 2007 to 2010 were collected from computerized health databases covering a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of the coding. Results The average annual incidence of HZ was 4.60 per 1000 person-years (PY) for all ages (95% CI: 4.57-4.63); HZ was more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and was strongly age-related, with a peak incidence at 70-79 years. A total of 7.16 per 1000 HZ cases required hospitalization. Conclusions The electronic health databases used in the Valencian Community are a reliable surveillance tool for HZ disease and will be useful for defining trends in disease burden before and after HZ vaccine introduction. PMID:24094135
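Incidence rates of the kind quoted above are crude rates per 1000 person-years. A one-line helper makes the arithmetic explicit; the example numbers are made up for illustration and are not the study's counts.

```python
def incidence_per_1000_py(cases, person_years):
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * cases / person_years
```

For example, 23 cases observed over 5000 person-years of follow-up gives a rate of 4.6 per 1000 PY, the same order as the all-ages HZ rate reported.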
Web-based Electronic Sharing and RE-allocation of Assets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverett, Dave; Miller, Robert A.; Berlin, Gary J.
2002-09-09
The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, under-utilized, or un-utilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare
An engineering database management system for spacecraft operations
NASA Technical Reports Server (NTRS)
Cipollone, Gregorio; Mckay, Michael H.; Paris, Joseph
1993-01-01
Studies at ESOC have demonstrated the feasibility of a flexible and powerful Engineering Database Management System in support of spacecraft operations documentation. The objectives set out were three-fold: first, an analysis of the problems encountered by the Operations team in obtaining and managing operations documents; secondly, the definition of a concept for operations documentation and the implementation of a prototype to prove the feasibility of the concept; and thirdly, the definition of standards and protocols required for the exchange of data between the top-level partners in a satellite project. The EDMS prototype was populated with ERS-1 satellite design data and has been used by the operations team at ESOC to gather operational experience. An operational EDMS would be implemented at the satellite prime contractor's site as a common database for all technical information surrounding a project and would be accessible by the co-contractors' and ESA teams.
2013-06-01
…accumulate and shelter sessile and mobile marine species. Fouling in sea chests and sea water pipework is also an operational issue for marine engineers, as it restricts water flow to essential vessel systems and may enhance biocorrosion [18, 19]. …subtidal marine communities worldwide and are considered key species and important habitat engineers in benthic communities [30]. They possess high…
NREL: U.S. Life Cycle Inventory Database - Project Management Team
Information about the U.S. Life Cycle Inventory (LCI) Database project management team is listed on this page, along with additional project information about the U.S. LCI Database.
Marriott, Lisa K.; Cameron, William E.; Purnell, Jonathan Q.; Cetola, Stephano; Ito, Matthew K.; Williams, Craig D.; Newcomb, Kenneth C.; Randall, Joan A.; Messenger, Wyatt B.; Lipus, Adam C.; Shannon, Jackilen
2013-01-01
Background Health information technology (HIT) offers a resource for public empowerment through tailored information. Objective Use interactive community health events to improve awareness of chronic disease risk factors while collecting data to improve health. Methods Let’s Get Healthy! is an education and research program in which participants visit interactive research stations to learn about their own health (diet, body composition, blood chemistry). HIT enables computerized data collection that presents participants with immediate results and tailored educational feedback. An anonymous wristband number links collected data in a population database. Results and Lessons Learned Communities tailor events to meet community health needs with volunteers trained to conduct research. Participants experience being a research participant and contribute to an anonymous population database for both traditional research purposes and open-source community use. Conclusions By integrating HIT with community involvement, health fairs become an interactive method for engaging communities in research and raising health awareness. PMID:22982846
NASA Astrophysics Data System (ADS)
Suvannatsiri, Ratchasak; Santichaianant, Kitidech; Murphy, Elizabeth
2015-01-01
This paper reports on a project in which students designed, constructed and tested a model of an existing early warning system with simulation of debris flow in a context of a landslide. Students also assessed rural community members' knowledge of this system and subsequently taught them to estimate the time needed for evacuation of the community in the event of a landslide. Participants were four undergraduate students in a civil engineering programme at a university in Thailand, as well as nine community members and three external evaluators. Results illustrate project and problem-based, experiential learning and highlight the real-world applications and development of knowledge and of hard and soft skills. The discussion raises issues of scalability and feasibility for implementation of these types of projects in large undergraduate engineering classes.
Interests diffusion in social networks
NASA Astrophysics Data System (ADS)
D'Agostino, Gregorio; D'Antonio, Fulvio; De Nicola, Antonio; Tucci, Salvatore
2015-10-01
We provide a model for the diffusion of interests in Social Networks (SNs). We demonstrate that the topology of the SN plays a crucial role in the dynamics of individual interests. Understanding cultural phenomena on SNs and exploiting the implicit knowledge about their members is attracting the interest of different research communities, both from the academic and the business side. The community of complexity science is devoting significant efforts to defining laws, models, and theories which, based on acquired knowledge, are able to predict future observations (e.g. the success of a product). In the meantime, the semantic web community aims at engineering a new generation of advanced services by defining constructs, models and methods, adding a semantic layer to SNs. In this context, a leap forward is expected to come from a hybrid approach merging the disciplines above. Along this line, this work focuses on the propagation of individual interests in social networks. The proposed framework consists of the following main components: a method to gather information about the members of the social networks; methods to perform semantic analysis of the Domain of Interest; a procedure to infer members' interests; and an interests evolution theory to predict how interests propagate in the network. As a result, one achieves an analytic tool to measure individual features, such as members' susceptibilities and authorities. Although the approach applies to any type of social network, here it has been tested against the computer science research community. The DBLP (Digital Bibliography and Library Project) database has been selected as test case since it provides the most comprehensive list of scientific production in this field.
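An interest-propagation update of the general kind described, where each member's interest drifts toward that of their neighbors at a rate set by individual susceptibility, can be sketched in a few lines. The update rule and all names below are illustrative assumptions, not the paper's actual model.

```python
def diffuse_interests(adjacency, interest, susceptibility, steps=1):
    """Simple synchronous interest-diffusion sketch on a social graph.

    adjacency: node -> list of neighbor nodes.
    interest: node -> interest level in [0, 1].
    susceptibility: node -> weight in [0, 1]; 0 means the member never
    changes (a pure 'authority'), 1 means they fully adopt the neighborhood mean.
    """
    for _ in range(steps):
        updated = dict(interest)
        for node, neighbors in adjacency.items():
            if not neighbors:
                continue  # isolated members keep their interest
            mean = sum(interest[n] for n in neighbors) / len(neighbors)
            s = susceptibility[node]
            updated[node] = (1 - s) * interest[node] + s * mean
        interest = updated
    return interest
```

Running such a model on a co-authorship graph like DBLP is the kind of experiment the abstract describes, with susceptibilities fitted from observed topic adoption.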
Cherrington, Andrea; Ayala, Guadalupe X.; Amick, Halle; Allison, Jeroan; Corbie-Smith, Giselle; Scarinci, Isabel
2018-01-01
Introduction/objectives The Community Health Worker (CHW) model has gained popularity as a method for reaching vulnerable populations with diabetes mellitus (DM), yet little is known about its actual role in program delivery. The purpose of this qualitative study was to examine methods of implementation as well as related challenges and lessons learned. Methods Semi-structured interviews were conducted with program managers. Four databases (PubMed, CINAHL, ISI Web of Knowledge, PsycInfo), the CDC's 1998 directory of CHW programs, and the Google search engine were used to identify CHW programs. Criteria for inclusion were: DM program; used a CHW strategy; occurred in the United States. Two independent reviewers performed content analyses to identify major themes and findings. Results Sixteen programs were assessed, all but three focused on minority populations. Most CHWs were recruited informally; six programs required CHWs to have diabetes. CHW roles and responsibilities varied across programs; educator was the most commonly identified role. Training also varied in terms of both content and intensity. All programs gave CHWs remuneration for their work. Common challenges included difficulties with CHW retention, intervention fidelity and issues related to sustainability. Cultural and gender issues also emerged. Examples of lessons learned included the need for community buy-in and the need to anticipate non-diabetes-related issues. Conclusions Lessons learned from these programs may be useful to others as they apply the CHW model to diabetes management within their own communities. Further research is needed to elucidate the specific features of this model necessary to positively impact health outcomes. PMID:18832287
Trends in Environmental Health Engineering
ERIC Educational Resources Information Center
Rowe, D. R.
1972-01-01
Reviews the trends in environmental health engineering and describes programs in environmental engineering technology and the associated environmental engineering courses at Western Kentucky University (four-year program), Wytheville Community College (two-year program), and Rensselaer Polytechnic Institute (four-year program). (PR)
Relational Information Management Data-Base System
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Erickson, W. J.; Gray, F. P.; Comfort, D. L.; Wahlstrom, S. O.; Von Limbach, G.
1985-01-01
RIM5 is a DBMS with several features particularly useful to scientists and engineers. RIM5 interfaces with any application program written in a language capable of calling FORTRAN routines. Applications include data management for Space Shuttle Columbia tiles, aircraft flight tests, high-pressure piping, atmospheric chemistry, census data, university registration, CAD/CAM geometry, and civil-engineering dam construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None Available
To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.
Aeronautical engineering: A continuing bibliography with indexes (supplement 306)
NASA Technical Reports Server (NTRS)
1994-01-01
This bibliography lists 181 reports, articles, and other documents recently introduced into the NASA STI Database. Subject coverage includes the following: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.
Aeronautical engineering: A continuing bibliography with indexes (supplement 302)
NASA Technical Reports Server (NTRS)
1994-01-01
This bibliography lists 152 reports, articles, and other documents introduced into the NASA scientific and technical information database. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.
Southwell's Relaxation Search in Computer Aided Advising: An Intelligent Information System.
ERIC Educational Resources Information Center
Song, Xueshu
1992-01-01
Describes the development and validation of a microcomputer software system that enhances undergraduate students' interests in becoming engineering graduate students. The development of a database with information on engineering graduate programs is discussed, and a model that matches individual and institutional needs using Southwell's Relaxation…
A Search Engine Features Comparison.
ERIC Educational Resources Information Center
Vorndran, Gerald
Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…
Digitizing Images for Curriculum 21: Phase II.
ERIC Educational Resources Information Center
Walker, Alice D.
Although visual databases exist for the study of art, architecture, geography, health care, and other areas, readily accessible sources of quality images are not available for engineering faculty interested in developing multimedia modules or for student projects. Presented here is a brief review of Phase I of the Engineering Visual Database…
Aeronautical engineering: A continuing bibliography with indexes (supplement 303)
NASA Technical Reports Server (NTRS)
1994-01-01
This bibliography lists 211 reports, articles, and other documents introduced into the NASA scientific and technical information database. Subject coverage includes: design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.
Genomics Community Resources | Informatics Technology for Cancer Research (ITCR)
To facilitate genomic research and the dissemination of its products, National Human Genome Research Institute (NHGRI) supports genomic resources that are crucial for basic research, disease studies, model organism studies, and other biomedical research. Awards under this FOA will support the development and distribution of genomic resources that will be valuable for the broad research community, using cost-effective approaches. Such resources include (but are not limited to) databases and informatics resources (such as human and model organism databases, ontologies, and analysi
Database Design and Management in Engineering Optimization.
1988-02-01
Database management systems, which emerged in the mid-1950s along with modern digital computers, have found wide use in scientific and engineering applications. The paper highlights the differences between engineering and business data management: because engineering data is continuously redefined in an application program, the data definition language (DDL) must support dynamic definition, and application software can call standard subroutines from the DBMS library to define and manipulate the type of data usually encountered in engineering applications.
A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Steele, Gynelle C.
2016-01-01
This report demonstrates a full-text-based search engine that works in any Web-based mobile application. The engine can search databases across multiple categories based on a user's queries and identify the most relevant or similar documents. The search results presented here were obtained using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.
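As a rough illustration of full-text matching across categories (the report's actual ranking method is not specified here, so this is an assumed scheme), documents can be scored by query-term counts weighted by inverse document frequency:

```python
# Minimal full-text ranking sketch (hypothetical scoring, stdlib only):
# term frequency in each document, weighted by inverse document frequency,
# so terms that appear everywhere contribute little to the score.
import math
from collections import Counter

def rank(docs, query):
    """docs: dict doc_id -> text. Returns [(doc_id, score)], best match first."""
    tokenized = {d: Counter(text.lower().split()) for d, text in docs.items()}
    n = len(docs)
    df = Counter()  # in how many documents each term occurs
    for terms in tokenized.values():
        df.update(set(terms))
    query_terms = query.lower().split()
    scores = {}
    for d, terms in tokenized.items():
        scores[d] = sum(terms[w] * math.log(1 + n / df[w])
                        for w in query_terms if w in terms)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical two-category corpus
docs = {"eng-1": "jet engine vibration data",
        "edu-1": "community outreach education program"}
best = rank(docs, "engine vibration")[0][0]
```

In practice a mobile search engine would precompute the index rather than tokenize per query, but the scoring idea is the same.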
ERIC Educational Resources Information Center
Spence, Michelle; Mawhinney, Tara; Barsky, Eugene
2012-01-01
Science and engineering libraries have an important role to play in preserving the intellectual content in research areas of the departments they serve. This study employs bibliographic data from the Web of Science database to examine how much research material is required to cover 90% of faculty citations in civil engineering and computer…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-12-01
The bibliography contains citations of selected patents concerning fuel control devices and methods for use in internal combustion engines. Patents describe air-fuel ratio control, fuel injection systems, evaporative fuel control, and surge-corrected fuel control. Citations also discuss electronic and feedback control, methods for engine protection, and fuel conservation. (Contains a minimum of 232 citations and includes a subject term index and title list.)
Engineering With Nature Geographic Project Mapping Tool (EWN ProMap)
2015-07-01
EWN ProMap database provides numerous case studies for infrastructure projects such as breakwaters, river engineering dikes, and seawalls that have...the EWN Project Mapping Tool (EWN ProMap) is to assist users in their search for case study information that can be valuable for developing EWN ideas...Essential elements of EWN include: (1) using science and engineering to produce operational efficiencies supporting sustainable delivery of
Overview of NASA MSFC IEC Federated Engineering Collaboration Capability
NASA Technical Reports Server (NTRS)
Moushon, Brian; McDuffee, Patrick
2005-01-01
The MSFC IEC federated engineering framework is currently developing a single collaborative engineering framework across independent NASA centers. The federated approach allows NASA centers to maintain diversity and uniqueness while providing interoperability. These systems are integrated in a federated framework without compromising individual center capabilities. MSFC IEC's federation framework will have a direct effect on how engineering data is managed across the Agency. The approach is a direct response to Columbia Accident Investigation Board (CAIB) finding F7.4-11, which states that the Space Shuttle Program has a wealth of data tucked away in multiple databases without a convenient way to integrate and use the data for management, engineering, or safety decisions. IEC's federated capability is further supported by OneNASA recommendation 6, which identifies the need to enhance cross-Agency collaboration by putting in place common engineering and collaborative tools, databases, processes, and knowledge-sharing structures. MSFC's IEC federated framework is loosely connected to other engineering applications that can provide users with the integration needed to achieve an Agency view of the entire product definition and development process, while allowing work to be distributed across NASA centers and contractors. The IEC DDMS federation framework eliminates the need to develop a single, enterprise-wide data model; the goal of having a common data model shared between NASA centers and contractors is very difficult to achieve.
NASA Astrophysics Data System (ADS)
Buxner, S.; Grier, J.; Meinke, B. K.; Schneider, N. M.; Low, R.; Schultz, G. R.; Manning, J. G.; Fraknoi, A.; Gross, N. A.; Shipp, S. S.
2015-12-01
For the past six years, the NASA Science Education and Public Outreach (E/PO) Forums have supported the NASA Science Mission Directorate (SMD) and its E/PO community by enhancing the coherency and efficiency of SMD-funded E/PO programs. The Forums have fostered collaboration and partnerships between scientists with content expertise and educators with pedagogy expertise. As part of this work, in collaboration with the AAS Division of Planetary Sciences, we have interviewed SMD scientists, and more recently engineers, to understand their needs, barriers, attitudes, and understanding of education and outreach work. Respondents told us that they needed additional resources and professional development to support their work in education and outreach, including information about how to get started, ways to improve their communication, and strategies and activities for their teaching and outreach. In response, the Forums have developed and made available a suite of tools to support scientists and engineers in their E/PO efforts. These include "getting started" guides, "tips and tricks" for engaging in E/PO, vetted lists of classroom and outreach activities, and resources for college classrooms. NASA Wavelength (http://nasawavelength.org/), an online repository of SMD-funded activities that have been reviewed by both educators and scientists for quality and accuracy, provides a searchable database of resources for teaching as well as ready-made lists by topic and education level, including lists for introductory college classrooms. We have also supported scientists at professional conferences through organizing oral and poster sessions, networking activities, E/PO helpdesks, professional development workshops, and support for students and early-career scientists. For more information and to access resources for scientists and engineers, visit http://smdepo.org.
NASA Astrophysics Data System (ADS)
Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.
2012-01-01
This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project, carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied using advanced measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones such as the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. Finally, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results were validated against the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation yielded many improvements in understanding and in simulation methods. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.
Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations
NASA Astrophysics Data System (ADS)
ElSaid, AbdElRahman Ahmed
This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short-Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction than analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time-series data, which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, which suffer from vanishing/exploding gradients when trained with backpropagation. The study managed to predict vibration values 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51%, and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
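To illustrate the "memory" mechanism the thesis relies on, here is a single LSTM cell step in pure Python. The weights are arbitrary placeholders, not trained values, and the real networks use many units driven by multiple flight parameters; this sketch only shows how the gates let the cell state carry information from earlier readings forward.

```python
# One forward step of a 1-unit LSTM cell (illustrative sketch; weights are
# hypothetical placeholders, not values from the thesis).
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """x: current input; h_prev/c_prev: previous hidden and cell state.
    w maps gate name -> (input weight, recurrent weight, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    c = f * c_prev + i * g   # cell state: long-term memory of past inputs
    h = o * math.tanh(c)     # hidden state: this step's prediction signal
    return h, c

# Feed a short series of (hypothetical) rising vibration readings.
w = {gate: (0.5, 0.5, 0.0) for gate in "figo"}
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.8]:
    h, c = lstm_step(x, h, c, w)
```

Because `c` is updated additively (gated by `f` and `i`) rather than squashed at every step, gradients flow through it more stably than in a plain RNN, which is the vanishing/exploding-gradient advantage the thesis cites.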
A future Outlook: Web based Simulation of Hydrodynamic models
NASA Astrophysics Data System (ADS)
Islam, A. S.; Piasecki, M.
2003-12-01
Despite recent advances in presenting simulation results as 3D graphs or animated contours, the modeling user community still faces shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with a standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions of data (metadata), the lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. First, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata through the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta-model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data is stored in an object-relational database; database back-ends help store, retrieve, and query information efficiently. This research uses open-source software and technology such as Java Servlets and JSP, the Apache web server, the Tomcat servlet engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. It also uses international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML.
The final web-based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
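As a minimal sketch of the RDF publishing step, a metadata record for a model run can be serialised as RDF/XML with only the standard library. The element names, Dublin Core properties, and record URI below are illustrative assumptions; the project's actual extended ISO 19115 schema is far richer.

```python
# Sketch: publish a minimal metadata record as RDF/XML (hypothetical
# properties and URI; stands in for the paper's extended ISO 19115 schema).
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dc", DC)

root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(root, f"{{{RDF}}}Description",
                     {f"{{{RDF}}}about": "http://example.org/runs/42"})
ET.SubElement(desc, f"{{{DC}}}title").text = "Hydrodynamic model run 42"
ET.SubElement(desc, f"{{{DC}}}creator").text = "WBS simulation service"

xml_text = ET.tostring(root, encoding="unicode")
```

A production system would use a dedicated RDF library (the paper uses Jena in Java) so the records can be queried with RDQL/RQL rather than parsed as raw XML.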
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. 
The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
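The core scaling idea, partitioning a very large sequence database so that each query touches only a fraction of it, can be sketched as follows. This hash-sharding toy is an assumption for illustration, not ComPIL's actual index layout, and the peptide/protein names are made up.

```python
# Toy sharded peptide index (illustrative assumption, not ComPIL's actual
# architecture): peptides are distributed across shards by hash, so a
# lookup consults one shard instead of scanning the whole database.

def build_shards(peptides, n_shards=4):
    """peptides: iterable of (peptide_sequence, source_protein) pairs."""
    shards = [dict() for _ in range(n_shards)]
    for seq, protein in peptides:
        shards[hash(seq) % n_shards].setdefault(seq, []).append(protein)
    return shards

def lookup(shards, seq):
    """Return every source protein containing the peptide, or []."""
    return shards[hash(seq) % len(shards)].get(seq, [])

# Hypothetical data: the same peptide occurs in two proteins.
peptides = [("MKVLAT", "protA"), ("GGSSEQ", "protB"), ("MKVLAT", "protC")]
shards = build_shards(peptides)
hits = lookup(shards, "MKVLAT")
```

Because shard choice depends only on the query key, shards can live in separate files or on separate machines and grow independently, which is the property that lets database size scale far beyond a single in-memory index.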
A survey of the current status of web-based databases indexing Iranian journals.
Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza
2009-05-01
The scientific output of Iran has been increasing rapidly in recent years. Unfortunately, most papers are published in journals which are not indexed by popular indexing systems, and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which index scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared to identify the strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of scientific Iranian journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors make searches ineffective and citation indexing unreliable. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites: Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.
NASA Astrophysics Data System (ADS)
Topousis, Daria E.; Dennehy, Cornelius J.; Lebsock, Kenneth L.
2012-12-01
Historically, engineers at the National Aeronautics and Space Administration (NASA) had few opportunities or incentives to share their technical expertise across the Agency. Its center- and project-focused culture often meant that knowledge never left organizational and geographic boundaries. The need to develop a knowledge sharing culture became critical as a result of increasingly complex missions, closeout of the Shuttle Program, and a new generation of engineers entering the workforce. To address this need, the Office of the Chief Engineer established communities of practice on the NASA Engineering Network. These communities were strategically aligned with NASA's core competencies in such disciplines as avionics, flight mechanics, life support, propulsion, structures, loads and dynamics, human factors, and guidance, navigation, and control. This paper is a case study of NASA's implementation of a system that would identify and develop communities, from establishing simple websites that compiled discipline-specific resources to fostering a knowledge-sharing environment through collaborative and interactive technologies. It includes qualitative evidence of improved availability and transfer of knowledge. It focuses on capabilities that increased knowledge exchange such as a custom-made Ask An Expert system, community contact lists, publication of key resources, and submission forms that allowed any user to propose content for the sites. It discusses the peer relationships that developed through the communities and the leadership and infrastructure that made them possible.
NASA Astrophysics Data System (ADS)
Hagler, LaTesha R.
As growing numbers of students from historically underrepresented populations transfer from community colleges to universities to pursue baccalaureate degrees in science, technology, engineering, and mathematics (STEM), little research exists about the challenges and successes Latino students experience as they transition from 2-year colleges to 4-year universities. Thus, institutions of higher education have limited insight to inform their policies, practices, and strategic planning in developing effective sources of support, services, and programs for underrepresented students in STEM disciplines. This qualitative research study explored the academic and social experiences of 14 Latino engineering community college transfer students at one university. Specifically, this study examined the lived experiences of minority community college transfer students' transition into and persistence at a 4-year institution. The conceptual framework applied to this study was Schlossberg's Transition Theory, which was used to analyze the participants' social and academic experiences that led to their successful transition from community college to university. Three themes emerged from the narrative data analysis: (a) Academic Experiences, (b) Social Experiences, and (c) Sources of Support. The findings indicate that engineering community college transfer students experience many challenges in their transition into and persistence at 4-year institutions. Some of the challenges include lack of academic preparedness, environmental challenges, lack of time-management skills, and faculty serving the role of institutional agents.
Engineering success: Undergraduate Latina women's persistence in an undergraduate engineering program
NASA Astrophysics Data System (ADS)
Rosbottom, Steven R.
The purpose and focus of this narrative inquiry case study were to explore the personal stories of four undergraduate Latina students who persist in their engineering programs. This study was guided by two overarching research questions: (a) What are the lived experiences of undergraduate Latina engineering students? (b) What are the contributing factors that influence undergraduate Latina students to persist in an undergraduate engineering program? Yosso's (2005) community cultural wealth framework was used to analyze the data. Findings suggest that through Yosso's (2005) aspirational capital, familial capital, social capital, navigational capital, and resistant capital, the Latina students persisted in their engineering programs. These contributing factors brought to light five themes: the discovery of academic passions, guidance and support of family and teachers, preparation for and commitment to persistence, the power of community and collective engagement, and commitment to helping others. The themes supported the students' persistence in their engineering programs. Thus, this study informs policies, practices, and programs that support undergraduate Latina engineering students' persistence in engineering programs.
Reference System of DNA and Protein Sequences on CD-ROM
NASA Astrophysics Data System (ADS)
Nasu, Hisanori; Ito, Toshiaki
DNASIS-DBREF31 is a database of DNA and protein sequences on optical Compact Disc (CD) ROM, developed and commercialized by Hitachi Software Engineering Co., Ltd. Both nucleic acid base sequences and protein amino acid sequences can be retrieved from a single CD-ROM. Existing databases are offered in the form of on-line services, floppy disks, or magnetic tape, all of which have one problem or another, such as limited usability or storage capacity. DNASIS-DBREF31 newly adopts CD-ROM as the database medium to realize mass storage and personal use of the database.
Aerodynamic Characteristics, Database Development and Flight Simulation of the X-34 Vehicle
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Brauckmann, Gregory J.; Ruth, Michael J.; Fuhrmann, Henri D.
2000-01-01
An overview of the aerodynamic characteristics, development of the preflight aerodynamic database, and flight simulation of the NASA/Orbital X-34 vehicle is presented in this paper. To develop the aerodynamic database, wind tunnel tests from subsonic to hypersonic Mach numbers, including ground-effect tests at low subsonic speeds, were conducted in various facilities at the NASA Langley Research Center. Where wind tunnel test data were not available, engineering-level analysis was used to fill the gaps in the database. Using this aerodynamic data, simulations have been performed for typical design reference missions of the X-34 vehicle.
ERIC Educational Resources Information Center
Dancz, Claire L. A.; Ketchman, Kevin J.; Burke, Rebekah D.; Hottle, Troy A.; Parrish, Kristen; Bilec, Melissa M.; Landis, Amy E.
2017-01-01
While many institutions express interest in integrating sustainability into their civil engineering curriculum, the engineering community lacks consensus on established methods for infusing sustainability into curriculum and verified approaches to assess engineers' sustainability knowledge. This paper presents the development of a sustainability…
ERIC Educational Resources Information Center
Cox, Monica F.; Cekic, Osman; Adams, Stephanie G.
2010-01-01
The engineering education community (motivated by internal and external factors) has begun to focus on leadership abilities of college students in engineering fields via reports from ABET, the National Academy of Engineering, and the National Research Council. These reports have directed criticism toward higher education institutions for their…
Communication as Part of the Engineering Skills Set
ERIC Educational Resources Information Center
Lappalainen, Pia
2009-01-01
Engineering graduates are facing changing requirements regarding their competencies, as interdisciplinarity and globalization have transformed engineering communities into collaboration arenas extending beyond uniform national, cultural, contextual and disciplinary settings and structures. Engineers no longer manage their daily tasks with plain…
Using State Student Unit Record Data to Increase Community College Student Success
ERIC Educational Resources Information Center
Ewell, Peter; Jenkins, Davis
2008-01-01
This chapter examines lessons learned by states that are using student unit record (SUR) data to improve outcomes for community college students and recommends steps states can take to strengthen their use of SUR databases to benefit students and communities. (Contains 1 exhibit.)
Connecting Urban Students with Engineering Design: Community-Focused, Student-Driven Projects
ERIC Educational Resources Information Center
Parker, Carolyn; Kruchten, Catherine; Moshfeghian, Audrey
2017-01-01
The STEM Achievement in Baltimore Elementary Schools (SABES) program is a community partnership initiative that includes both in-school and afterschool STEM education for grades 3-5. It was designed to broaden participation and achievement in STEM education by bringing science and engineering to the lives of low-income urban elementary school…
ERIC Educational Resources Information Center
Schon, James F.
In order to identify the distinguishing characteristics of technical education programs in engineering and industrial technology currently offered by post-secondary institutions in California, a body of data was collected by visiting 25 community colleges, 5 state universities, and 8 industrial firms; by a questionnaire sampling of 72 California…
Toyoda, Tetsuro
2011-01-01
Synthetic biology requires both engineering efficiency and compliance with safety guidelines and ethics. Focusing on the rational construction of biological systems based on engineering principles, synthetic biology depends on a genome-design platform to explore combinations of multiple biological components, or BioBricks, for quickly producing innovative devices. This chapter explains the differences among various platform models and details a methodology for promoting open innovation within the scope of the statutory exemption of patent laws. The detailed platform adopts a centralized evaluation model (CEM), computer-aided design (CAD) bricks, and a freemium model. It is also important for the platform to support the legal aspects of copyrights as well as patent and safety guidelines, because intellectual work, including DNA sequences designed rationally by human intelligence, is basically copyrightable. An informational platform with high traceability, transparency, auditability, and security is required for copyright proof, safety compliance, and incentive management for open innovation in synthetic biology. GenoCon, which we have organized and explain here, is a competition-styled open-innovation method involving worldwide participants from scientific, commercial, and educational communities that aims to improve the designs of genomic sequences that confer a desired function on an organism. Using only a Web browser, a participating contributor proposes a design expressed with CAD bricks that generate a relevant DNA sequence, which is then experimentally and intensively evaluated by the GenoCon organizers. The CAD bricks, which comprise programs and databases as a Semantic Web, are developed, executed, shared, reused, and well stocked on the secure Semantic Web platform called the Scientists' Networking System (SciNetS/SciNeS), on which a CEM research center for synthetic biology and open innovation should be established. Copyright © 2011 Elsevier Inc. All rights reserved.
Proceedings of the 11th Thermal and Fluids Analysis Workshop
NASA Astrophysics Data System (ADS)
Sakowski, Barbara
2002-07-01
The Eleventh Thermal & Fluids Analysis Workshop (TFAWS 2000) was held the week of August 21-25 at The Forum in downtown Cleveland. This year's annual event focused on building stronger links between the research community and the engineering design/application world, celebrating the theme "Bridging the Gap Between Research and Design". Dr. Simon Ostrach delivered the keynote address, "Research for Design (R4D)", and encouraged a more deliberate approach to performing research with near-term engineering design applications in mind. Over 100 persons attended TFAWS 2000, including participants from five different countries. This year's conference devoted a full-day seminar to the discussion of analysis and design tools associated with aeropropulsion research at the Glenn Research Center. As in previous years, the workshop also included hands-on instruction in state-of-the-art analysis tools, paper sessions on selected topics, short courses, and application software demonstrations. TFAWS 2000 was co-hosted by the Thermal/Fluids Systems Design and Analysis Branch of NASA GRC and by the Ohio Aerospace Institute, and was co-chaired by Barbara A. Sakowski and James R. Yuko. The annual NASA delegates meeting is a standard component of TFAWS, where civil servants of the various centers represented discuss current and future events affecting the Community of Applied Thermal and Fluid ANalystS (CATFANS). At this year's delegates meeting, the collective body of delegates set the following goals (among others): (1) participation of all Centers in the NASA material properties database (TPSX) update; (2) developing and collaboratively supporting multi-center proposals; (3) expanding the scope of TFAWS to include other federal laboratories; (4) initiation of white papers on thermal tools and standards; and (5) formation of an Agency-wide TFAWS steering committee.
ERIC Educational Resources Information Center
CEDEFOP Flash, 1993
1993-01-01
During 1992, CEDEFOP (the European Centre for the Development of Vocational Training) commissioned two projects to investigate the current situation with regard to databases on vocational qualifications in Member States of the European Community (EC) and possibilities for networking such databases. Results of these two studies were presented and…
PBL and CDIO: complementary models for engineering education development
NASA Astrophysics Data System (ADS)
Edström, Kristina; Kolmos, Anette
2014-09-01
This paper compares two models for reforming engineering education: problem/project-based learning (PBL) and conceive-design-implement-operate (CDIO), identifying and explaining similarities and differences. PBL and CDIO are defined and contrasted in terms of their history, community, definitions, curriculum design, relation to disciplines, engineering projects, and change strategy. The structured comparison is intended as an introduction for learning about either of these models. It also invites reflection to support the understanding and evolution of PBL and CDIO, and indicates specifically what the communities can learn from each other. It is noted that while the two approaches share many underlying values, they only partially overlap as strategies for educational reform. The conclusions are that practitioners have much to learn from each other's experiences through a dialogue between the communities, and that PBL and CDIO can play compatible and mutually reinforcing roles, and thus can be fruitfully combined to reform engineering education.
LymPHOS 2.0: an update of a phosphosite database of primary human T cells
Nguyen, Tien Dung; Vidal-Cortes, Oriol; Gallardo, Oscar; Abian, Joaquin; Carrascal, Montserrat
2015-01-01
LymPHOS is a web-oriented database containing peptide and protein sequences and spectrometric information on the phosphoproteome of primary human T-lymphocytes. The current release, 2.0, contains 15,566 phosphorylation sites from 8,273 unique phosphopeptides and 4,937 proteins, corresponding to a 45-fold increase over the original database description. It now includes quantitative data on phosphorylation changes after time-dependent treatment with activators of the TCR-mediated signal transduction pathway. Sequence data quality has also been improved with the use of multiple search engines for database searching. LymPHOS can be publicly accessed at http://www.lymphos.org. Database URL: http://www.lymphos.org. PMID:26708986
Metabolic Engineering X Conference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Evan
The International Metabolic Engineering Society (IMES) and the Society for Biological Engineering (SBE), both technological communities of the American Institute of Chemical Engineers (AIChE), hosted the Metabolic Engineering X Conference (ME-X) on June 15-19, 2014 at the Westin Bayshore in Vancouver, British Columbia. It attracted 395 metabolic engineers from academia, industry and government from around the globe.
NCBI GEO: archive for functional genomics data sets--10 years on.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Muertter, Rolf N; Holko, Michelle; Ayanbule, Oluwabukunmi; Yefanov, Andrey; Soboleva, Alexandra
2011-01-01
A decade ago, the Gene Expression Omnibus (GEO) database was established at the National Center for Biotechnology Information (NCBI). The original objective of GEO was to serve as a public repository for high-throughput gene expression data generated mostly by microarray technology. However, the research community quickly applied microarrays to non-gene-expression studies, including examination of genome copy number variation and genome-wide profiling of DNA-binding proteins. Because the GEO database was designed with a flexible structure, it was possible to quickly adapt the repository to store these data types. More recently, as the microarray community switches to next-generation sequencing technologies, GEO has again adapted to host these data sets. Today, GEO stores over 20,000 microarray- and sequence-based functional genomics studies, and continues to handle the majority of direct high-throughput data submissions from the research community. Multiple mechanisms are provided to help users effectively search, browse, download and visualize the data at the level of individual genes or entire studies. This paper describes recent database enhancements, including new search and data representation tools, as well as a brief review of how the community uses GEO data. GEO is freely accessible at http://www.ncbi.nlm.nih.gov/geo/.
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase the scalability and optimize the cost/performance footprint of some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
Shared patients: multiple health and social care contact.
Keene, J; Swift, L; Bailey, S; Janacek, G
2001-07-01
The paper describes results from the 'Tracking Project', a new method for examining agency overlap, repeat service use and shared clients/patients amongst social and health care agencies in the community. This is the first project in this country to combine total population databases from a range of social, health care and criminal justice agencies into a multidisciplinary database for one county (n = 97,162 cases), through standardised anonymisation of agency databases using Soundex, a phonetic coding algorithm. A range of 20 community social and health care agencies were shown to have a large overlap with each other over a two-year period, indicating high proportions of shared patients/clients. Accident and Emergency is used as an example of major overlap: 16.2% of persons who attended a community agency (n = 39,992) had attended Accident and Emergency, as compared to 8.2% of the total population of the county (n = 775,000). Of those who had attended seven or more different community agencies, 96% had also attended Accident and Emergency. Further statistical analysis of Accident and Emergency attendance as a characteristic of community agency populations (n = 39,992) revealed that increasing frequency of attendance at Accident and Emergency was very strongly associated with increasing use of other services. That is, patients who repeatedly attend Accident and Emergency are much more likely to attend other agencies, indicating the possibility that agencies share their more problematic or difficult patients. Research questions arising from these data are discussed, and future research methods are suggested for deriving predictors from the database and developing screening instruments to identify multiple agency attenders for targeting or multidisciplinary working. It is suggested that Accident and Emergency attendance might serve as an important predictor of multiple agency attendance.
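The anonymisation step described above relies on Soundex coding, which maps similar-sounding names to the same short code so records can be linked across agencies without storing names in clear. The sketch below is a minimal implementation of the classic American Soundex algorithm; the `soundex` function name is illustrative and is not the project's actual software.

```python
def soundex(name: str) -> str:
    """Classic American Soundex: first letter plus three digits, zero-padded."""
    digit_for = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            digit_for[ch] = digit
    name = "".join(c for c in name.lower() if "a" <= c <= "z")
    if not name:
        return ""
    out = [name[0].upper()]
    prev = digit_for.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":          # h and w are transparent: the previous code persists
            continue
        if ch in "aeiouy":      # vowels separate otherwise-adjacent duplicate codes
            prev = ""
            continue
        code = digit_for[ch]
        if code != prev:        # collapse adjacent letters sharing the same code
            out.append(code)
        prev = code
    return ("".join(out) + "000")[:4]  # pad or truncate to four characters

# Two differently spelled surnames map to the same anonymised code:
print(soundex("Robert"), soundex("Rupert"))  # R163 R163
```

Because variant spellings collapse to one code, the same person appearing in several agency databases can be matched on the code alone, at the cost of occasional false matches between genuinely different names.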
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durfee, Justin David; Frazier, Christopher Rawls; Bandlow, Alisa
Sandia National Laboratories (Sandia) is in Phase 3 Sustainment of development of a prototype tool, currently referred to as the Contingency Contractor Optimization Tool - Prototype (CCOT-P), under the direction of OSD Program Support. CCOT-P is intended to help provide senior Department of Defense (DoD) leaders with comprehensive insight into the global availability, readiness and capabilities of the Total Force Mix. CCOT-P will allow senior decision makers to quickly and accurately assess the impacts, risks and mitigating strategies for proposed changes to force/capabilities assignments, apportionments and allocations options, focusing specifically on contingency contractor planning. During Phase 2 of the program, conducted during fiscal year 2012, Sandia developed an electronic storyboard prototype of the Contingency Contractor Optimization Tool that can be used for communication with senior decision makers and other Operational Contract Support (OCS) stakeholders. Phase 3 used feedback from demonstrations of the electronic storyboard prototype to develop an engineering prototype for planners to evaluate. Sandia worked with the DoD and Joint Chiefs of Staff strategic planning community to get feedback and input to ensure that the engineering prototype was developed to closely align with future planning needs. The intended deployment environment was also a key consideration as this prototype was developed. Initial release of the engineering prototype was done on servers at Sandia in the middle of Phase 3. In 2013, the tool was installed on a production pilot server managed by the OUSD(AT&L) eBusiness Center. The purpose of this document is to specify the CCOT-P engineering prototype platform requirements as of May 2016. Sandia developed the CCOT-P engineering prototype using common technologies to minimize the likelihood of deployment issues. The CCOT-P engineering prototype was architected and designed to be as independent as possible of the major deployment components, such as the server hardware, the server operating system, the database, and the web server. This document describes the platform requirements, the architecture, and the implementation details of the CCOT-P engineering prototype.
Recent NASA Wake-Vortex Flight Tests, Flow-Physics Database and Wake-Development Analysis
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.; Vijgen, Paul M.; Reimer, Heidi M.; Gallegos, Joey L.; Spalart, Philippe R.
1998-01-01
A series of flight tests over the ocean of a four engine turboprop airplane in the cruise configuration have provided a data set for improved understanding of wake vortex physics and atmospheric interaction. An integrated database has been compiled for wake characterization and validation of wake-vortex computational models. This paper describes the wake-vortex flight tests, the data processing, the database development and access, and results obtained from preliminary wake-characterization analysis using the data sets.
SQL/NF Translator for the Triton Nested Relational Database System
1990-12-01
AFIT/GCE/ENG/90D-05. SQL/NF Translator for the Triton Nested Relational Database System. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology by Craig William Schnepf, Captain, USAF. The SQL/NF query language used for the nested relational model is an extension of the popular relational model query language SQL.
1985-12-01
AFIT/GCS/ENG/85D-7. Relational to Network Query Translator for a Distributed Database Management System. Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems, by Kevin H. Mahoney, Captain, USAF.
Implementation of the FAA research and development electromagnetic database
NASA Technical Reports Server (NTRS)
Mcdowall, R. L.; Grush, D. J.; Cook, D. M.; Glynn, M. S.
1991-01-01
The Idaho National Engineering Laboratory (INEL) has been assisting the FAA in developing a database of information about lightning. The FAA Research and Development Electromagnetic Database (FRED) will ultimately contain data from a variety of airborne and ground-based lightning research projects. An outline of the data currently available in FRED is presented, and the data sources which the FAA intends to incorporate into FRED are listed. In addition, the paper describes how researchers may access and use the FRED menu system.
Engineering for Native Americans.
ERIC Educational Resources Information Center
Jarosz, Jeffrey
2003-01-01
The engineering workforce is overwhelmingly male and White. To attract and retain American Indian and other minority-group students, engineering programs must offer practical, hands-on, team activities; show that engineering is beneficial to society and Indian communities; use inclusive textbooks; offer distance-learning opportunities; and…
van Wieren-de Wijer, Diane B M A; Maitland-van der Zee, Anke-Hilse; de Boer, Anthonius; Stricker, Bruno H Ch; Kroon, Abraham A; de Leeuw, Peter W; Bozkurt, O; Klungel, Olaf H
2009-04-01
To describe the design, recruitment and baseline characteristics of participants in a community pharmacy-based pharmacogenetic study of antihypertensive drug treatment. Participants were enrolled from the population-based Pharmaco-Morbidity Record Linkage System (PHARMO). We designed a nested case-control study in which we will assess whether specific genetic polymorphisms modify the effect of antihypertensive drugs on the risk of myocardial infarction. In this study, cases (myocardial infarction) and controls were recruited through community pharmacies that participate in PHARMO. The PHARMO database comprises the drug dispensing histories of about 2,000,000 subjects from a representative sample of Dutch community pharmacies, linked to the national registrations of hospital discharges. In total we selected 31,010 patients (2,777 cases and 28,233 controls) from the PHARMO database, of whom 15,973 (1,871 cases, 14,102 controls) were approached through their community pharmacy. The overall response rate was 36.3% (n = 5,791; 794 cases, 4,997 controls), and 32.1% (n = 5,126; 701 cases, 4,425 controls) gave informed consent to genotyping of their DNA. As expected, several cardiovascular risk factors such as smoking, body mass index, hypercholesterolemia, and diabetes mellitus were more common in cases than in controls. Furthermore, cases more often used beta-blockers and calcium-antagonists, whereas controls more often used thiazide diuretics, ACE-inhibitors, and angiotensin-II receptor blockers. We have demonstrated that it is feasible to select patients from a coded database for a pharmacogenetic study and to approach them through community pharmacies, achieving reasonable response rates without violating privacy rules.
PCACE-Personal-Computer-Aided Cabling Engineering
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1987-01-01
The PCACE computer program was developed to provide an inexpensive, interactive system for learning and using an engineering approach to interconnection systems. It is basically a database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. It directly emulates typical manual engineering methods of handling data, making the interface between user and program very natural. The Apple version is written in P-Code Pascal, and the IBM PC version of PCACE is written in TURBO Pascal 3.0.
Seniors' Online Communities: A Quantitative Content Analysis
ERIC Educational Resources Information Center
Nimrod, Galit
2010-01-01
Purpose: To examine the contents and characteristics of seniors' online communities and to explore their potential benefits to older adults. Design and Methods: Quantitative content analysis of a full year's data from 14 leading online communities using a novel computerized system. The overall database included 686,283 messages. Results: There was…
Humanitarian engineering placements in our own communities
NASA Astrophysics Data System (ADS)
VanderSteen, J. D. J.; Hall, K. R.; Baillie, C. A.
2010-05-01
There is an increasing interest in the humanitarian engineering curriculum, and a service-learning placement could be an important component of such a curriculum. International placements offer some important pedagogical advantages, but also have some practical and ethical limitations. Local community-based placements have the potential to be transformative for both the student and the community, although this potential is not always seen. In order to investigate the role of local placements, qualitative research interviews were conducted. Thirty-two semi-structured research interviews were conducted and analysed, resulting in a distinct outcome space. It is concluded that local humanitarian engineering placements greatly complement international placements and are strongly recommended if international placements are conducted. More importantly it is seen that we are better suited to address the marginalised in our own community, although it is often easier to see the needs of an outside populace.
Biogeography of anaerobic ammonia-oxidizing (anammox) bacteria
Sonthiphand, Puntipar; Hall, Michael W.; Neufeld, Josh D.
2014-01-01
Anaerobic ammonia-oxidizing (anammox) bacteria are able to oxidize ammonia and reduce nitrite to produce N2 gas. After being discovered in a wastewater treatment plant (WWTP), anammox bacteria were subsequently characterized in natural environments, including marine, estuary, freshwater, and terrestrial habitats. Although anammox bacteria play an important role in removing fixed N from both engineered and natural ecosystems, broad scale anammox bacterial distributions have not yet been summarized. The objectives of this study were to explore global distributions and diversity of anammox bacteria and to identify factors that influence their biogeography. Over 6000 anammox 16S rRNA gene sequences from the public database were analyzed in this current study. Data ordinations indicated that salinity was an important factor governing anammox bacterial distributions, with distinct populations inhabiting natural and engineered ecosystems. Gene phylogenies and rarefaction analysis demonstrated that freshwater environments and the marine water column harbored the highest and the lowest diversity of anammox bacteria, respectively. Co-occurrence network analysis indicated that Ca. Scalindua strongly connected with other Ca. Scalindua taxa, whereas Ca. Brocadia co-occurred with taxa from both known and unknown anammox genera. Our survey provides a better understanding of ecological factors affecting anammox bacterial distributions and provides a comprehensive baseline for understanding the relationships among anammox communities in global environments. PMID:25147546