Comprehensive Routing Security Development and Deployment for the Internet
2015-02-01
feature enhancements and bug fixes. • MySQL: a widely used and popular open source database package, chosen for database support in the...RPSTIR depends on several other open source packages. • MySQL: used for the local RPKI database cache. • OpenSSL: used for...cryptographic libraries for X.509 certificates. • ODBC MySQL Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for
2010-09-01
2. SCIL Architecture ... 3. Assertions ... LIST OF FIGURES: Figure 1. SCIL architecture ... Database Connectivity; LAN Local Area Network; ODBC Open Database Connectivity; SCIL Social-Cultural Content in Language; UMD
Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model
2014-04-01
time savings in the Java coding development process. Appendices A and B describe setup procedures for installing the MySQL database...development environment is required: • the open source MySQL Database Management System (DBMS) from Oracle, a Java Database Connectivity (JDBC)...compliant DBMS • the MySQL JDBC Driver library that comes as a plug-in with the NetBeans distribution • the latest Java Development Kit with the latest
Code of Federal Regulations, 2013 CFR
2013-10-01
... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...
Code of Federal Regulations, 2014 CFR
2014-10-01
Code of Federal Regulations, 2012 CFR
2012-10-01
Code of Federal Regulations, 2010 CFR
2010-10-01
Code of Federal Regulations, 2011 CFR
2011-10-01
... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...
A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.
ERIC Educational Resources Information Center
Breeding, Marshall
2000-01-01
Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
Curriculum Connection. Take Technology Outdoors.
ERIC Educational Resources Information Center
Dean, Bruce Robert
1992-01-01
Technology can support hands-on science as elementary students use computers to formulate field guides to nature surrounding their school. Students examine other field guides; open databases for recording information; collect, draw, and identify plants, insects, and animals; enter data into the database; then generate a computerized field guide.…
ERMes: Open Source Simplicity for Your E-Resource Management
ERIC Educational Resources Information Center
Doering, William; Chilton, Galadriel
2009-01-01
ERMes, the latest version of electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate as the database utilizes functionality not available in previous versions of Microsoft Access. The…
Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python
NASA Astrophysics Data System (ADS)
Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor
2017-04-01
As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need to adopt in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends, and spotting anomalies. Although a number of open-source spatial analysis libraries such as geopandas and shapely are available today, most of them are restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, that fills the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform, Bluemix. Working in-database reduces network overhead, since the complete dataset need not be replicated to the user's local system; only the required subset is fetched into memory at any one time. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package currently supports Python versions 2.7 through 3.4.
The basic architecture of the package consists of three main components: 1) a connection to dashDB, represented by the IdaDataBase instance, which uses a middleware API (pypyodbc or jaydebeapi) to establish the database connection via ODBC or JDBC respectively; 2) the IdaGeoDataFrame, an instance representing the spatial data stored in the database as a dataframe in Python, with a specific geometry attribute that recognises a planar geometry column in dashDB; and 3) Python wrappers for spatial functions such as within, distance, area, buffer and more, which dashDB currently supports, making the querying process from Python much simpler for users. The spatial functions translate well-known geopandas-like syntax into SQL queries, utilising the database connection to perform spatial operations in-database, and can operate on single geometries as well as on two geometries from different IdaGeoDataFrames. The in-database queries strictly follow the OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations can then be accessed dynamically via interactive Jupyter notebooks from any system that supports Python, without additional dependencies, and can also be combined with other open source libraries such as matplotlib and folium within Jupyter notebooks for visualization. We built a use case analysing crime hotspots in New York City to validate our implementation and visualized the results as a choropleth map for each borough.
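The syntax-to-SQL translation idea described in this abstract can be sketched with a minimal, self-contained Python function. Note that the table name, geometry column, and the DB2GSE schema usage below are illustrative assumptions, not the actual internals of ibmdbpy-spatial:

```python
# Sketch of translating a geopandas-like spatial call into an SQL query
# that runs inside the database, as ibmdbpy-spatial does. The table name,
# geometry column, SRID and DB2GSE function spelling are illustrative
# assumptions, not the package's real implementation.

def within_query(table: str, geom_col: str, wkt: str, srid: int = 1000) -> str:
    """Build an SQL query selecting rows whose geometry lies within a
    given well-known-text polygon, using a DB2-style spatial predicate
    evaluated in-database."""
    return (
        f"SELECT * FROM {table} "
        f"WHERE DB2GSE.ST_Within({geom_col}, "
        f"DB2GSE.ST_Polygon('{wkt}', {srid})) = 1"
    )

# A hypothetical crime-hotspot query like the New York City use case:
query = within_query("NYC_CRIMES", "LOCATION", "polygon((...))")
print(query)
```

The point of the sketch is that the client never fetches the geometry data; it only ships a query string, and the spatial predicate executes next to the data.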
Quirós, Miguel; Gražulis, Saulius; Girdzijauskaitė, Saulė; Merkys, Andrius; Vaitkus, Antanas
2018-05-18
Computer descriptions of chemical molecular connectivity are necessary for searching chemical databases and for predicting chemical properties from molecular structure. In this article, the ongoing work to describe the chemical connectivity of entries contained in the Crystallography Open Database (COD) in SMILES format is reported. This collection of SMILES is publicly available for chemical (substructure) search or for any other purpose on an open-access basis, as is the COD itself. The conventions that have been followed for the representation of compounds that do not fit into valence bond theory are outlined for the most frequently found cases. The procedure for deriving the SMILES from the CIF files starts with checking whether the atoms in the asymmetric unit are a chemically acceptable image of the compound. When they are not (molecule on a symmetry element, disorder, polymeric species, etc.), the previously published cif_molecule program is used to obtain such an image in many cases. The Open Babel program package is then applied to generate SMILES strings from the CIF files (either those taken directly from the COD or those produced by cif_molecule when applicable). The results are then checked and/or fixed by a human editor, in a computer-aided task that at present still consumes a great deal of human time. Even if the procedure still needs to be improved to make it more automatic (and hence faster), it has already yielded more than 160,000 curated chemical structures, and the purpose of this article is to announce the existence of this work to the chemical community as well as to spread the use of its results.
2007-09-19
extended object relations such as boundary, interior, open, closed, within, connected, and overlaps, which are invariant under elastic deformation...is required in a geo-spatial semantic web is challenging because the defining properties of geographic entities are very closely related to space. In...Objects under Primitive will be open (i.e., they will not contain their boundary points) and the objects under Complex will be closed. In addition to
Walsh, John; Roberts, Ruth; Morris, Richard
2015-01-01
Patients with diabetes have to take numerous factors and data into account in their daily therapeutic decisions. Connecting the devices they use and feeding the generated data into a database or app is supposed to help patients optimize their glycemic control. As this is not yet established in practice, the various roadblocks have to be discussed in order to open the road. That large telecommunication companies are now entering this market might be a big help in pushing this forward. Smartphones offer an ideal platform for connectivity solutions. PMID:25614015
Abstracting data warehousing issues in scientific research.
Tews, Cody; Bracio, Boris R
2002-01-01
This paper presents the design and implementation of the Idaho Biomedical Data Management System (IBDMS). This system preprocesses biomedical data from the IMPROVE (Improving Control of Patient Status in Critical Care) library via an Open Database Connectivity (ODBC) connection. The ODBC connection allows local and remote simulations to access filtered, joined, and sorted data using the Structured Query Language (SQL). The tool is capable of providing an overview of available data in addition to user-defined data subsets for verification of models of the human respiratory system.
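The kind of filtered, joined, and sorted SQL access described here can be illustrated with Python's built-in sqlite3 module standing in for the ODBC connection; the tables and columns below are invented examples, not the IMPROVE library's actual schema:

```python
import sqlite3

# In-memory database standing in for the data source reached over ODBC
# in the paper; the tables and columns here are invented examples.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patients(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE samples(patient_id INTEGER, t REAL, spo2 REAL);
    INSERT INTO patients VALUES (1, 'A'), (2, 'B');
    INSERT INTO samples VALUES (1, 0.0, 97.0), (1, 1.0, 93.5), (2, 0.0, 99.0);
""")

# A filtered, joined and sorted subset, as a simulation front end
# might request it before model verification.
rows = con.execute("""
    SELECT p.name, s.t, s.spo2
    FROM samples AS s JOIN patients AS p ON p.id = s.patient_id
    WHERE s.spo2 < 98.0
    ORDER BY s.t
""").fetchall()
print(rows)  # [('A', 0.0, 97.0), ('A', 1.0, 93.5)]
```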
jSPyDB, an open source database-independent tool for data management
NASA Astrophysics Data System (ADS)
Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo
2011-12-01
Nowadays, the number of commercial tools for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: usually they are not open source, they provide interfaces only to a specific kind of database, and they are platform-dependent and very CPU- and memory-consuming. jSPyDB is a free web-based tool written in Python and Javascript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a backend server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to the jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users are allowed to create their own customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since we do not give users the ability to directly execute arbitrary SQL statements.
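The server-side "make data available to the client as JSON" step described here can be sketched with only the standard library; the function name, table, and use of sqlite3 are illustrative assumptions (jSPyDB itself uses SQLAlchemy and jQuery):

```python
import json
import sqlite3

def export_json(con, table):
    """Read a whole table and serialize it as JSON keyed by column
    names, the way a web backend could hand query results to a
    browser client. Illustrative sketch, not jSPyDB's actual code."""
    cur = con.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runs(id INTEGER, status TEXT)")
con.execute("INSERT INTO runs VALUES (1, 'ok'), (2, 'failed')")
print(export_json(con, "runs"))
# [{"id": 1, "status": "ok"}, {"id": 2, "status": "failed"}]
```

Keeping the database connection on the server side, as here, is what lets the tool pool connections and deny clients direct SQL execution.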
JNDMS Task Authorization 2 Report
2013-10-01
uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take...ITI Information Technology Infrastructure; J2EE Java 2 Enterprise Edition; JAR Java Archive, an archive file format defined by Java...standards; JDBC Java Database Connectivity; JDW JNDMS Data Warehouse; JNDMS Joint Network and Defence Management System
NASA Astrophysics Data System (ADS)
Bashev, A.
2012-04-01
Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of these databases are their creators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for developers and complication for users. The complexity of the architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps lack these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, now accessible at maps.sch192.ru. In this database a result is the value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted into a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised on entry. This is the minimal set of features required to connect the value of a parameter with a position and see the results.
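The upload format just listed can be parsed in a few lines of standard-library Python. The semicolon delimiter is an assumption read off the field list; the function and field names are invented for illustration:

```python
import csv
import io

def parse_upload(text):
    """Parse an upload in the column order the abstract specifies:
    station name; latitude; longitude; station type; parameter type;
    parameter value; date. Delimiter and field names are assumptions."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=';'):
        name, lat, lon, stype, ptype, value, date = [f.strip() for f in rec]
        rows.append({
            "station": name, "lat": float(lat), "lon": float(lon),
            "station_type": stype, "parameter": ptype,
            "value": float(value), "date": date,
        })
    return rows

data = parse_upload("Lake-1;55.123456;037.654321;lake;EC;120;2011-07-15")
print(data[0]["station"], data[0]["lat"], data[0]["parameter"])
```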
All the complicated data treatment can be conducted in other programs after extracting the filtered data into a *.csv file. This makes the database understandable for non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with GoogleMaps, and jQueryUI to create the user interface. The database is multilingual: there are association tables which connect to elements of the database. In total the development required about 150 hours. The database still has several problems. The main problem is the reliability of the data. It really needs an expert system for estimating reliability, but the elaboration of such a system would take more resources than the database itself. The second problem is the problem of stream selection: how to select the stations that are connected with each other (for example, belong to one water stream) and indicate their sequence. Currently the interface is in English and Russian; however, it can easily be translated into your language. Some problems we have already solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the error of their positions): when adding a new station to the database, our application automatically finds stations near that place. We have also solved the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter); this has been solved using the association tables. If you would like to see the interface in your language, just contact us; we will send you the list of terms and phrases for translation into your language. The main advantage of the database is that it is totally open: everybody can see and extract the data from the database and use them for non-commercial purposes at no charge. Registered users can contribute to the database without getting paid.
We hope that it will be widely used, first of all for educational purposes, but professional scientists could use it as well.
The One Universal Graph — a free and open graph database
NASA Astrophysics Data System (ADS)
Ng, Liang S.; Champion, Corbin
2016-02-01
Recent developments in graph databases are mostly huge projects involving big organizations, big operations and big capital, as the name Big Data attests. We propose the concept of the One Universal Graph (OUG), which states that all observable and known objects and concepts (physical, conceptual or digitally represented) can be connected in one single graph; furthermore, the OUG can be implemented with a very simple text file format and free software, capable of being executed on Android or smaller devices. As such, the One Universal Graph Data Exchange (GOUDEX) modules can potentially be installed on the hundreds of millions of Android devices and Intel-compatible computers shipped annually. Coupled with its open nature and its ability to connect to existing leading search engines and databases currently in operation, GOUDEX has the potential to become the largest and a better interface for users and programmers to interact with the data on the Internet. With a web user interface for users to use and program in a native Linux environment, Free Crowdware implemented on GOUDEX can help inexperienced users learn programming with better organized documentation for free software, and is able to manage a programmer's contribution down to a single line of code or a single variable in a software project. It can become the first practically realizable "Internet brain" on which a global artificial intelligence system can be implemented. Being practically free and open, the One Universal Graph can have significant applications in robotics, artificial intelligence and social networks.
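The abstract does not specify the "very simple text file format", so as a hedged illustration, one plausible minimal choice is an edge list with one connection per line; the `--` separator and the loader below are assumptions, not the actual GOUDEX layout:

```python
from collections import defaultdict

def load_graph(text):
    """Load an undirected graph from a minimal text format:
    one edge per line, 'subject -- object', with '#' comments.
    This format is an illustrative assumption, not GOUDEX's."""
    adj = defaultdict(set)
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        a, b = [p.strip() for p in line.split("--", 1)]
        adj[a].add(b)
        adj[b].add(a)
    return adj

g = load_graph("""
# objects and concepts, one connection per line
Earth -- Moon
Moon -- Apollo 11
""")
print(sorted(g["Moon"]))
```

A format this simple is what makes the claim of running on small Android devices plausible: parsing needs no database engine at all.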
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Center for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at: http://nbel-lab.com.
Statistical organelle dissection of Arabidopsis guard cells using image database LIPS.
Higaki, Takumi; Kutsuna, Natsumaro; Hosokawa, Yoichiroh; Akita, Kae; Ebine, Kazuo; Ueda, Takashi; Kondo, Noriaki; Hasezawa, Seiichiro
2012-01-01
To comprehensively grasp cell biological events in plant stomatal movement, we have captured microscopic images of guard cells with various organelle markers. The 28,530 serial optical sections of 930 pairs of Arabidopsis guard cells have been released as a new image database, named Live Images of Plant Stomata (LIPS). We visualized the average organellar distributions in guard cells using probabilistic mapping and image clustering techniques. The results indicated that actin microfilaments and endoplasmic reticulum (ER) are mainly localized to the dorsal side and connection regions of guard cells. Subtractive images of open and closed stomata showed distribution changes in intracellular structures, including the ER, during stomatal movement. Time-lapse imaging showed that similar ER distribution changes occurred during stomatal opening induced by light irradiation or femtosecond laser shots on neighboring epidermal cells, indicating that our image analysis approach has identified a novel ER relocation in stomatal opening.
Clinical records anonymisation and text extraction (CRATE): an open-source software system.
Cardinal, Rudolf N
2017-04-26
Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
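CRATE's second function, mapping patient identifiers to research pseudonyms with public secure cryptographic methods, can be sketched with a keyed hash from the standard library. This is an illustration in the spirit of that design, not CRATE's actual scheme; the key, identifier format, and 16-character truncation are assumptions:

```python
import hmac
import hashlib

def pseudonym(patient_id: str, key: bytes) -> str:
    """Keyed-hash pseudonym: the same identifier always maps to the
    same research pseudonym, and the mapping cannot be reversed or
    reproduced without the secret key. Illustrative sketch only."""
    return hmac.new(key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

key = b"site-secret-key"       # assumed; held by the trusted party
p1 = pseudonym("NHS-1234567", key)
p2 = pseudonym("NHS-1234567", key)
assert p1 == p2                           # deterministic per key
assert p1 != pseudonym("NHS-7654321", key)  # distinct subjects differ
```

Determinism is what lets records for the same patient link up inside the research database while the clinical identifier never leaves the trusted side.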
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr
2016-07-01
RTS2 is an open source observatory control system. In development since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to an SQL database. Running on all continents of the planet, it has accumulated a great deal of functionality for controlling partial or full observatory setups.
Climate Signals: An On-Line Digital Platform for Mapping Climate Change Impacts in Real Time
NASA Astrophysics Data System (ADS)
Cutting, H.
2016-12-01
Climate Signals is an on-line digital platform for cataloging and mapping the impacts of climate change. The platform specifies and details the chains of connections between greenhouse gas emissions and individual climate events. Currently in open-beta release, the platform is designed to engage and serve the general public, news media, and policy-makers, particularly in real time during extreme climate events. Climate Signals consists of a curated relational database of events and their links to climate change, a mapping engine, and a gallery of climate change monitors offering real-time data. For each event in the database, an infographic engine provides a custom attribution "tree" that illustrates the connections to climate change. In addition, links to key contextual resources are aggregated and curated for each event. All event records are fully annotated with detailed source citations and corresponding hyperlinks. The system of attribution used to link events to climate change in real time is detailed here. This open-beta release is offered for public user testing and engagement. Launched in May 2016, the operation of this platform offers lessons for public engagement in climate change impacts.
Herasevich, Vitaly; Pickering, Brian W; Dong, Yue; Peters, Steve G; Gajic, Ognjen
2010-03-01
To develop and validate an informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Using open-schema data feeds imported from electronic medical records (EMRs), we developed a near-real-time relational database (Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart). Imported data domains included physiologic monitoring, medication orders, laboratory and radiologic investigations, and physician and nursing notes. Open database connectivity supported the use of Boolean combinations of data that allowed authorized users to develop syndrome surveillance, decision support, and reporting (data "sniffers") routines. Random samples of database entries in each category were validated against corresponding independent manual reviews. The Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart accommodates, on average, 15,000 admissions to the intensive care unit (ICU) per year and 200,000 vital records per day. Agreement between database entries and manual EMR audits was high for sex, mortality, and use of mechanical ventilation (kappa, 1.0 for all) and for age and laboratory and monitored data (Bland-Altman mean difference +/- SD, 1(0) for all). Agreement was lower for interpreted or calculated variables, such as specific syndrome diagnoses (kappa, 0.5 for acute lung injury), duration of ICU stay (mean difference +/- SD, 0.43+/-0.2), or duration of mechanical ventilation (mean difference +/- SD, 0.2+/-0.9). Extraction of essential ICU data from a hospital EMR into an open, integrative database facilitates process control, reporting, syndrome surveillance, decision support, and outcome research in the ICU.
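A data "sniffer" of the kind described, a Boolean combination of EMR-derived fields evaluated against the near-real-time database, can be sketched with sqlite3; the table, fields, and threshold below are invented examples, not the Data Mart's actual surveillance rules:

```python
import sqlite3

# Toy surveillance rule in the spirit of the paper's "sniffers":
# flag ventilated patients whose PaO2/FiO2 ratio is below 300.
# Table, columns and threshold are invented for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE obs(patient INTEGER, pao2_fio2 REAL, ventilated INTEGER);
    INSERT INTO obs VALUES (1, 250.0, 1), (2, 380.0, 1), (3, 220.0, 0);
""")

flagged = [row[0] for row in con.execute(
    "SELECT patient FROM obs WHERE pao2_fio2 < 300 AND ventilated = 1"
)]
print(flagged)  # [1]
```

The value of routing such rules through the database, rather than the EMR front end, is that the same Boolean query serves surveillance, decision support, and retrospective reporting unchanged.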
2013-08-08
theft in the CERT Insider Threat Database were associated with foreign social network connections (Verizon, "The 2013 Data Breach Investigations Report")...passwords, opening infected attachments or web sites, etc....were experienced by 38% of respondents • The 2013 Verizon Data Breach Report reveals • 29% of breaches studied leveraged social tactics • A
NASA Astrophysics Data System (ADS)
Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.
2015-12-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. 
For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
Integer sequence discovery from small graphs
Hoppe, Travis; Petrone, Anna
2015-01-01
We have exhaustively enumerated all simple, connected graphs of a finite order and have computed a selection of invariants over this set. Integer sequences were constructed from these invariants and checked against the Online Encyclopedia of Integer Sequences (OEIS). 141 new sequences were added and six sequences were extended. From the graph database, we were able to programmatically suggest relationships among the invariants. It will be shown that we can readily visualize any sequence of graphs with a given criteria. The code has been released as an open-source framework for further analysis and the database was constructed to be extensible to invariants not considered in this work. PMID:27034526
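The exhaustive enumeration of small connected graphs that underlies this work can be reproduced in miniature with a brute-force count over all edge subsets. This is a generic illustration of the technique, not the authors' released framework; the counts it produces for labeled connected graphs can be checked against the OEIS (A001187):

```python
from itertools import combinations

def count_connected(n):
    """Count labeled simple connected graphs on n vertices by brute
    force over all 2^C(n,2) edge subsets, checking connectivity with
    a depth-first search. Feasible only for small n, which is exactly
    the regime of exhaustive enumeration."""
    edges = list(combinations(range(n), 2))
    total = 0
    for mask in range(1 << len(edges)):
        adj = {v: set() for v in range(n)}
        for i, (a, b) in enumerate(edges):
            if mask >> i & 1:
                adj[a].add(b)
                adj[b].add(a)
        # DFS from vertex 0; the graph is connected iff we reach all n.
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += len(seen) == n
    return total

print([count_connected(n) for n in range(1, 5)])  # [1, 1, 4, 38]
```

From such enumerations one builds an integer sequence per invariant and looks it up in the OEIS, exactly the workflow the abstract describes.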
Design and implementation of a CORBA-based genome mapping system prototype.
Hu, J; Mungall, C; Nicholson, D; Archibald, A L
1998-01-01
CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk
Biermann, Martin
2014-04-01
Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the public domain CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems such as cancer of the thyroid and the prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
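The anonymize-on-import step described above can be sketched in a few lines. The example below uses Python's built-in sqlite3 as a stand-in for the MySQL backend; the table and column names are hypothetical, not taken from the paper.

```python
import sqlite3
import uuid

# sqlite3 stands in for the MySQL backend; schema names are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE subjects (pseudonym TEXT PRIMARY KEY, diagnosis TEXT)")
# The key list linking pseudonyms to identities lives in a restricted table.
db.execute("CREATE TABLE key_list (pseudonym TEXT PRIMARY KEY, national_id TEXT)")

def import_record(national_id, diagnosis):
    """Anonymize on import: store research data under a random
    pseudonym and log the identifying key separately."""
    pseudonym = uuid.uuid4().hex
    db.execute("INSERT INTO key_list VALUES (?, ?)", (pseudonym, national_id))
    db.execute("INSERT INTO subjects VALUES (?, ?)", (pseudonym, diagnosis))
    return pseudonym

p = import_record("01017012345", "thyroid cancer")
# The research table carries no direct identifier, only the pseudonym.
row = db.execute("SELECT diagnosis FROM subjects WHERE pseudonym = ?", (p,)).fetchone()
print(row[0])  # → thyroid cancer
```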
WEB-GIS Decision Support System for CO2 storage
NASA Astrophysics Data System (ADS)
Gaitanaru, Dragos; Leonard, Anghel; Radu Gogu, Constantin; Le Guen, Yvi; Scradeanu, Daniel; Pagnejer, Mihaela
2013-04-01
Environmental decision support system (DSS) paradigms evolve and change as more knowledge and technology become available to the environmental community. Geographic Information Systems (GIS) can be used to extract, assess and disseminate some types of information that are otherwise difficult to access by traditional methods. At the same time, with the help of the Internet and accompanying tools, creating and publishing online interactive maps has become easier and richer in options. The Decision Support System (MDSS) developed for the MUSTANG (A MUltiple Space and Time scale Approach for the quaNtification of deep saline formations for CO2 storaGe) project is a user-friendly web-based application that uses GIS capabilities. MDSS can be exploited by experts for CO2 injection and storage in deep saline aquifers. The main objective of the MDSS is to help experts make decisions based on large volumes of structured data and information. To achieve this objective the MDSS has a geospatial object-oriented database structure for a wide variety of data and information. The entire application is based on several principles leading to a series of capabilities and specific characteristics: (i) Open source: the entire platform (MDSS) is based on open-source technologies, including (1) the database engine, (2) the application server, (3) the geospatial server, (4) the user interfaces, and (5) add-ons. (ii) Multiple database connections: MDSS can connect to different databases located on different server machines. (iii) Desktop user experience: the MDSS architecture and design follow the structure of desktop software. (iv) Communication: the server side and the desktop are bound together by a series of functions that allow the user to upload, use, modify and download data within the application.
The architecture of the system involves one database and a modular application composed of: (1) a visualization module, (2) an analysis module, (3) a guidelines module, and (4) a risk assessment module. The database component is built using the open-source PostgreSQL and PostGIS technologies. The visualization module allows the user to view data from CO2 injection sites in different ways: (1) geospatial visualization, (2) table view, and (3) 3D visualization. The analysis module allows the user to perform analyses such as injectivity, containment and capacity analysis. The risk assessment module focuses on the site risk matrix approach. The guidelines module contains guidelines on methodologies for CO2 injection and storage into deep saline aquifers.
Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2008-12-01
This paper aims to present the development of an integrated hydrological model which involves functionalities of digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, the physics-based distributed hydrologic model PIHM (Penn State Integrated Hydrologic Model) is wrapped into the OpenMI (Open Modeling Interface and Environment) environment so as to seamlessly interact with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open-source GIS application MapWindow, which permits functionality expansion through the addition of plug-ins. Modules required to set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. They are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are stored either in an observation database (OD) following the schema of the Observation Data Model (ODM) for time-series support, or in a grid-based storage facility, which may be a format like netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome the interoperability issue between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model. This module is responsible for deriving the watershed and stream network from digital elevation models.
Visualizing and editing geospatial data is achieved through MapWinGIS, an ActiveX control developed by the MapWindow team. After application to a real watershed, model performance can be evaluated with the post-event analysis module.
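The core quantity that TauDEM-style watershed delineation computes, the flow direction of each DEM cell, can be illustrated with a minimal sketch of the D8 rule: each cell drains to its steepest-descent neighbor. The grid values and function below are illustrative assumptions, not TauDEM's actual implementation.

```python
import math

# Illustrative 3x3 elevation grid (not real DEM data).
dem = [
    [9.0, 8.0, 7.0],
    [8.0, 6.0, 4.0],
    [7.0, 5.0, 2.0],
]

def d8_direction(dem, r, c):
    """Return the (row, col) of the steepest-descent neighbor of cell
    (r, c), or None if the cell is a pit (no lower neighbor)."""
    best, best_slope = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(dem) and 0 <= nc < len(dem[0]):
                dist = math.hypot(dr, dc)  # diagonal neighbors are farther
                slope = (dem[r][c] - dem[nr][nc]) / dist
                if slope > best_slope:
                    best, best_slope = (nr, nc), slope
    return best

print(d8_direction(dem, 1, 1))  # → (2, 2): drains toward the low corner
```

Chaining these directions cell-to-cell is what yields the stream network the abstract describes.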
Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System
Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail
1988-01-01
This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
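The SQL-as-ASCII round trip described above can be sketched as follows, with sqlite3 standing in for the external SYBASE server. In the actual interface the client side is MUMPS code and the transport is an interprocess link, so everything here is an illustrative analogue.

```python
import sqlite3

# sqlite3 stands in for the external RDBMS; in the paper the client is
# MUMPS and the server is SYBASE, communicating over an interface link.
rdbms = sqlite3.connect(":memory:")
rdbms.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
rdbms.execute("INSERT INTO patients VALUES (1, 'SMITH'), (2, 'JONES')")

def execute_ascii(sql_text):
    """Accept a SQL statement as plain ASCII text and return the
    result set as tab-delimited ASCII lines, one line per row --
    the same shape of exchange the MUMPS interface uses."""
    rows = rdbms.execute(sql_text).fetchall()
    return "\n".join("\t".join(str(v) for v in row) for row in rows)

# The client would build this string (in MUMPS) and read lines back.
reply = execute_ascii("SELECT id, name FROM patients ORDER BY id")
print(reply)
```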
Morisawa, Hiraku; Hirota, Mikako; Toda, Tosifusa
2006-01-01
Background In the post-genome era, most research scientists working in the field of proteomics are confronted with difficulties in management of large volumes of data, which they are required to keep in formats suitable for subsequent data mining. Therefore, a well-developed open source laboratory information management system (LIMS) should be available for their proteomics research studies. Results We developed an open source LIMS appropriately customized for 2-D gel electrophoresis-based proteomics workflow. The main features of its design are compactness, flexibility and connectivity to public databases. It supports the handling of data imported from mass spectrometry software and 2-D gel image analysis software. The LIMS is equipped with the same input interface for 2-D gel information as a clickable map on public 2DPAGE databases. The LIMS allows researchers to follow their own experimental procedures by reviewing the illustrations of 2-D gel maps and well layouts on the digestion plates and MS sample plates. Conclusion Our new open source LIMS is now available as a basic model for proteome informatics, and is accessible for further improvement. We hope that many research scientists working in the field of proteomics will evaluate our LIMS and suggest ways in which it can be improved. PMID:17018156
Arnold, Roland; Goldenberg, Florian; Mewes, Hans-Werner; Rattei, Thomas
2014-01-01
The Similarity Matrix of Proteins (SIMAP, http://mips.gsf.de/simap/) database has been designed to massively accelerate computationally expensive protein sequence analysis tasks in bioinformatics. It provides pre-calculated sequence similarities interconnecting the entire known protein sequence universe, complemented by pre-calculated protein features and domains, similarity clusters and functional annotations. SIMAP covers all major public protein databases as well as many consistently re-annotated metagenomes from different repositories. As of September 2013, SIMAP contains >163 million proteins corresponding to ∼70 million non-redundant sequences. SIMAP uses the sensitive FASTA search heuristics, the Smith–Waterman alignment algorithm, the InterPro database of protein domain models and the BLAST2GO functional annotation algorithm. SIMAP assists biologists by facilitating the interactive exploration of the protein sequence universe. Web-Service and DAS interfaces allow connecting SIMAP with any other bioinformatic tool and resource. All-against-all protein sequence similarity matrices of project-specific protein collections are generated on request. Recent improvements allow SIMAP to cover the rapidly growing sequenced protein sequence universe. New Web-Service interfaces enhance the connectivity of SIMAP. Novel tools for interactive extraction of protein similarity networks have been added. Open access to SIMAP is provided through the web portal; the portal also contains instructions and links for software access and flat file downloads. PMID:24165881
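The Smith–Waterman algorithm mentioned above can be sketched compactly. The version below scores local alignments with a simple match/mismatch/gap scheme; SIMAP's production pipeline uses substitution matrices and heavily optimized implementations, so this is only a minimal illustration of the dynamic-programming recurrence.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score via dynamic programming.
    The scoring scheme here is illustrative; real protein tools use
    substitution matrices such as BLOSUM62 and affine gap penalties."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("TTACGTT", "ACG"))  # → 6: 'ACG' aligns exactly
```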
NASA Astrophysics Data System (ADS)
Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.
2006-12-01
The federation of databases is not a new endeavor. Great strides have been made, e.g., in the health and astrophysics communities. Reviews of those successes indicate that they have been able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which will suffer growth pains due to social, national, and operational issues, but rather to systematically develop the architecture that will enable cross-resource (database, repository, tool, interface) interaction. CHRONOS has accomplished the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME).
The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with ReST-based approaches is vital to facilitate service use by experienced and less experienced information technology groups. Application of these services with XML-based schemas allows for the connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.
van der Linden, Helma; Talmon, Jan; Tange, Huibert; Grimson, Jane; Hasman, Arie
2005-03-01
The PropeR EHR system (PropeRWeb) is an electronic health record (EHR) system for multidisciplinary use in extramural care of stroke patients. The system is built using existing open source components and is based on open standards. It is implemented as a web application using servlets and Java Server Pages (JSPs) with a CORBA connection to the database servers, which are based on the OMG HDTF specifications. PropeRWeb is a generic system which can be readily customized for use in a variety of clinical domains. The system proved to be stable and flexible, although some aspects (among others, user-friendliness) could be improved. These improvements are currently under development in a second version.
Enhancing Geoscience Research Discovery Through the Semantic Web
NASA Astrophysics Data System (ADS)
Rowan, Linda R.; Gross, M. Benjamin; Mayernik, Matthew; Khan, Huda; Boler, Frances; Maull, Keith; Stott, Don; Williams, Steve; Corson-Rikert, Jon; Johns, Erica M.; Daniels, Michael; Krafft, Dean B.; Meertens, Charles
2016-04-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, a U.S. National Science Foundation EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to improve connectivity across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Much of the VIVO ontology was built for the life sciences, so we have added some components of existing geoscience-based ontologies and a few terms from a local ontology that we created. The UNAVCO VIVO instance, connect.unavco.org, utilizes persistent identifiers whenever possible; for example using ORCIDs for people, publication DOIs, data DOIs and unique NSF grant numbers. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page shows, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. 
In addition to VIVO's default display, the new database can be queried using SPARQL, a query language for semantic data. EarthCollab is extending the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. About half of UNAVCO's membership is international and we hope to connect our data to institutions in other countries with a similar approach. Additional extensions, including enhanced geospatial capabilities, will be developed based on task-centered usability testing.
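As an illustration of querying such a VIVO triple store with SPARQL, the snippet below builds a query for datasets and their authors. The prefixes and property names are assumptions in the spirit of the VIVO-ISF ontology, not the exact terms used by the EarthCollab instances.

```python
# Hypothetical SPARQL query for a VIVO triple store: list datasets
# with their labels and related authors. Property names are assumed,
# modeled loosely on the VIVO-ISF ontology.
query = """
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?dataset ?label ?author
WHERE {
  ?dataset a vivo:Dataset ;
           rdfs:label ?label ;
           vivo:relatedBy ?authorship .
  ?authorship vivo:relates ?author .
}
LIMIT 10
"""

# A SPARQL endpoint would accept this string over HTTP; here we only
# check that the query mentions the variables we expect to bind.
print(all(v in query for v in ("?dataset", "?label", "?author")))  # → True
```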
The future of fMRI in cognitive neuroscience.
Poldrack, Russell A
2012-08-15
Over the last 20 years, fMRI has revolutionized cognitive neuroscience. Here I outline a vision for what the next 20 years of fMRI in cognitive neuroscience might look like. Some developments that I hope for include increased methodological rigor, an increasing focus on connectivity and pattern analysis as opposed to "blobology", a greater focus on selective inference powered by open databases, and increased use of ontologies and computational models to describe underlying processes. Copyright © 2011 Elsevier Inc. All rights reserved.
Connecting geoscience systems and data using Linked Open Data in the Web of Data
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna
2014-05-01
Linked Data or Linked Open Data (LOD), in the realm of free and publicly accessible data, is one of the most promising and most used semantic Web frameworks connecting various types of data and vocabularies, including geoscience and related domains. The semantic Web extension to the commonly existing and used World Wide Web is based on the meaning of entities and relationships, or in other words classes and properties, used for data in a global data and information space, the Web of Data. LOD data is referenced and mashed up by URIs and is retrievable using simple parameter-controlled HTTP requests leading to a result which is human-understandable or machine-readable. Furthermore, the publishing and mash-up of data in the semantic Web realm is realized by specific Web standards, such as RDF, RDFS, OWL and SPARQL, defined for the Web of Data. Semantic Web based mash-up is the Web method to aggregate and reuse various contents from different sources, such as using FOAF as a model and vocabulary for the description of persons and organizations (in our case) related to geoscience projects, instruments, observations, data and so on. Using the example of three different geoscience data and information management systems, ESPAS, IUGONET and GFZ ISDC, and the associated science data and related metadata (better called context data), the concept of the mash-up of systems and data using the semantic Web approach and the Linked Open Data framework is described in this publication. Because the three systems are based on different data models, data storage structures and technical implementations, an extra semantic Web layer upon the existing interfaces is used for mash-up solutions. In order to satisfy the semantic Web standards, data transition processes are necessary, such as the transfer of content stored in relational databases or mapped in XML documents into SPARQL-capable databases or endpoints using D2R or XSLT. 
In addition, the use of mapped and/or merged domain-specific and cross-domain vocabularies, in the sense of terminological ontologies, is the foundation for virtually unified data retrieval and access in the IUGONET, ESPAS and GFZ ISDC data management systems. SPARQL endpoints, realized either by native RDF databases, e.g. Virtuoso, or by virtual SPARQL endpoints, e.g. D2R services, enable a purely Web-standard-based mash-up of domain-specific systems and data, in this case the space weather and geomagnetic domains, but also cross-domain connection to data and vocabularies, e.g. related to NASA's VxOs, particularly VWO, or NASA's PDS data system within LOD. LOD - Linked Open Data; RDF - Resource Description Framework; RDFS - RDF Schema; OWL - Web Ontology Language; SPARQL - SPARQL Protocol and RDF Query Language; FOAF - Friend of a Friend ontology; ESPAS - Near Earth Space Data Infrastructure for e-Science (Project); IUGONET - Inter-university Upper Atmosphere Global Observation Network (Project); GFZ ISDC - German Research Centre for Geosciences Information System and Data Center; XML - Extensible Mark-up Language; D2R - (Relational) Database to RDF (Transformation); XSLT - Extensible Stylesheet Language Transformation; Virtuoso - OpenLink Virtuoso Universal Server (including RDF data management); NASA - National Aeronautics and Space Administration; VxO - Virtual Observatory; VWO - Virtual Wave Observatory; PDS - Planetary Data System
DOT National Transportation Integrated Search
2014-12-01
The Bureau of Transportation Statistics (BTS) leads in the collection, analysis, and dissemination of transportation data. The Intermodal Passenger Connectivity Database (ICPD) is an ongoing data collection that measures the degree of connectivity ...
WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides
NASA Astrophysics Data System (ADS)
Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston
2007-06-01
Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well-defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on nuclide concentrations measured by accelerator mass spectrometry. WebCN for 10Be and 26Al has been completed and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
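In the simplest case (no erosion, constant production rate), the exposure age follows from N = (P/λ)(1 − e^(−λt)). The sketch below solves this for t; WebCN itself accounts for spatial and temporal variations in the cosmic ray flux, so this is a deliberate simplification, and the numbers are illustrative.

```python
import math

def exposure_age(N, P, half_life):
    """Exposure age t (yr) from nuclide concentration N (atoms/g),
    surface production rate P (atoms/g/yr) and half-life (yr),
    assuming zero erosion and constant production:
        N = (P / lam) * (1 - exp(-lam * t))  =>  solve for t."""
    lam = math.log(2) / half_life  # decay constant (1/yr)
    return -math.log(1.0 - N * lam / P) / lam

# Illustrative 10Be-like numbers: P ~ 4 atoms/g/yr at the site,
# half-life ~ 1.387 Myr; N chosen to give a Holocene-scale age.
t = exposure_age(N=4.0e4, P=4.0, half_life=1.387e6)
print(f"exposure age ~ {t:.0f} yr")
```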
2017-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary ... Approved for public release; distribution is unlimited. Contents include: 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research ...
Hyperconnectivity, Attribute-Space Connectivity and Path Openings: Theoretical Relationships
NASA Astrophysics Data System (ADS)
Wilkinson, Michael H. F.
In this paper the relationship of hyperconnected filters with path openings and attribute-space connected filters is studied. Using a recently developed axiomatic framework based on hyperconnectivity operators, which are the hyperconnected equivalents of connectivity openings, it is shown that path openings are a special case of hyperconnected area openings. The new axiomatics also yield insight into the relationship between hyperconnectivity and attribute-space connectivity. It is shown that any hyperconnectivity is an attribute-space connectivity, but that the reverse is not true.
Chao, Edmund Y S; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki
2007-03-08
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of these unique database and simulation technology. This integrated system, model library and database will impact on orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation.
Squires, R. Burke; Noronha, Jyothi; Hunt, Victoria; García‐Sastre, Adolfo; Macken, Catherine; Baumgarth, Nicole; Suarez, David; Pickett, Brett E.; Zhang, Yun; Larsen, Christopher N.; Ramsey, Alvin; Zhou, Liwei; Zaremba, Sam; Kumar, Sanjeev; Deitrich, Jon; Klem, Edward; Scheuermann, Richard H.
2012-01-01
Please cite this paper as: Squires et al. (2012) Influenza research database: an integrated bioinformatics resource for influenza research and surveillance. Influenza and Other Respiratory Viruses 6(6), 404–416. Background The recent emergence of the 2009 pandemic influenza A/H1N1 virus has highlighted the value of free and open access to influenza virus genome sequence data integrated with information about other important virus characteristics. Design The Influenza Research Database (IRD, http://www.fludb.org) is a free, open, publicly‐accessible resource funded by the U.S. National Institute of Allergy and Infectious Diseases through the Bioinformatics Resource Centers program. IRD provides a comprehensive, integrated database and analysis resource for influenza sequence, surveillance, and research data, including user‐friendly interfaces for data retrieval, visualization and comparative genomics analysis, together with personal login-protected 'workbench' spaces for saving data sets and analysis results. IRD integrates genomic, proteomic, immune epitope, and surveillance data from a variety of sources, including public databases, computational algorithms, external research groups, and the scientific literature. Results To demonstrate the utility of the data and analysis tools available in IRD, two scientific use cases are presented. A comparison of hemagglutinin sequence conservation and epitope coverage information revealed highly conserved protein regions that can be recognized by the human adaptive immune system as possible targets for inducing cross‐protective immunity. Phylogenetic and geospatial analysis of sequences from wild bird surveillance samples revealed a possible evolutionary connection between influenza virus from Delaware Bay shorebirds and Alberta ducks. 
Conclusions The IRD provides a wealth of integrated data and information about influenza virus to support research of the genetic determinants dictating virus pathogenicity, host range restriction and transmission, and to facilitate development of vaccines, diagnostics, and therapeutics. PMID:22260278
Crystallography Open Database – an open-access collection of crystal structures
Gražulis, Saulius; Chateigner, Daniel; Downs, Robert T.; Yokochi, A. F. T.; Quirós, Miguel; Lutterotti, Luca; Manakova, Elena; Butkus, Justas; Moeck, Peter; Le Bail, Armel
2009-01-01
The Crystallography Open Database (COD), which is a project that aims to gather all available inorganic, metal–organic and small organic molecule structural data in one database, is described. The database adopts an open-access model. The COD currently contains ∼80 000 entries in crystallographic information file format, with nearly full coverage of the International Union of Crystallography publications, and is growing in size and quality. PMID:22477773
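A CIF (crystallographic information file) entry is a plain-text list of data names and values, so extracting simple items needs little machinery. The fragment below is illustrative, not a real COD record, and a production tool should use a dedicated CIF parser.

```python
# Minimal sketch of pulling unit-cell parameters out of a CIF entry,
# the format COD uses for its records. The fragment is illustrative;
# real CIF also has loops, multi-line values and quoting rules that
# this naive scanner does not handle.
cif = """
data_example
_cell_length_a    5.4307
_cell_length_b    5.4307
_cell_length_c    5.4307
_cell_angle_alpha 90.0
"""

def cif_values(text):
    """Collect simple one-line '_name value' pairs from CIF text."""
    values = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].startswith("_"):
            values[parts[0]] = parts[1]
    return values

cell = cif_values(cif)
print(cell["_cell_length_a"])  # → 5.4307
```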
Ubiquitous-health (U-Health) monitoring systems for elders and caregivers
NASA Astrophysics Data System (ADS)
Moon, Gyu; Lim, Kyung-won; Yoo, Young-min; An, Hye-min; Lee, Ki Seop; Szu, Harold
2011-06-01
This paper presents two affordable low-tech systems for household biomedical wellness monitoring. The first system, JIKIMI (pronounced "caregiver" in Korean), is a remote monitoring system that analyzes the behavior patterns of elders who live alone. JIKIMI is composed of an in-house sensing system: a set of wireless sensor nodes containing a pyroelectric infrared sensor to detect the motion of elders, an emergency button, and a magnetic sensor that detects the opening and closing of doors. The system is also equipped with a server system comprising a database and web server. The server provides the mechanism for web-based monitoring by caregivers. The second system, Reader of Bottle Information (ROBI), is an assistive system that advises elders of the contents of bottles. ROBI is composed of bottles fitted with RFID tags and an advice system, which consists of a wireless RFID reader, a gateway, and a remote database server. The RFID tags attached to the caps of the bottles are used in conjunction with the advice system. These systems have been in use for three years and have proven useful for caregivers in providing more efficient and effective care services.
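The core of a JIKIMI-style behavior monitor is detecting unusually long gaps between motion-sensor events. A minimal sketch of that idea (the function name, event data, and 8-hour threshold are illustrative assumptions, not details from the paper):

```python
from datetime import datetime, timedelta

def inactivity_alerts(events, threshold=timedelta(hours=8)):
    """Given a time-sorted list of motion-event datetimes, return the
    gaps that exceed `threshold`, which a server could flag for caregivers."""
    alerts = []
    for prev, curr in zip(events, events[1:]):
        gap = curr - prev
        if gap > threshold:
            alerts.append((prev, curr, gap))
    return alerts

# Invented sample data: morning activity, then a long silent stretch.
events = [datetime(2011, 6, 1, 7, 0),
          datetime(2011, 6, 1, 7, 30),
          datetime(2011, 6, 1, 22, 15)]
print(inactivity_alerts(events))
```

A real deployment would also distinguish door-sensor and emergency-button events, but the gap-detection pattern is the same.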
Social media based NLP system to find and retrieve ARM data: Concept paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra
Information connectivity and retrieval play a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter, our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
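The simplest form of a natural-language interface to a database is keyword matching against parameterized query templates. A toy sketch of that idea (the table names, templates, and question grammar are invented for illustration and are not the ADC architecture):

```python
import re

# Hypothetical keyword-to-template mapping; real NLP systems use far
# richer parsing, but this shows the query-generation step in miniature.
TEMPLATES = {
    "temperature": "SELECT ts, value FROM temperature WHERE site = %s",
    "humidity":    "SELECT ts, value FROM humidity WHERE site = %s",
}

def question_to_query(question):
    """Return (sql_template, params) for a recognized question, else None."""
    q = question.lower()
    site = None
    m = re.search(r"\bat\s+([a-z0-9_]+)", q)   # crude "at <site>" extraction
    if m:
        site = m.group(1)
    for keyword, template in TEMPLATES.items():
        if keyword in q and site:
            return template, (site,)
    return None

print(question_to_query("What is the temperature at barrow today?"))
```

A tweeted question would be run through a pipeline like this, the resulting parameterized query executed against the backing store, and the answer posted back via the Twitter API.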
The U.S. Geological Survey Monthly Water Balance Model Futures Portal
Bock, Andrew R.; Hay, Lauren E.; Markstrom, Steven L.; Emmerich, Christopher; Talbert, Marian
2017-05-03
The U.S. Geological Survey Monthly Water Balance Model Futures Portal (https://my.usgs.gov/mows/) is a user-friendly interface that summarizes monthly historical and simulated future conditions for seven hydrologic and meteorological variables (actual evapotranspiration, potential evapotranspiration, precipitation, runoff, snow water equivalent, atmospheric temperature, and streamflow) at locations across the conterminous United States (CONUS).The estimates of these hydrologic and meteorological variables were derived using a Monthly Water Balance Model (MWBM), a modular system that simulates monthly estimates of components of the hydrologic cycle using monthly precipitation and atmospheric temperature inputs. Precipitation and atmospheric temperature from 222 climate datasets spanning historical conditions (1952 through 2005) and simulated future conditions (2020 through 2099) were summarized for hydrographic features and used to drive the MWBM for the CONUS. The MWBM input and output variables were organized into an open-access database. An Open Geospatial Consortium, Inc., Web Feature Service allows the querying and identification of hydrographic features across the CONUS. To connect the Web Feature Service to the open-access database, a user interface—the Monthly Water Balance Model Futures Portal—was developed to allow the dynamic generation of summary files and plots based on plot type, geographic location, specific climate datasets, period of record, MWBM variable, and other options. Both the plots and the data files are made available to the user for download
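The MWBM simulates monthly hydrologic components from precipitation and temperature inputs. A toy single-bucket sketch conveys the idea (this is not the USGS implementation; the capacity parameter and units are illustrative):

```python
def monthly_water_balance(precip, pet, capacity=150.0):
    """Toy single-bucket monthly water balance: precipitation fills soil
    storage, potential evapotranspiration (PET) drains it, and any excess
    over `capacity` becomes runoff. Inputs are mm/month sequences."""
    storage, runoff, aet = 0.0, [], []
    for p, e in zip(precip, pet):
        storage += p
        evap = min(storage, e)             # actual ET limited by available water
        storage -= evap
        excess = max(0.0, storage - capacity)
        storage -= excess                  # spill everything above capacity
        runoff.append(excess)
        aet.append(evap)
    return runoff, aet

runoff, aet = monthly_water_balance([120, 200, 30], [60, 50, 90])
print(runoff, aet)   # runoff only in the wet second month
```

The portal's value lies in running a (much more complete) model of this kind under 222 climate datasets and letting users query the results spatially.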
Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. 
Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
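A connectivity database like the Ferretome is ultimately queried as a source-by-target matrix built from individual tract-tracing reports. A minimal sketch of that aggregation step (area names and strengths are invented, not Ferretome data):

```python
# Illustrative tract-tracing reports: injection in `source` labels `target`.
reports = [
    {"source": "A17", "target": "A18", "strength": 2},
    {"source": "A17", "target": "PPC", "strength": 1},
    {"source": "A18", "target": "A17", "strength": 3},
]

def connectivity_matrix(reports):
    """Aggregate reports into (area_list, matrix) with matrix[i][j] holding
    the reported strength of the connection from area i to area j."""
    areas = sorted({r["source"] for r in reports} | {r["target"] for r in reports})
    index = {a: i for i, a in enumerate(areas)}
    matrix = [[0] * len(areas) for _ in areas]
    for r in reports:
        matrix[index[r["source"]]][index[r["target"]]] = r["strength"]
    return areas, matrix

areas, matrix = connectivity_matrix(reports)
print(areas)
print(matrix)
```

The traceability feature described above corresponds to keeping, alongside each matrix cell, a link back to the report records that produced it.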
mantisGRID: a grid platform for DICOM medical images management in Colombia and Latin America.
Garcia Ruiz, Manuel; Garcia Chaves, Alvin; Ruiz Ibañez, Carlos; Gutierrez Mazo, Jorge Mario; Ramirez Giraldo, Juan Carlos; Pelaez Echavarria, Alejandro; Valencia Diaz, Edison; Pelaez Restrepo, Gustavo; Montoya Munera, Edwin Nelson; Garcia Loaiza, Bernardo; Gomez Gonzalez, Sebastian
2011-04-01
This paper presents the mantisGRID project, an interinstitutional initiative from Colombian medical and academic centers aiming to provide medical grid services for Colombia and Latin America. The mantisGRID is a grid platform, based on open-source grid infrastructure, that provides the necessary services to access and exchange medical images and associated information following digital imaging and communications in medicine (DICOM) and health level 7 standards. The paper focuses first on the data abstraction architecture, which is achieved via Open Grid Services Architecture Data Access and Integration (OGSA-DAI) services and supported by the Globus Toolkit. The grid currently uses a 30-Mb bandwidth of the Colombian High Technology Academic Network, RENATA, connected to Internet 2. It also includes a discussion on the relational database created to handle the DICOM objects, which were represented using Extensible Markup Language Schema documents, as well as other features implemented such as data security, user authentication, and patient confidentiality. Grid performance was tested using the three current operative nodes, and the results demonstrated comparable query times between the mantisGRID (OGSA-DAI) and distributed MySQL databases, especially for a large number of records.
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost-effective, these systems suffer from high rates of false alarm, especially when monitoring open environments: any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects from different viewing angles and distances.
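The contour-simplification step described above amounts to dropping points where the contour direction barely changes. A minimal sketch of that step (the angle tolerance and function name are illustrative; the paper's exact criterion may differ):

```python
import math

def simplify_contour(points, angle_tol=0.05):
    """Drop contour points that connect nearly straight segments, keeping
    only points with shape significance. `points` is a closed contour as
    a list of (x, y); `angle_tol` is the turn angle in radians below which
    a point is considered redundant."""
    kept = []
    n = len(points)
    for i in range(n):
        px, py = points[i - 1]
        cx, cy = points[i]
        nx, ny = points[(i + 1) % n]
        a1 = math.atan2(cy - py, cx - px)       # incoming segment direction
        a2 = math.atan2(ny - cy, nx - cx)       # outgoing segment direction
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        if turn > angle_tol:                    # keep only direction changes
            kept.append((cx, cy))
    return kept

# A square whose edges carry redundant midpoints; only corners survive.
square_with_midpoints = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2),
                         (0, 2), (0, 1)]
print(simplify_contour(square_with_midpoints))
```

The surviving points would then be converted to a tangent-space representation for matching against the shape database.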
DOT National Transportation Integrated Search
2016-10-01
Each database record shows the modes that serve the facility and those that are nearby but not connecting, and includes facility location information. The data can be analyzed on a city, state, zip code, metropolitan area, or modal basis. Geographic coor...
Comet: an open-source MS/MS sequence database search tool.
Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R
2013-01-01
Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
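Engines like Comet score each spectrum against theoretical fragment ions computed from candidate peptide sequences. A minimal sketch of one ingredient, the singly charged b-ion mass ladder (residue masses are standard monoisotopic values; the scoring itself, which Comet implements far more completely, is omitted):

```python
# Monoisotopic residue masses for a few amino acids (Da).
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
                "P": 97.05276, "K": 128.09496}
PROTON = 1.00728

def b_ion_ladder(peptide):
    """Theoretical singly charged b-ion m/z values for a peptide: the kind
    of fragment ladder a search engine compares against an MS/MS spectrum."""
    mz, total = [], 0.0
    for aa in peptide[:-1]:          # b-ions cover proper prefixes only
        total += RESIDUE_MASS[aa]
        mz.append(round(total + PROTON, 4))
    return mz

print(b_ion_ladder("GASP"))
```

A database search repeats this for every candidate peptide from the sequence database and ranks candidates by how well the predicted ladder matches observed peaks.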
Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R
2003-01-01
Background The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description The Longhorn Array Database (LAD) is a MIAME-compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is freely available online. Conclusions Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545
Integration of the EventIndex with other ATLAS systems
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Gallas, E. J.; Prokoshin, F.
2015-12-01
The ATLAS EventIndex system, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG system used in Run 1. Its storage infrastructure, based on the Hadoop open-source software framework, necessitates revamping how information in this system relates to other ATLAS systems. It will store more indexes, since the fundamental mechanisms for retrieving them will be better integrated into all stages of data processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems (conditions database, monitoring) are critical to assessing dataset completeness, identifying data duplication, and checking data integrity, and they also enhance access to information in the EventIndex through user and system interfaces. This paper gives an overview of the ATLAS systems involved and the relevant metadata, and describes the technologies we are deploying to complete these connections.
Burns, Gully APC; Cheng, Wei-Cheng
2006-01-01
Background Knowledge bases that summarize the published literature provide useful online references for specific areas of systems-level biology that are not otherwise supported by large-scale databases. In the field of neuroanatomy, groups of small focused teams have constructed medium-size knowledge bases to summarize the literature describing tract-tracing experiments in several species. Despite years of collation and curation, these databases only provide partial coverage of the available published literature. Given that the scientists reading these papers must all generate the interpretations that would normally be entered into such a system, we attempt here to provide general-purpose annotation tools to make it easy for members of the community to contribute to the task of data collation. Results In this paper, we describe an open-source, freely available knowledge management system called 'NeuroScholar' that allows straightforward structured markup of PDF files according to a well-designed schema to capture the essential details of this class of experiment. Although the example worked through in this paper is quite specific to neuroanatomical connectivity, the design is freely extensible and could conceivably be used to construct local knowledge bases for other experiment types. Knowledge representations of the experiment are also directly linked to the contributing textual fragments from the original research article. Through the use of this system, not only could members of the community contribute to the collation task, but input data can be gathered for automated approaches to permit knowledge acquisition through the use of Natural Language Processing (NLP). Conclusion We present a functional, working tool to permit users to populate knowledge bases for neuroanatomical connectivity data from the literature through the use of structured questionnaires. This system is open-source, fully functional and available for download from [1]. PMID:16895608
Distributed Episodic Exploratory Planning (DEEP)
2008-12-01
API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity... Hibernate SQL ICCRTS International Command and Control Research and Technology Symposium JDB Java Distributed Blackboard JDBC Java Database Connectivity...selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of
Berthold, Michael R.; Hedrick, Michael P.; Gilson, Michael K.
2015-01-01
Today’s large, public databases of protein–small molecule interaction data are creating important new opportunities for data mining and integration. At the same time, new graphical user interface-based workflow tools offer facile alternatives to custom scripting for informatics and data analysis. Here, we illustrate how the large protein-ligand database BindingDB may be incorporated into KNIME workflows as a step toward the integration of pharmacological data with broader biomolecular analyses. Thus, we describe a collection of KNIME workflows that access BindingDB data via RESTful webservices and, for more intensive queries, via a local distillation of the full BindingDB dataset. We focus in particular on the KNIME implementation of knowledge-based tools to generate informed hypotheses regarding protein targets of bioactive compounds, based on notions of chemical similarity. A number of variants of this basic approach are tested for seven existing drugs with relatively ill-defined therapeutic targets, leading to replication of some previously confirmed results and discovery of new, high-quality hits. Implications for future development are discussed. Database URL: www.bindingdb.org PMID:26384374
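The chemical-similarity step underlying these target-prediction workflows is typically a Tanimoto comparison of molecular fingerprints. A minimal sketch using toy bit-set fingerprints (the fingerprints and ligand names are invented, not BindingDB data or KNIME code):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of 'on' bits:
    |A ∩ B| / |A ∪ B|. Used to rank known ligands near a query compound."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Toy fingerprints: bit positions standing in for substructure features.
query   = {1, 4, 9, 17, 23}
ligands = {"ligA": {1, 4, 9, 17, 42}, "ligB": {2, 5, 8}}
ranked = sorted(ligands, key=lambda k: tanimoto(query, ligands[k]), reverse=True)
print(ranked, tanimoto(query, ligands["ligA"]))
```

In the workflows described above, the ligands most similar to the query compound vote for their annotated protein targets, producing the target hypotheses.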
Systematic plan of building Web geographic information system based on ActiveX control
NASA Astrophysics Data System (ADS)
Zhang, Xia; Li, Deren; Zhu, Xinyan; Chen, Nengcheng
2003-03-01
A systematic plan for building a Web Geographic Information System (WebGIS) using ActiveX technology is proposed in this paper. In the proposed plan, ActiveX control technology is adopted in building the client-side application, and two different schemas are introduced to implement communication between the controls in the user's browser and the middle application server. One is based on the Distributed Component Object Model (DCOM), the other on sockets. In the former schema, the middle service application is developed as a DCOM object that communicates with the ActiveX control through Object Remote Procedure Call (ORPC) and accesses data in the GIS data server through Open Database Connectivity (ODBC). In the latter, the middle service application is developed in the Java language; it communicates with the ActiveX control through sockets based on TCP/IP and accesses data in the GIS data server through Java Database Connectivity (JDBC). The first schema is usually developed using C/C++ and is difficult to develop and deploy. The second is relatively easy to develop, but its data-transfer performance relies on Web bandwidth. A sample application was developed using the latter schema, and its performance proved better, to some degree, than that of some other WebGIS applications.
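The socket-based schema boils down to a request/response exchange over TCP between the browser-side control and the middle-tier server. A minimal loopback sketch of that exchange (the message format is illustrative, not the paper's protocol; Python stands in for the Java middle tier):

```python
import socket
import threading

# Toy middle-tier server: answers one client request over TCP.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("RESULT for " + request).encode())

t = threading.Thread(target=serve_once)
t.start()

with socket.socket() as cli:        # stands in for the browser-side control
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"GET layer=roads")
    reply = cli.recv(1024).decode()
t.join()
srv.close()
print(reply)
```

In the full design the server side would translate such requests into JDBC queries against the GIS data server and stream the results back.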
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2018-01-01
The metabolism of individual organisms and biological communities can be viewed as a network of metabolites connected to each other through chemical reactions. In metabolic networks, chemical reactions transform reactants into products, thereby transferring elements between these metabolites. Knowledge of how elements are transferred through reactant/product pairs allows for the identification of primary compound connections through a metabolic network. However, such information is not readily available and is often challenging to obtain for large reaction databases or genome-scale metabolic models. In this study, a new algorithm was developed for automatically predicting the element-transferring reactant/product pairs using the limited information available in the standard representation of metabolic networks. The algorithm demonstrated high efficiency in analyzing large datasets and provided accurate predictions when benchmarked with manually curated data. Applying the algorithm to the visualization of metabolic networks highlighted pathways of primary reactant/product connections and provided an organized view of element-transferring biochemical transformations. The algorithm was implemented as a new function in the open source software package PSAMM in the release v0.30 (https://zhanglab.github.io/psamm/).
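The prediction task above can be caricatured as: parse the chemical formulas on each side of a reaction, then pair each reactant with the product it shares the most elements with. This greedy sketch is not PSAMM's actual algorithm (which weighs element counts and ambiguities far more carefully), but it shows the shape of the problem:

```python
import re

def parse_formula(formula):
    """Parse a simple chemical formula like 'C6H12O6' into {element: count}."""
    return {el: int(n or 1)
            for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula)}

def element_transfer_pairs(reactants, products):
    """Greedy sketch: pair each reactant with the product sharing the most
    chemical elements, a crude proxy for the primary element-transferring
    reactant/product connection."""
    pairs = []
    for r in reactants:
        r_elems = set(parse_formula(r))
        best = max(products, key=lambda p: len(r_elems & set(parse_formula(p))))
        pairs.append((r, best))
    return pairs

# Toy reaction (simplified formulas): glucose + ATP -> glucose-6-phosphate + ADP
print(element_transfer_pairs(["C6H12O6", "C10H16N5O13P3"],
                             ["C6H13O9P", "C10H15N5O10P2"]))
```

Here glucose pairs with glucose-6-phosphate and ATP with ADP, recovering the primary compound connections while sidelining the cofactor-to-product pairing.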
Connectivity and functional profiling of abnormal brain structures in pedophilia
Poeppl, Timm B.; Eickhoff, Simon B.; Fox, Peter T.; Laird, Angela R.; Rupprecht, Rainer; Langguth, Berthold; Bzdok, Danilo
2015-01-01
Despite its 0.5–1% lifetime prevalence in men and its general societal relevance, neuroimaging investigations in pedophilia are scarce. Preliminary findings indicate abnormal brain structure and function. However, no study has yet linked structural alterations in pedophiles to both connectional and functional properties of the aberrant hotspots. The relationship between morphological alterations and brain function in pedophilia as well as their contribution to its psychopathology thus remain unclear. First, we assessed bimodal connectivity of structurally altered candidate regions using meta-analytic connectivity modeling (MACM) and resting-state correlations employing openly accessible data. We compared the ensuing connectivity maps to the activation likelihood estimation (ALE) maps of a recent quantitative meta-analysis of brain activity during processing of sexual stimuli. Second, we functionally characterized the structurally altered regions employing meta-data of a large-scale neuroimaging database. Candidate regions were functionally connected to key areas for processing of sexual stimuli. Moreover, we found that the functional role of structurally altered brain regions in pedophilia relates to nonsexual emotional as well as neurocognitive and executive functions, previously reported to be impaired in pedophiles. Our results suggest that structural brain alterations affect neural networks for sexual processing by way of disrupted functional connectivity, which may entail abnormal sexual arousal patterns. The findings moreover indicate that structural alterations account for common affective and neurocognitive impairments in pedophilia. The present multi-modal integration of brain structure and function analyses links sexual and nonsexual psychopathology in pedophilia. PMID:25733379
Hannon, Charles P; Chalmers, Peter N; Carpiniello, Matthew F; Cvetanovich, Gregory L; Cole, Brian J; Bach, Bernard R
2016-10-19
The purpose of this study was to determine the rate and type of inconsistencies between disclosures self-reported by physicians at a major academic meeting in the United States and industry-reported disclosures in the Open Payments database for a concordant time period. Disclosures for every first and last author from the United States with a medical degree of a podium or poster presentation at the 2014 American Academy of Orthopaedic Surgeons (AAOS) Annual Meeting were collected and compared with the disclosures reported in the Open Payments database to determine whether any inconsistencies were present and, if so, within which category. In total, 1,925 AAOS presenters were identified, and 1,113 met the inclusion criteria. Based on AAOS disclosures, 432 (39%) should have been listed within the Open Payments database. There were 125 presenters (11%) who reported an AAOS disclosure and thus should have been included in the Open Payments database, but were not included. An additional 259 presenters (23%) had ≥1 AAOS disclosures that were not reported or were improperly categorized in the Open Payments database. Inconsistencies were more common for authors who had significantly more poster presentations (p < 0.001), podium presentations (p = 0.01), total presentations (p < 0.001), and AAOS disclosures (p < 0.001) and a significantly higher value of payments in the Open Payments database (p < 0.001). In this sample, there was a 35% rate of inconsistency between physician-reported financial relationships for presenters at the AAOS Annual Meeting and industry-reported relationships published in the Open Payments database. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
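At its core, the cross-check described above is a per-presenter set comparison between self-reported and industry-reported disclosures. A sketch with invented data (the presenter and company names are hypothetical; the study's matching rules were more nuanced):

```python
# Hypothetical disclosure data: companies each presenter self-reported
# versus companies listed for them in an industry-reported database.
self_reported = {
    "presenter_1": {"AcmeOrtho", "BoneCo"},
    "presenter_2": {"BoneCo"},
    "presenter_3": set(),
}
industry_reported = {
    "presenter_1": {"AcmeOrtho"},          # BoneCo missing -> inconsistent
    "presenter_2": {"BoneCo"},             # consistent
    "presenter_3": set(),                  # consistent
}

def inconsistency_rate(self_rep, industry_rep):
    """Return the presenters whose two disclosure sets differ, plus the
    overall inconsistency rate."""
    inconsistent = [p for p in self_rep
                    if self_rep[p] != industry_rep.get(p, set())]
    return inconsistent, len(inconsistent) / len(self_rep)

names, rate = inconsistency_rate(self_reported, industry_reported)
print(names, rate)
```

The study additionally categorized each mismatch (missing entry versus miscategorized payment), which would correspond to inspecting the set differences rather than just testing inequality.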
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which will depend, among other factors, on both the quantity and quality of the available data and an optimum storage mechanism. This will require the design of purpose-built databases that take into account data format and availability and afford easy data storage and sharing, and will provide for a more complete risk assessment that combines different analyses but avoids any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. The key issue is to ensure that VERDI will serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers in regard to data organization are here shown through its application on El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool that will assist in conducting volcanic risk assessment and management.
Databases and Electronic Resources - Betty Petersen Memorial Library
Lists of NOAA-wide and open-access databases are available on the NOAA Central Library website. The Open Science Directory contains collections of Open Access journals (e.g. the Directory of Open Access Journals) and journals in special programs (e.g. Hinari).
Gražulis, Saulius; Daškevič, Adriana; Merkys, Andrius; Chateigner, Daniel; Lutterotti, Luca; Quirós, Miguel; Serebryanaya, Nadezhda R.; Moeck, Peter; Downs, Robert T.; Le Bail, Armel
2012-01-01
Using an open-access distribution model, the Crystallography Open Database (COD, http://www.crystallography.net) collects all known ‘small molecule / small to medium sized unit cell’ crystal structures and makes them available freely on the Internet. As of today, the COD has aggregated ∼150 000 structures, offering basic search capabilities and the possibility to download the whole database, or parts thereof using a variety of standard open communication protocols. A newly developed website provides capabilities for all registered users to deposit published and so far unpublished structures as personal communications or pre-publication depositions. Such a setup enables extension of the COD database by many users simultaneously. This increases the possibilities for growth of the COD database, and is the first step towards establishing a world wide Internet-based collaborative platform dedicated to the collection and curation of structural knowledge. PMID:22070882
Flexible Decision Support in Device-Saturated Environments
2003-10-01
also output tuples to a remote MySQL or Postgres database. 3.3 GUI The GUI allows the user to pose queries using SQL and to display query...DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres). • Debug.java – contains the code for printing out Debug messages...also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival, and the GUI can query those results
Extracting and connecting chemical structures from text sources using chemicalize.org.
Southan, Christopher; Stracz, Andras
2013-04-23
Exploring bioactive chemistry requires navigating between structures and data from a variety of text-based sources. While PubChem currently includes approximately 16 million document-extracted structures (15 million from patents), the extent of public inter-document and document-to-database links is still well below any estimated total, especially for journal articles. A major expansion in access to text-entombed chemistry is enabled by chemicalize.org. This on-line resource can process IUPAC names, SMILES, InChI strings, CAS numbers and drug names from pasted text, PDFs or URLs to generate structures, calculate properties and launch searches. Here, we explore its utility for answering questions related to chemical structures in documents and where these overlap with database records. These aspects are illustrated using a common theme of Dipeptidyl Peptidase 4 (DPPIV) inhibitors. Full-text open URL sources facilitated the download of over 1400 structures from a DPPIV patent and the alignment of specific examples with IC50 data. Uploading the SMILES to PubChem revealed extensive linking to patents and papers, including prior submissions from chemicalize.org as the submitting source. A DPPIV medicinal chemistry paper was completely extracted and structures were aligned to the activity results table, as well as linked to other documents via PubChem. In both cases, key structures with data were partitioned from common chemistry by dividing them into individual new PDFs for conversion. Over 500 structures were also extracted from a batch of PubMed abstracts related to DPPIV inhibition. The drug structures could be stepped through each text occurrence and included some converted MeSH-only IUPAC names not linked in PubChem. Performing set intersections proved effective for detecting compounds-in-common between documents and merged extractions.
This work demonstrates the utility of chemicalize.org for the exploration of chemical structure connectivity between documents and databases, including structure searches in PubChem, InChIKey searches in Google and the chemicalize.org archive. It has the flexibility to extract text from any internal, external or Web source. It synergizes with other open tools and the application is undergoing continued development. It should thus facilitate progress in medicinal chemistry, chemical biology and other bioactive chemistry domains.
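The set-intersection step described above reduces to simple set operations once each document has been converted to a set of structure identifiers such as InChIKeys. A minimal sketch (the keys below are invented placeholders, not real 27-character InChIKeys):

```python
# Hypothetical identifier sets extracted from two documents; real InChIKeys
# are 27-character hashed identifiers, but the logic is identical.
doc_patent = {"KEY-A", "KEY-B", "KEY-C", "KEY-D"}
doc_paper = {"KEY-C", "KEY-D", "KEY-E"}

common = doc_patent & doc_paper          # compounds-in-common
only_in_patent = doc_patent - doc_paper  # patent-exclusive structures
```

The same pattern scales to merged extractions: union the per-document sets first, then intersect with the query set.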
Piantoni, Giovanni; Cheung, Bing Leung P.; Van Veen, Barry D.; Romeijn, Nico; Riedner, Brady A.; Tononi, Giulio; Van Der Werf, Ysbrand D.; Van Someren, Eus J.W.
2013-01-01
The cingulate cortex is regarded as the backbone of structural and functional connectivity of the brain. While its functional connectivity has been intensively studied, little is known about its effective connectivity, its modulation by behavioral states, and its involvement in cognitive performance. Given their previously reported effects on cingulate functional connectivity, we investigated how eye-closure and sleep deprivation changed cingulate effective connectivity, estimated from resting-state high-density electroencephalography (EEG) using a novel method to calculate Granger Causality directly in source space. Effective connectivity along the cingulate cortex was dominant in the forward direction. Eyes-open connectivity in the forward direction was greater compared to eyes-closed, in well-rested participants. The difference between eyes-open and eyes-closed connectivity was attenuated and no longer significant after sleep deprivation. Individual variability in the forward connectivity after sleep deprivation predicted subsequent task performance, such that those subjects who showed a greater increase in forward connectivity between the eyes-open and the eyes-closed periods also performed better on a sustained attention task. Effective connectivity in the opposite, backward, direction was not affected by whether the eyes were open or closed or by sleep deprivation. These findings indicate that the effective connectivity from posterior to anterior cingulate regions is enhanced when a well-rested subject has his eyes open compared to when they are closed. Sleep deprivation impairs this directed information flow, proportional to its deleterious effect on vigilance. Therefore, sleep may play a role in the maintenance of waking effective connectivity. PMID:23643925
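Granger causality, as used above, asks whether the past of one signal improves prediction of another beyond the target's own past. A toy lag-1 sketch on synthetic data (an illustration of the general idea, not the authors' source-space estimator):

```python
import math
import random

random.seed(0)
n = 2000
x = [0.0] * n
y = [0.0] * n
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + random.gauss(0, 1)
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + random.gauss(0, 1)  # x drives y

def residual_var(target, preds):
    """Residual variance of an OLS fit, solved via the normal equations."""
    k = len(preds)
    xtx = [[sum(a * b for a, b in zip(preds[i], preds[j])) for j in range(k)]
           for i in range(k)]
    xty = [sum(a * b for a, b in zip(preds[i], target)) for i in range(k)]
    if k == 1:
        beta = [xty[0] / xtx[0][0]]
    else:  # 2x2 solve by Cramer's rule
        det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
        beta = [(xty[0] * xtx[1][1] - xty[1] * xtx[0][1]) / det,
                (xty[1] * xtx[0][0] - xty[0] * xtx[1][0]) / det]
    resid = [v - sum(b * p[i] for b, p in zip(beta, preds))
             for i, v in enumerate(target)]
    return sum(r * r for r in resid) / len(resid)

# Restricted model: y from its own past; full model: add x's past.
restricted = residual_var(y[1:], [y[:-1]])
full = residual_var(y[1:], [y[:-1], x[:-1]])
gc_x_to_y = math.log(restricted / full)  # > 0: x's past helps predict y
```

Real applications use many lags, many sources and significance testing, but the residual-variance comparison above is the core of the measure.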
Developing a GIS for CO2 analysis using lightweight, open source components
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Kulawik, S. S.; Law, E.; Osterman, G. B.; Braverman, A.; Nguyen, H. M.; Mattmann, C. A.; Crichton, D. J.; Eldering, A.; Castano, R.; Gunson, M. R.
2012-12-01
There are advantages to approaching the realm of geographic information systems (GIS) using lightweight, open source components in place of a more traditional web map service (WMS) solution. Rapid prototyping, schema-less data storage, the flexible interchange of components, and open source community support are just some of the benefits. In our effort to develop an application supporting the geospatial and temporal rendering of remote sensing carbon-dioxide (CO2) data for the CO2 Virtual Science Data Environment project, we have connected heterogeneous open source components together to form a GIS. Utilizing widely popular open source components including the schema-less database MongoDB, Leaflet interactive maps, the HighCharts JavaScript graphing library, and Python Bottle web-services, we have constructed a system for rapidly visualizing CO2 data with reduced up-front development costs. These components can be aggregated together, resulting in a configurable stack capable of replicating features provided by more standard GIS technologies. The approach we have taken is not meant to replace the more established GIS solutions, but to instead offer a rapid way to provide GIS features early in the development of an application and to offer a path towards utilizing more capable GIS technology in the future.
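The schema-less storage pattern can be illustrated without MongoDB itself: soundings stored as free-form dicts, filtered by bounding box and time range much as a MongoDB query document would filter them (the field names here are hypothetical, not the project's actual data model):

```python
from datetime import date

# Schema-less records: each sounding is a plain dict, and fields can vary
# between records without any migration step (the MongoDB model).
soundings = [
    {"lat": 34.1, "lon": -118.2, "xco2_ppm": 395.2, "day": date(2012, 7, 1)},
    {"lat": 35.7, "lon": 139.7, "xco2_ppm": 397.8, "day": date(2012, 7, 2)},
    {"lat": -1.3, "lon": 36.8, "xco2_ppm": 393.1, "day": date(2012, 6, 30),
     "quality_flag": "B"},  # extra field on one record only
]

def bbox_time_query(records, lat_range, lon_range, start, end):
    """Emulate a MongoDB-style range query on lat/lon/day."""
    return [r for r in records
            if lat_range[0] <= r["lat"] <= lat_range[1]
            and lon_range[0] <= r["lon"] <= lon_range[1]
            and start <= r["day"] <= end]

hits = bbox_time_query(soundings, (30, 40), (-130, -110),
                       date(2012, 6, 1), date(2012, 7, 31))
```

In the stack described, such query results would then be handed to Leaflet (map) and HighCharts (time series) by a thin Bottle web-service layer.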
Open heart surgery (MedlinePlus Medical Encyclopedia: medlineplus.gov/ency/article/002950.htm). The term "open heart surgery" means that the patient is connected to a heart-lung bypass machine during surgery on the heart muscle, valves, or the large arteries connected to the heart.
Yang, Haixiu; Shang, Desi; Xu, Yanjun; Zhang, Chunlong; Feng, Li; Sun, Zeguo; Shi, Xinrui; Zhang, Yunpeng; Han, Junwei; Su, Fei; Li, Chunquan; Li, Xia
2017-07-27
Well-characterized connections among diseases, long non-coding RNAs (lncRNAs) and drugs are important for elucidating the key roles of lncRNAs in biological mechanisms across various biological states. In this study, we constructed a database called LNCmap (LncRNA Connectivity Map), available at http://www.bio-bigdata.com/LNCmap/, to establish the correlations among diseases, physiological processes, and the action of small molecule therapeutics by attempting to describe all biological states in terms of lncRNA signatures. By reannotating the microarray data from the Connectivity Map database, the LNCmap obtained 237 lncRNA signatures of 5916 instances corresponding to 1262 small-molecule drugs. We provide a user-friendly interface for convenient browsing, retrieval and download of the database, including detailed information on drugs and the corresponding affected lncRNAs. Additionally, we developed two enrichment analysis methods by which users can identify candidate drugs for a particular disease by inputting the corresponding lncRNA expression profiles or an associated lncRNA list and then comparing them to the lncRNA signatures in our database. Overall, LNCmap could significantly improve our understanding of the biological roles of lncRNAs and provides a unique resource for revealing the connections among drugs, lncRNAs and diseases.
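The enrichment idea can be caricatured with a toy score: given a drug's ranked lncRNA signature and a query set of up- and down-regulated lncRNAs, a connectivity-style score rewards query "up" lncRNAs near the top of the ranking and "down" lncRNAs near the bottom. All identifiers below are invented, and this simple rank statistic is an illustration, not LNCmap's actual method:

```python
def connectivity_score(ranked_sig, up_query, down_query):
    """Toy connectivity score in [-1, 1]: +1 when 'up' lncRNAs sit at the
    top of the drug signature and 'down' lncRNAs at the bottom."""
    n = len(ranked_sig)
    pos = {name: i for i, name in enumerate(ranked_sig)}
    # Map rank i to a value in [+1, -1]: top of list -> +1, bottom -> -1.
    rank_val = lambda name: 1.0 - 2.0 * pos[name] / (n - 1)
    up = sum(rank_val(g) for g in up_query) / len(up_query)
    down = sum(rank_val(g) for g in down_query) / len(down_query)
    return (up - down) / 2.0

# Drug signature: lncRNAs ranked from most up- to most down-regulated.
sig = ["lnc1", "lnc2", "lnc3", "lnc4", "lnc5", "lnc6"]
score = connectivity_score(sig, up_query=["lnc1", "lnc2"],
                           down_query=["lnc5", "lnc6"])
```

A high positive score suggests the drug mimics the query state; a strongly negative score suggests it reverses it, which is the logic used to nominate candidate drugs.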
Demonstrating the Open Data Repository's Data Publisher: The CheMin Database
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Bristow, T.; Pires, A.; Keller, R. M.; Downs, R. T.; Blake, D.; Dateo, C. E.; Fonda, M.
2018-04-01
The Open Data Repository's Data Publisher aims to provide an easy-to-use software tool that will allow researchers to create and publish database templates and related data. The CheMin Database developed using this framework is shown as an example.
Using graph approach for managing connectivity in integrative landscape modelling
NASA Astrophysics Data System (ADS)
Rabotin, Michael; Fabre, Jean-Christophe; Libres, Aline; Lagacherie, Philippe; Crevoisier, David; Moussa, Roger
2013-04-01
In cultivated landscapes, many landscape elements such as field boundaries, ditches or banks strongly impact water flows and mass and energy fluxes. At the watershed scale, these impacts are strongly conditioned by the connectivity of these landscape elements. An accurate representation of these elements and of their complex spatial arrangements is therefore of great importance for modelling and predicting these impacts. Within the OpenFLUID platform (Software Environment for Modelling Fluxes in Landscapes), we developed a digital landscape representation that takes into account the spatial variability and connectivity of diverse landscape elements by applying concepts from graph theory. The proposed landscape representation considers spatial units connected together to represent flux exchanges or any other information exchanges. Each spatial unit of the landscape is represented as a node of a graph and relations between units as graph connections. The connections are of two types - parent-child connections and up/downstream connections - which allows OpenFLUID to handle hierarchical graphs. Connections can also carry information, and the graph can evolve during simulation (connections or elements can be modified). This graph approach allows greater genericity in landscape representation, supports the management of complex connections and facilitates the development of new landscape representation algorithms. Graph management is fully operational in OpenFLUID for developers and modelers, and several graph tools are available, such as graph traversal algorithms and graph displays. The graph representation can be managed (i) manually by the user (for example, in simple catchments) through XML-based files in an easily editable and readable format, or (ii) by using methods of the OpenFLUID-landr library, an OpenFLUID library relying on common open-source spatial libraries (the ogr vector, geos topological vector and gdal raster libraries).
The OpenFLUID-landr library has been developed (i) to be usable without GIS expertise (common GIS formats can be read, and simplified spatial management is provided), (ii) to make it easy to develop discretization and graph-creation rules adapted to the requirements of spatialized models, and (iii) to allow model developers to manage dynamic and complex spatial topologies. Graph management in OpenFLUID is illustrated with (i) examples of hydrological modelling of complex farmed landscapes and (ii) the new implementation of the Geo-MHYDAS tool, based on the OpenFLUID-landr library, which discretizes a landscape and creates the graph structure required by the MHYDAS model.
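The two connection types described, parent-child and up/downstream, can be sketched with plain adjacency structures; a downstream traversal then follows the flux path from any unit to the outlet. The unit names below are invented for illustration:

```python
from collections import defaultdict

# Up/downstream connections: each spatial unit drains to one downstream unit.
downstream = {"field_A": "ditch_1", "field_B": "ditch_1",
              "ditch_1": "ditch_2", "ditch_2": "outlet"}

# Parent-child connections: hierarchical containment of units,
# which is what makes the graph hierarchical.
children = defaultdict(list)
children["subcatchment_1"] = ["field_A", "field_B", "ditch_1"]

def flow_path(unit):
    """Follow up/downstream connections until the outlet is reached."""
    path = [unit]
    while path[-1] in downstream:
        path.append(downstream[path[-1]])
    return path

path = flow_path("field_A")
```

Attaching attributes to nodes and edges (flux values, element geometry) and allowing the dictionaries to change during simulation gives the dynamic-graph behavior the abstract describes.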
Weathering Database Technology
ERIC Educational Resources Information Center
Snyder, Robert
2005-01-01
Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…
NASA Astrophysics Data System (ADS)
Koptev, V. Yu
2017-02-01
This work presents the results of a study of the basic interrelated criteria for individual equipment units in a transport-network machine fleet, as functions of production and mining factors, with the aim of improving transport-system management. Justifying the selection of a control system requires new methodologies and models, augmented with stability and transport-flow criteria and accounting for the dynamics of mining-work development at mining sites. A necessary condition is accounting for the technical and operating parameters of vehicle operation; modern open-pit mining dispatching systems must include this kind of information database. An algorithm for forming a machine fleet is presented, based on solving a multi-variant task of defining reasonable operating features for a machine working as part of a complex. The proposals in this work may apply to mining machines (drilling equipment, excavators), construction equipment (bulldozers, cranes, pile-drivers), city transport and other types of production activity that use a machine fleet.
Inspiring Collaboration: The Legacy of Theo Colborn's Transdisciplinary Research on Fracking.
Wylie, Sara; Schultz, Kim; Thomas, Deborah; Kassotis, Chris; Nagel, Susan
2016-09-13
This article describes Dr Theo Colborn's legacy of inspiring complementary and synergistic environmental health research and advocacy. Colborn, a founder of endocrine disruption research, also stimulated study of hydraulic fracturing (fracking). In 2014, the United States led the world in oil and gas production, with fifteen million Americans living within one mile of an oil or gas well. Colborn pioneered efforts to understand and control the impacts of this sea change in energy production. In 2005, her research organization The Endocrine Disruption Exchange (TEDX) developed a database of chemicals used in natural gas extraction and their health effects. This database stimulated novel scientific and social scientific research and informed advocacy by (1) connecting communities' diverse health impacts to chemicals used in natural gas development, (2) inspiring social science research on open-source software and hardware for citizen science, and (3) posing new scientific questions about the endocrine-disrupting properties of fracking chemicals. © The Author(s) 2016.
[Plug-in Based Centralized Control System in Operating Rooms].
Wang, Yunlong
2017-05-30
Centralized equipment control in an operating room (OR) is crucial to an efficient OR workflow. To achieve centralized control, an integrative OR needs a control panel designed to appropriately incorporate equipment from different manufacturers with various connecting ports and controls. Here we propose to achieve equipment integration using plug-in modules. Each OR will be equipped with a dynamic plug-in control panel containing physically removable connecting ports. Matching outlets will be installed on the control panels of each piece of equipment used at any given time. This dynamic control panel will be backed by a database of plug-in modules that can connect any two types of connecting ports common among medical equipment manufacturers. The correct connecting ports will be called using reflection dynamics. This database will be updated regularly to include new connecting ports on the market, making it easy to maintain, update and expand, and keeping it relevant as new equipment is developed. Together, the physical panel and the database will achieve centralized equipment control in the OR that can be easily adapted to any equipment in the OR.
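The run-time lookup described, where the right connector module is selected for a given pair of ports, can be sketched with a registry keyed by (equipment port, panel port); the "reflection" step is then just a dynamic lookup. Port names and commands below are invented:

```python
# Registry of connector plug-ins keyed by (equipment_port, panel_port).
_CONNECTORS = {}

def register(src_port, dst_port):
    """Decorator: add a connector class to the plug-in registry."""
    def wrap(cls):
        _CONNECTORS[(src_port, dst_port)] = cls
        return cls
    return wrap

@register("RS232", "panel-serial")
class SerialConnector:
    def send(self, cmd):
        return f"RS232<-{cmd}"

@register("ethernet", "panel-ip")
class IpConnector:
    def send(self, cmd):
        return f"TCP<-{cmd}"

def connect(src_port, dst_port):
    """Resolve a connector dynamically, as the reflection step would."""
    cls = _CONNECTORS.get((src_port, dst_port))
    if cls is None:
        raise KeyError(f"no plug-in for {src_port}->{dst_port}")
    return cls()

reply = connect("RS232", "panel-serial").send("LIGHT ON")
```

Adding support for a new port type means registering one more class, which mirrors the abstract's point that the database of plug-ins can grow without redesigning the panel.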
A natural language interface plug-in for cooperative query answering in biological databases.
Jamil, Hasan M
2012-06-11
One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high-level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schema, with the aim of providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from underlying database schema.
The plug-in introduced in this paper is generic and facilitates connecting user selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
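The rule-based core of such a translation can be caricatured in a few lines: pattern rules map a natural-language question onto a parameterized SQL template over whatever schema the rules are bound to. The schema, rules and question below are invented, and real systems use parsing and ontologies rather than bare regexes:

```python
import re

# Each rule: a regex over the question and an SQL template over a
# hypothetical schema the rule set is bound to.
RULES = [
    (re.compile(r"which genes .* expressed in (\w+)", re.I),
     "SELECT gene FROM expression WHERE tissue = '{0}'"),
    (re.compile(r"what is the function of (\w+)", re.I),
     "SELECT annotation FROM gene_function WHERE gene = '{0}'"),
]

def nl_to_sql(question):
    """Return the first matching SQL translation, or None."""
    for pattern, template in RULES:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    return None

sql = nl_to_sql("Which genes are expressed in liver?")
```

Keeping the rules outside the code, as the plug-in keeps knowledge rules alongside the schema, is what makes schema changes survivable: only the rule set is edited, not the query processor.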
Crystallography Open Databases and Preservation: a World-wide Initiative
NASA Astrophysics Data System (ADS)
Chateigner, Daniel
In 2003, an international team of crystallographers proposed the Crystallography Open Database (COD), a fully free collection of crystal structure data, with the aim of ensuring their preservation. With nearly 250000 entries, this database represents a large open set of data for crystallographers, academics and industry, is hosted at five different places world-wide, and is included in Thomson-Reuters' ISI. As a large step towards data preservation, raw data can now be uploaded along with "digested" structure files, and COD can be queried by most crystallography-related industrial software. The COD initiative has also prompted several other open developments.
A proposal of fuzzy connective with learning function and its application to fuzzy retrieval system
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Naito, Eiichi; Ozawa, Jun; Wakami, Noboru
1993-01-01
A new fuzzy connective and a network structure constructed from fuzzy connectives are proposed to overcome a drawback of conventional fuzzy retrieval systems. The network represents a retrieval query, and the fuzzy connectives in the network have a learning function that adjusts their parameters using data from a database and feedback from the user. Fuzzy retrieval systems employing this network are also constructed. Users can retrieve results even with a query whose attributes do not exist in the database schema, and the learning function lets them obtain satisfactory results for a variety of ways of thinking.
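One common parameterization of a learnable fuzzy connective is a convex mix of fuzzy AND (min) and fuzzy OR (max), with the mixing weight fitted to user feedback by gradient descent on squared error. The scheme below is an illustration of that general idea, not the paper's exact connective:

```python
def connective(a, b, p):
    """Compensative fuzzy connective: p=1 -> pure AND, p=0 -> pure OR."""
    return p * min(a, b) + (1 - p) * max(a, b)

def learn(pairs, targets, p=0.5, lr=0.1, epochs=200):
    """Fit p by gradient descent on squared error against user feedback."""
    for _ in range(epochs):
        for (a, b), t in zip(pairs, targets):
            y = connective(a, b, p)
            # d(error)/dp = 2*(y - t) * (min(a,b) - max(a,b))
            p -= lr * 2 * (y - t) * (min(a, b) - max(a, b))
            p = min(1.0, max(0.0, p))  # keep p a valid mixing weight
    return p

# The user judged these retrieval scores as strongly AND-like
# (target equals the smaller membership), so p should learn toward 1.
pairs = [(0.9, 0.2), (0.7, 0.4), (0.8, 0.1)]
targets = [0.2, 0.4, 0.1]
p = learn(pairs, targets)
```

A network of such connectives, each with its own learned weight, can then represent a whole query whose AND/OR semantics adapt to the individual user.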
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
"XANSONS for COD": a new small BOINC project in crystallography
NASA Astrophysics Data System (ADS)
Neverov, Vladislav S.; Khrapov, Nikolay P.
2018-04-01
"XANSONS for COD" (http://xansons4cod.com) is a new BOINC project aimed at creating the open-access database of simulated x-ray and neutron powder diffraction patterns for nanocrystalline phase of materials from the collection of the Crystallography Open Database (COD). The project uses original open-source software XaNSoNS to simulate diffraction patterns on CPU and GPU. This paper describes the scientific problem this project solves, the project's internal structure, its operation principles and organization of the final database.
Ushijima, Masaru; Mashima, Tetsuo; Tomida, Akihiro; Dan, Shingo; Saito, Sakae; Furuno, Aki; Tsukahara, Satomi; Seimiya, Hiroyuki; Yamori, Takao; Matsuura, Masaaki
2013-03-01
Genome-wide transcriptional expression analysis is a powerful strategy for characterizing the biological activity of anticancer compounds. It is often instructive to identify gene sets involved in the activity of a given drug compound for comparison with different compounds. Currently, however, there is no comprehensive gene expression database and related application system that is: (i) specialized in anticancer agents; (ii) easy to use; and (iii) open to the public. To develop a public gene expression database of antitumor agents, we first examined gene expression profiles in human cancer cells after exposure to 35 compounds, including 25 clinically used anticancer agents. Gene signatures were extracted that were classified as upregulated or downregulated after exposure to the drug. Hierarchical clustering showed that drugs with similar mechanisms of action, such as genotoxic drugs, were clustered. Connectivity map analysis further revealed that our gene signature data reflected modes of action of the respective agents. Together with the database, we developed analysis programs that calculate scores for ranking changes in gene expression and for searching statistically significant pathways from the Kyoto Encyclopedia of Genes and Genomes database in order to analyze the datasets more easily. Our database and the analysis programs are available online at our website (http://scads.jfcr.or.jp/db/cs/). Using these systems, we successfully showed that proteasome inhibitors are selectively classified as endoplasmic reticulum stress inducers and induce atypical endoplasmic reticulum stress. Thus, our public-access database and related analysis programs constitute a set of efficient tools to evaluate the mode of action of novel compounds and identify promising anticancer lead compounds. © 2012 Japanese Cancer Association.
The YeastGenome app: the Saccharomyces Genome Database at your fingertips.
Wong, Edith D; Karra, Kalpana; Hitz, Benjamin C; Hong, Eurie L; Cherry, J Michael
2013-01-01
The Saccharomyces Genome Database (SGD) is a scientific database that provides researchers with high-quality curated data about the genes and gene products of Saccharomyces cerevisiae. To provide instant and easy access to this information on mobile devices, we have developed YeastGenome, a native application for the Apple iPhone and iPad. YeastGenome can be used to quickly find basic information about S. cerevisiae genes and chromosomal features regardless of internet connectivity. With or without network access, you can view basic information and Gene Ontology annotations about a gene of interest by searching gene names and gene descriptions or by browsing the database within the app to find the gene of interest. With internet access, the app provides more detailed information about the gene, including mutant phenotypes, references and protein and genetic interactions, as well as provides hyperlinks to retrieve detailed information by showing SGD pages and views of the genome browser. SGD provides online help describing basic ways to navigate the mobile version of SGD, highlights key features and answers frequently asked questions related to the app. The app is available from iTunes (http://itunes.com/apps/yeastgenome). The YeastGenome app is provided freely as a service to our community, as part of SGD's mission to provide free and open access to all its data and annotations.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems, reads to parallel disk arrays and writes to solid-state storage, to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
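Partitioning a spatial index across cluster nodes is often done with a space-filling curve: interleave the bits of the (x, y, z) voxel coordinates into a Morton (Z-order) code, then split the code range among nodes so that spatially nearby voxels tend to land on the same node. A generic sketch of that technique, not necessarily the paper's exact scheme:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of x, y, z into a Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def node_for(x, y, z, n_nodes=4, bits=10):
    """Map a voxel to a cluster node by its position on the Z-order curve."""
    return morton3(x, y, z, bits) * n_nodes // (1 << (3 * bits))

# Neighboring voxels tend to share a node; that spatial locality is what
# makes cutout reads cheap after partitioning.
same = node_for(5, 5, 5) == node_for(5, 5, 6)
```

Because a 3-d cutout touches a contiguous-ish range of Morton codes, most of its blocks come from one or a few nodes, which is the locality property a spatial partition is chosen for.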
Mayday - integrative analytics for expression data
2010-01-01
Background DNA microarrays have become the standard method for large-scale analyses of gene expression and epigenomics. The increasing complexity and inherent noisiness of the generated data make visual data exploration ever more important. Fast deployment of new methods, as well as a combination of predefined, easy-to-apply methods with programmatic access to the data, are important requirements for any analysis framework. Mayday is an open source platform with an emphasis on visual data exploration and analysis. Many built-in methods for clustering, machine learning and classification are provided for dissecting complex datasets. Plugins can easily be written to extend Mayday's functionality in a large number of ways. As a Java program, Mayday is platform-independent and can be used as a Java WebStart application without any installation. Mayday can import data from several file formats, and database connectivity is included for efficient data organization. Numerous interactive visualization tools, including box plots, profile plots, principal component plots and a heatmap, are available; they can be enhanced with metadata and exported as publication-quality vector files. Results We have rewritten large parts of Mayday's core to make it more efficient and ready for future developments. Among the large number of new plugins are an automated processing framework, dynamic filtering, new and efficient clustering methods, a machine learning module and database connectivity. Extensive manual data analysis can be done using an inbuilt R terminal and an integrated SQL querying interface. Our visualization framework has become more powerful, new plot types have been added and existing plots improved. Conclusions We present a major extension of Mayday, a very versatile open-source framework for efficient microarray data analysis designed for biologists and bioinformaticians. Most everyday tasks are already covered. 
The large number of available plugins, as well as the extension possibilities using compiled plugins and ad-hoc scripting, allow for the rapid adaptation of Mayday to very specialized data exploration. Mayday is available at http://microarray-analysis.org. PMID:20214778
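Mayday itself is a Java application, but the clustering it applies to expression matrices is easy to illustrate. The toy k-means below (pure Python, invented data) shows the kind of operation such frameworks run over gene-expression profiles; it is a sketch, not Mayday's own implementation:

```python
# Toy k-means over gene-expression profiles (rows = genes, columns =
# conditions). Data and initialization are invented for illustration.

def kmeans(rows, k=2, iters=20):
    centers = rows[:k]  # naive init: first k profiles as centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for r in rows:
            # assign each profile to its nearest center (squared distance)
            d = [sum((a - b) ** 2 for a, b in zip(r, c)) for c in centers]
            groups[d.index(min(d))].append(r)
        # recompute centers as per-column means; keep old center if empty
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two obvious patterns: up-regulated vs down-regulated genes.
profiles = [[5.0, 5.1, 4.9], [5.2, 5.0, 5.1], [0.9, 1.1, 1.0], [1.0, 0.8, 1.2]]
centers, groups = kmeans(profiles)
print(len(groups[0]), len(groups[1]))  # 2 2
```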
Seebregts, Christopher; Dane, Pierre; Parsons, Annie Neo; Fogwill, Thomas; Rogers, Debbie; Bekker, Marcha; Shaw, Vincent; Barron, Peter
2018-01-01
MomConnect is a national initiative coordinated by the South African National Department of Health that sends text-based mobile phone messages free of charge to pregnant women who voluntarily register at any public healthcare facility in South Africa. We describe the system design and architecture of the MomConnect technical platform, planned as a nationally scalable and extensible initiative. It uses a health information exchange that can connect any standards-compliant electronic front-end application to any standards-compliant electronic back-end database. The implementation of the MomConnect technical platform, in turn, is a national reference application for electronic interoperability in line with the South African National Health Normative Standards Framework. The use of open content and messaging standards enables the architecture to include any application adhering to the selected standards. Its national implementation at scale demonstrates both the use of this technology and a key objective of global health information systems, which is to achieve implementation scale. The system’s limited clinical information, initially, allowed the architecture to focus on the base standards and profiles for interoperability in a resource-constrained environment with limited connectivity and infrastructural capacity. Maintenance of the system requires mobilisation of national resources. Future work aims to use the standard interfaces to include data from additional applications as well as to extend and interface the framework with other public health information systems in South Africa. The development of this platform has also shown the benefits of interoperability at both an organisational and technical level in South Africa. PMID:29713506
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.
2016-12-01
Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. 
In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data, meeting funding mandates and academic requirements while fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
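The core ODR idea described above is a shared community metadata template that individual researchers extend with custom fields. A minimal sketch of that merge (all field names are invented for illustration and are not ODR's actual schema): extensions may add fields but must not shadow the common core, so cross-dataset searches on common fields stay reliable:

```python
# Hypothetical sketch of ODR-style metadata: a shared community template
# merged with a lab-specific extension. Field names are invented.

COMMON_TEMPLATE = {"sample_id": None, "mineral_name": None, "collection_site": None}

def make_record(common: dict, extension: dict) -> dict:
    missing = set(COMMON_TEMPLATE) - set(common)
    if missing:
        raise ValueError(f"missing required common fields: {missing}")
    clash = set(common) & set(extension)
    if clash:
        # shadowing a shared field would break cross-dataset search
        raise ValueError(f"extension shadows common fields: {clash}")
    return {**common, **extension}

record = make_record(
    {"sample_id": "S-001", "mineral_name": "olivine", "collection_site": "Mojave"},
    {"raman_shift_cm1": [820, 855], "instrument": "inVia"},
)
print(sorted(record))
```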
Code of Federal Regulations, 2010 CFR
2010-07-01
... points; outby the last open crosscut; return air; permissibility requirements. 75.507-1 Section 75.507-1... other than power-connection points; outby the last open crosscut; return air; permissibility... permit for noncompliance may be used in return air outby the last open crosscut for the duration of such...
Olavarria, Oscar A; Holihan, Julie L; Cherla, Deepa; Perez, Cristina A; Kao, Lillian S; Ko, Tien C; Liang, Mike K
2017-05-01
Many healthcare providers have financial interests and relationships with healthcare companies. To maintain transparency, investigators are expected to disclose their conflicts of interest (COIs). Recently, the Centers for Medicare and Medicaid Services developed the Open Payment database of COIs reported by industry. We hypothesize that there is discordance between industry-reported and physician self-reported COIs in ventral hernia publications. PubMed was searched for ventral hernia studies accepted for publication between June 2013 and October 2015 and published by authors from the US. Conflicts of interest were defined as payments received as honoraria, consulting fees, compensation for serving as faculty or as a speaker at a venue, research funding payments, or having ownerships/partnerships in companies. Conflicts of interest disclosed on the published articles were compared with the financial relationships in the Open Payments database. A total of 100 studies were selected with 497 participating authors. Information was available from the Open Payments database for 245 (49.2%) authors, of which 134 (26.9%) met the definition for COI. When comparing COIs disclosed by authors and data in the Open Payments database, 81 (16.3%) authors had at least 1 COI but did not declare any, 35 (7.0%) authors had COIs other than what they declared, and 20 (4.0%) declared a COI not listed in the Open Payments database, for a combined discordance rate of 27.3%. There is substantial discordance between self-reported COI in published articles compared with those in the Centers for Medicare and Medicaid Services Open Payments database. Additional studies are needed to determine the reasons for these differences, as COI can influence the validity of the design, conduct, and results of a study. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
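The comparison in this study reduces to set logic per author: the companies an author declared versus the companies listed for them in Open Payments. A sketch with invented toy data (author labels and company names are hypothetical) illustrating the three discordance categories the abstract reports:

```python
# Toy reconstruction of the COI comparison logic. Authors and companies
# are invented; only the classification idea mirrors the study.

def classify(declared: set, reported: set) -> str:
    if not declared and reported:
        return "undeclared"   # had COIs in the database but declared none
    if declared and reported and declared != reported:
        return "partial"      # declared COIs other than those listed
    if declared and not reported:
        return "unlisted"     # declared a COI absent from the database
    return "concordant"

authors = {
    "A": ({"MeshCo"}, {"MeshCo"}),
    "B": (set(), {"MeshCo", "SutureInc"}),
    "C": ({"MeshCo"}, {"SutureInc"}),
    "D": ({"SutureInc"}, set()),
}
labels = {name: classify(d, r) for name, (d, r) in authors.items()}
discordant = sum(v != "concordant" for v in labels.values())
print(labels["B"], discordant / len(authors))  # undeclared 0.75
```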
Williams, Christopher R; Brooke, Benjamin S
2017-10-01
Patient outcomes after open abdominal aortic aneurysm and endovascular aortic aneurysm repair have been widely reported from several large, randomized, controlled trials. It is not clear whether these trial outcomes are representative of abdominal aortic aneurysm repair procedures performed in real-world hospital settings across the United States. This study was designed to evaluate population-based outcomes after endovascular aortic aneurysm repair versus open abdominal aortic aneurysm repair using statewide inpatient databases and examine how they have helped improve our understanding of abdominal aortic aneurysm repair. A systematic search of MEDLINE, EMBASE, and CINAHL databases was performed to identify articles comparing endovascular aortic aneurysm repair and open abdominal aortic aneurysm repair using data from statewide inpatient databases. This search was limited to studies published in the English language after 1990, and abstracts were screened and abstracted by 2 authors. Our search yielded 17 studies published between 2004 and 2016 that used data from 29 different statewide inpatient databases to compare endovascular aortic aneurysm repair versus open abdominal aortic aneurysm repair. These studies support the randomized, controlled trial results, including a lower mortality associated with endovascular aortic aneurysm repair extended from the perioperative period up to 3 years after operation, as well as a higher complication rate after endovascular aortic aneurysm repair. The evidence from statewide inpatient database analyses has also elucidated trends in procedure volume, patient case mix, volume-outcome relationships, and health care disparities associated with endovascular aortic aneurysm repair versus open abdominal aortic aneurysm repair. 
Population analyses of endovascular aortic aneurysm repair and open abdominal aortic aneurysm repair using statewide inpatient databases have confirmed short- and long-term mortality outcomes obtained from large, randomized, controlled trials. Moreover, these analyses have allowed us to assess the effect of endovascular aortic aneurysm repair adoption on population outcomes and patient case mix over time. Published by Elsevier Inc.
Recent achievements in real-time computational seismology in Taiwan
NASA Astrophysics Data System (ADS)
Lee, S.; Liang, W.; Huang, B.
2012-12-01
Real-time computational seismology is now achievable, but it requires tight integration between seismic databases and high-performance computing. We have developed a real-time moment tensor monitoring system (RMT) that applies a moment tensor inversion (CMT) technique to continuous BATS records. A real-time online earthquake simulation service (ROS) is also open to researchers and for public earthquake science education. Combining RMT with ROS, an earthquake report based on computational seismology can be produced within 5 minutes of an earthquake's occurrence (RMT obtains point-source information in less than 120 s; ROS completes a 3D simulation in less than 3 minutes). All of these computational results are now posted on the Internet in real time. For more information, visit the real-time computational seismology earthquake report webpage (RCS).
Large and linked in scientific publishing
Goodman, Laurie; Edmunds, Scott C; Basford, Alexandra T
2012-07-12
We are delighted to announce the launch of GigaScience, an online open-access journal that focuses on research using or producing large datasets in all areas of biological and biomedical sciences. GigaScience is a new type of journal that provides standard scientific publishing linked directly to a database that hosts all the relevant data. The primary goals for the journal, detailed in this editorial, are to promote more rapid data release, broader use and reuse of data, improved reproducibility of results, and direct, easy access between analyses and their data. Direct and permanent connections of scientific analyses and their data (achieved by assigning all hosted data a citable DOI) will enable better analysis and deeper interpretation of the data in the future. PMID:23587310
Lehtomäki, Joona; Tuominen, Sakari; Toivonen, Tuuli; Leinonen, Antti
2015-01-01
The boreal region is facing intensifying resource extraction pressure, but the lack of comprehensive biodiversity data makes operative forest conservation planning difficult. Many countries have implemented forest inventory schemes and are making extensive and up-to-date forest databases increasingly available. Some of the more detailed inventory databases, however, remain proprietary and unavailable for conservation planning. Here, we investigate how well different open and proprietary forest inventory data sets suit the purpose of conservation prioritization in Finland. We also explore how much priorities are affected by using the less accurate but open data. First, we construct a set of indices for forest conservation value based on quantitative information commonly found in forest inventories. These include the maturity of the trees, tree species composition, and site fertility. Secondly, using these data and accounting for connectivity between forest types, we investigate the patterns in conservation priority. For prioritization, we use Zonation, a method and software for spatial conservation prioritization. We then validate the prioritizations by comparing them to known areas of high conservation value. We show that the overall priority patterns are relatively consistent across different data sources and analysis options. However, the coarse data cannot be used to accurately identify the high-priority areas as it misses much of the fine-scale variation in forest structures. We conclude that, while inventory data collected for forestry purposes may be useful for forest conservation purposes, it needs to be detailed enough to be able to account for more fine-scaled features of high conservation value. These results underline the importance of making detailed inventory data publicly available. Finally, we discuss how the prioritization methodology we used could be integrated into operative forest management, especially in countries in the boreal zone. 
PMID:26317227
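The prioritization described above starts from indices of conservation value built from stand maturity, tree species composition, and site fertility. A hedged sketch of such an index (the weights and functional form below are invented for illustration; the study's actual spatial prioritization is done with Zonation):

```python
# Sketch of a forest conservation-value index in the spirit of the
# paper: combine maturity, species composition and fertility into a
# single score, then rank stands. Weights are invented, not the study's.

def conservation_value(maturity_yr, deciduous_share, fertility_class,
                       w=(0.5, 0.3, 0.2)):
    m = min(maturity_yr / 120.0, 1.0)   # older stands score higher, capped
    s = deciduous_share                 # mixed/deciduous stands favored
    f = fertility_class / 5.0           # fertile sites support more species
    return w[0] * m + w[1] * s + w[2] * f

stands = {
    "old_herb_rich": conservation_value(140, 0.6, 5),
    "young_pine":    conservation_value(30, 0.1, 2),
}
ranked = sorted(stands, key=stands.get, reverse=True)
print(ranked[0])  # old_herb_rich
```

Coarse inventory data blurs exactly these inputs, which is why the paper finds it misses fine-scale high-value stands.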
Spatial and environmental connectivity analysis in a cholera vaccine trial.
Emch, Michael; Ali, Mohammad; Root, Elisabeth D; Yunus, Mohammad
2009-02-01
This paper develops theory and methods for vaccine trials that utilize spatial and environmental information. Satellite imagery is used to identify whether households are connected to one another via water bodies in a study area in rural Bangladesh. Then relationships between neighborhood-level cholera vaccine coverage and placebo incidence and neighborhood-level spatial variables are measured. The study hypothesis is that unvaccinated people who are environmentally connected to people who have been vaccinated will be at lower risk compared to unvaccinated people who are environmentally connected to people who have not been vaccinated. We use four datasets including: a cholera vaccine trial database, a longitudinal demographic database of the rural population from which the vaccine trial participants were selected, a household-level geographic information system (GIS) database of the same study area, and high resolution Quickbird satellite imagery. An environmental connectivity metric was constructed by integrating the satellite imagery with the vaccine and demographic databases linked with GIS. The results show that there is a relationship between neighborhood rates of cholera vaccination and placebo incidence. Thus, people are indirectly protected when more people in their environmentally connected neighborhood are vaccinated. This result is similar to our previous work that used a simpler Euclidean distance neighborhood to measure neighborhood vaccine coverage [Ali, M., Emch, M., von Seidlein, L., Yunus, M., Sack, D. A., Holmgren, J., et al. (2005). Herd immunity conferred by killed oral cholera vaccines in Bangladesh. Lancet, 366(9479), 44-49]. Our new method of measuring environmental connectivity is more precise since it takes into account the transmission mode of cholera and therefore this study validates our assertion that the oral cholera vaccine provides indirect protection in addition to direct protection.
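The environmental connectivity metric is, at heart, a graph computation: households are nodes, shared water bodies are edges, and an unvaccinated household's indirect protection is estimated from the vaccination rate among its connected neighbors. A minimal sketch with invented data:

```python
# Sketch of a neighborhood vaccine-coverage metric over an
# environmental-connectivity graph. Household IDs, links and vaccination
# status are invented for illustration.

water_links = {  # household -> households sharing a water body
    "h1": {"h2", "h3"},
    "h2": {"h1", "h3"},
    "h3": {"h1", "h2", "h4"},
    "h4": {"h3"},
}
vaccinated = {"h2"}

def neighborhood_coverage(house: str) -> float:
    """Fraction of environmentally connected neighbors who are vaccinated."""
    nbrs = water_links[house]
    return sum(n in vaccinated for n in nbrs) / len(nbrs)

# The study's hypothesis: placebo recipients with high neighborhood
# coverage should show lower cholera incidence (indirect protection).
print(neighborhood_coverage("h1"), neighborhood_coverage("h4"))  # 0.5 0.0
```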
OpenID Connect as a security service in cloud-based medical imaging systems.
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-04-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have consistently been regarded as the major obstacles to the adoption of cloud computing in healthcare. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer (REST)-based federated identity solution. It is one of the most widely adopted open standards and a candidate to become the de facto standard for securing cloud computing and mobile applications; it has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for the secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as web-based and mobile clients, in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while ensuring that deploying DI-r and PACS to private or community clouds provides security levels equivalent to the traditional computing model.
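The OpenID Connect authorization-code flow the abstract relies on has two client-side pieces worth sketching: building the authorization request and validating the returned ID token's claims. The endpoint URL and client values below are placeholders; the parameter names and claim checks follow the OpenID Connect Core specification, but signature verification against the provider's JWKS is deliberately omitted here and is mandatory in a real deployment:

```python
# Minimal sketch of OIDC authorization-code flow pieces. Issuer, client
# ID and redirect URI are hypothetical placeholders.
import time
from urllib.parse import urlencode

def auth_request_url(issuer, client_id, redirect_uri, state, nonce):
    """Step 1: redirect the user agent to the provider's authorize endpoint."""
    params = {
        "response_type": "code",  # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",        # "openid" marks this as OIDC, not plain OAuth
        "state": state,           # CSRF protection for the redirect
        "nonce": nonce,           # binds the ID token to this request
    }
    return f"{issuer}/authorize?{urlencode(params)}"

def validate_id_token_claims(claims, issuer, client_id, nonce):
    """Step 3: after exchanging the code, check the ID token's claims.

    Real clients must ALSO verify the token's JWS signature.
    """
    return (claims.get("iss") == issuer
            and claims.get("aud") == client_id
            and claims.get("exp", 0) > time.time()
            and claims.get("nonce") == nonce)

url = auth_request_url("https://idp.example", "pacs-client",
                       "https://pacs.example/cb", "st1", "n1")
print("response_type=code" in url)  # True
```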
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
An open experimental database for exploring inorganic materials
Zakutayev, Andriy; Wunder, Nick; Schwarting, Marcus; Perkins, John D.; White, Robert; Munch, Kristin; Tumas, William; Phillips, Caleb
2018-04-03
The use of advanced machine learning algorithms in experimental materials science is limited by the lack of sufficiently large and diverse datasets amenable to data mining. If publicly open, such data resources would also enable materials research by scientists without access to expensive experimental equipment. Here, we report on our progress towards a publicly open High Throughput Experimental Materials (HTEM) Database (htem.nrel.gov). This database currently contains 140,000 sample entries, characterized by structural (100,000), synthetic (80,000), chemical (70,000), and optoelectronic (50,000) properties of inorganic thin film materials, grouped in >4,000 sample entries across >100 materials systems; more than half of these data are publicly available. This article shows how the HTEM database may enable scientists to explore materials by browsing a web-based user interface and through an application programming interface. This paper also describes the HTE approach to generating materials data, and discusses the laboratory information management system (LIMS) that underpins the HTEM database. Finally, this manuscript illustrates how advanced machine learning algorithms can be applied to materials science problems using this open data resource. PMID:29611842
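Once sample entries have been retrieved through an API like the one the HTEM abstract describes, a typical first step for machine learning is selecting the subset of entries that carry every required characterization. The field names below are invented for illustration (consult htem.nrel.gov for the real API and schema):

```python
# Sketch: select machine-learning-ready entries from downloaded sample
# records by which characterizations each entry carries. Field names
# are hypothetical, not HTEM's documented schema.

samples = [
    {"id": 1, "xrd": True,  "xrf": True,  "optical": True},
    {"id": 2, "xrd": True,  "xrf": False, "optical": False},
    {"id": 3, "xrd": False, "xrf": True,  "optical": True},
]

def complete_for_ml(entries, required=("xrd", "xrf", "optical")):
    """Keep only entries with every characterization a model needs."""
    return [e for e in entries if all(e[k] for k in required)]

print([e["id"] for e in complete_for_ml(samples)])  # [1]
```

This mirrors the abstract's point that only a subset of the 140,000 entries carries all four property classes.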
A scalable database model for multiparametric time series: a volcano observatory case study
NASA Astrophysics Data System (ADS)
Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea
2014-05-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of a sampling period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in performing particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the loader layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow real-time signal acquisition, according to a per-user data access policy.
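The TSDSystem idea of standardizing heterogeneous series into one relational store with a common time scale can be sketched with a two-table layout: one table of series metadata, one of timestamped samples. SQLite stands in here for the paper's actual DBMS, and the table layout and series names are invented for illustration:

```python
# Sketch of a TSDSystem-style relational store for heterogeneous time
# series, queried on a common time scale. Schema is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE series  (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
    CREATE TABLE samples (series_id INTEGER REFERENCES series(id),
                          t REAL,        -- seconds on a shared time scale
                          value REAL);
""")
db.execute("INSERT INTO series VALUES (1, 'tremor_rms', 'um/s')")
db.execute("INSERT INTO series VALUES (2, 'SO2_flux', 't/d')")
db.executemany("INSERT INTO samples VALUES (?, ?, ?)",
               [(1, 0.0, 3.2), (1, 60.0, 3.5), (2, 30.0, 410.0)])

# Query two heterogeneous series over one shared time range.
rows = db.execute("""
    SELECT s.name, m.t, m.value
    FROM samples m JOIN series s ON s.id = m.series_id
    WHERE m.t BETWEEN 0 AND 60 ORDER BY m.t
""").fetchall()
print(len(rows))  # 3
```

Production systems would add the paper's table partitioning; the synchronized query over a common time column is the core of the design.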
A multidisciplinary database for geophysical time series management
NASA Astrophysics Data System (ADS)
Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.
2013-12-01
Validation and extraction of molecular-geometry information from small-molecule databases.
Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N
2017-02-01
A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
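The five selection criteria read naturally as a filter pipeline. The sketch below is hypothetical: the field names and the 0.02 Å bond-deviation threshold are invented for illustration; only the 0.84 Å resolution cutoff comes from the text.

```python
# Illustrative screening filter; not the actual COD validation code.
def passes_screen(entry):
    return (
        entry["resolution"] <= 0.84        # (i) 0.84 A resolution or higher
        and entry["single_crystal"]        # (ii) single-crystal experiment
        and entry["min_occupancy"] >= 1.0  # (ii) all atoms fully occupied
        and entry["valence_ok"]            # (iii) valence/connection sanity
        and entry["max_bond_dev"] < 0.02   # (iv) bond lengths near expected (A)
        and not entry["has_collisions"]    # (v) no atomic collisions
    )

entries = [
    {"resolution": 0.80, "single_crystal": True, "min_occupancy": 1.0,
     "valence_ok": True, "max_bond_dev": 0.01, "has_collisions": False},
    {"resolution": 1.10, "single_crystal": True, "min_occupancy": 1.0,
     "valence_ok": True, "max_bond_dev": 0.01, "has_collisions": False},
    {"resolution": 0.75, "single_crystal": False, "min_occupancy": 0.5,
     "valence_ok": True, "max_bond_dev": 0.05, "has_collisions": True},
]
kept = [e for e in entries if passes_screen(e)]
print(len(kept))  # only the first toy entry survives all five checks
```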
Use of multi-opening burrow systems by black-footed ferrets
Biggins, Dean E.
2012-01-01
Multi-opening burrow systems constructed by prairie dogs (Cynomys) ostensibly provide escape routes when prairie dogs are pursued by predators capable of entering the burrows, such as black-footed ferrets (Mustela nigripes), or by predators that can rapidly dig into the tunnels, such as American badgers (Taxidea taxus). Because badgers also prey on ferrets, ferrets might similarly benefit from multi-opening burrow systems. Using an air blower, white-tailed prairie dog (Cynomys leucurus) burrow openings were tested for connectivity on plots occupied by black-footed ferrets and on randomly selected plots in Wyoming. Significantly more connected openings were found on ferret-occupied plots than on random plots. Connected openings might be due to modifications by ferrets in response to plugging by prairie dogs, due to selection by ferrets for complex systems with multiple openings that are already unobstructed, or simply due to ferrets lingering at kill sites that were multi-opening systems selected by their prairie dog prey.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.
2004-05-12
An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.
Wang, Lei; Alpert, Kathryn I.; Calhoun, Vince D.; Cobia, Derin J.; Keator, David B.; King, Margaret D.; Kogan, Alexandr; Landis, Drew; Tallis, Marcelo; Turner, Matthew D.; Potkin, Steven G.; Turner, Jessica A.; Ambite, Jose Luis
2015-01-01
SchizConnect (www.schizconnect.org) is built to address the issues of multiple data repositories in schizophrenia neuroimaging studies. It includes a level of mediation—translating across data sources—so that the user can place one query, e.g. for diffusion images from male individuals with schizophrenia, and find out from across participating data sources how many datasets there are, as well as downloading the imaging and related data. The current version handles the Data Usage Agreements across different studies, as well as interpreting database-specific terminologies into a common framework. New data repositories can also be mediated to bring immediate access to existing datasets. Compared with centralized, upload data sharing models, SchizConnect is a unique, virtual database with a focus on schizophrenia and related disorders that can mediate live data as information are being updated at each data source. It is our hope that SchizConnect can facilitate testing new hypotheses through aggregated datasets, promoting discovery related to the mechanisms underlying schizophrenic dysfunction. PMID:26142271
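The mediation idea—one common query translated into each source's own terminology, with per-source counts aggregated for the user—can be sketched roughly as follows; the term maps, source names and counts are invented, not SchizConnect's actual vocabulary or API:

```python
# Hypothetical mediation sketch, not the SchizConnect implementation.
TERM_MAP = {  # common term -> each source's own terminology
    "diffusion_mri": {"source_a": "DTI", "source_b": "dwi_scan"},
}

SOURCES = {  # each source's live holdings, keyed by its local terms
    "source_a": {"DTI": 40, "T1": 100},
    "source_b": {"dwi_scan": 25, "rest_fmri": 60},
}

def mediated_count(common_term):
    """Translate the common term, fan the query out, aggregate counts."""
    total = 0
    for source, local_term in TERM_MAP[common_term].items():
        total += SOURCES[source].get(local_term, 0)
    return total

print(mediated_count("diffusion_mri"))  # 40 from source_a + 25 from source_b
```

Because the sources are queried live rather than copied centrally, updated holdings at any source are reflected the next time the mediated query runs.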
PathwayAccess: CellDesigner plugins for pathway databases.
Van Hemert, John L; Dickerson, Julie A
2010-09-15
CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.
Allinea Parallel Profiling and Debugging Tools on the Peregrine System
Download the Allinea remote client for your platform (Mac/Windows/Linux) and configure it to connect to Peregrine; alternatively, enable X11 forwarding when connecting to Peregrine and run 'map' to open the profiling GUI directly.
Drapery assembly including insulated drapery liner
Cukierski, Gwendolyn
1983-01-01
A drapery assembly is disclosed for covering a framed wall opening, the assembly including drapery panels hung on a horizontal traverse rod, the rod having a pair of master slides and means for displacing the master slides between open and closed positions. A pair of insulating liner panels are positioned behind the drapery, the remote side edges of the liner panels being connected with the side portions of the opening frame, and the adjacent side edges of the liner panels being connected with a pair of vertically arranged center support members adapted for sliding movement longitudinally of a horizontal track member secured to the upper horizontal portion of the opening frame. Pivotally arranged brackets connect the center support members with the master slides of the traverse rod whereby movement of the master slides to effect opening and closing of the drapery panels effects simultaneous opening and closing of the liner panels.
The Web-Database Connection Tools for Sharing Information on the Campus Intranet.
ERIC Educational Resources Information Center
Thibeault, Nancy E.
This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…
Resources available for autism research in the big data era: a systematic review
Milne, Elizabeth
2017-01-01
Recently, there has been a move encouraged by many stakeholders towards generating big, open data in many areas of research. One area where big, open data is particularly valuable is in research relating to complex heterogeneous disorders such as Autism Spectrum Disorder (ASD). The inconsistencies of findings and the great heterogeneity of ASD necessitate the use of big and open data to tackle important challenges such as understanding and defining the heterogeneity and potential subtypes of ASD. To this end, a number of initiatives have been established that aim to develop big and/or open data resources for autism research. In order to provide a useful data reference for autism researchers, a systematic search for ASD data resources was conducted using the Scopus database, the Google search engine, and the pages on ‘recommended repositories’ by key journals, and the findings were translated into a comprehensive list focused on ASD data. The aim of this review is to systematically search for all available ASD data resources providing the following data types: phenotypic, neuroimaging, human brain connectivity matrices, human brain statistical maps, biospecimens, and ASD participant recruitment. A total of 33 resources were found containing different types of data from varying numbers of participants. Description of the data available from each data resource, and links to each resource is provided. Moreover, key implications are addressed and underrepresented areas of data are identified. PMID:28097074
Open-access databases as unprecedented resources and drivers of cultural change in fisheries science
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Utz, Ryan
2014-01-01
Open-access databases with utility in fisheries science have grown exponentially in quantity and scope over the past decade, with profound impacts on our discipline. The management, distillation, and sharing of an exponentially growing stream of open-access data represent several fundamental challenges in fisheries science. Many of the currently available open-access resources may not be universally known among fisheries scientists. We therefore introduce many national- and global-scale open-access databases with applications in fisheries science and provide an example of how they can be harnessed to perform valuable analyses without additional field efforts. We also discuss how the development, maintenance, and utilization of open-access data are likely to pose technical, financial, and educational challenges to fisheries scientists. The cultural implications that will coincide with the rapidly increasing availability of free data should compel the American Fisheries Society to actively address these problems now to help ease the forthcoming cultural transition.
Baili, Paolo; Torresani, Michele; Agresti, Roberto; Rosito, Giuseppe; Daidone, Maria Grazia; Veneroni, Silvia; Cavallo, Ilaria; Funaro, Francesco; Giunco, Marco; Turco, Alberto; Amash, Hade; Scavo, Antonio; Minicozzi, Pamela; Bella, Francesca; Meneghini, Elisabetta; Sant, Milena
2015-01-01
In clinical research, many potentially useful variables are available via the routine activity of cancer center-based clinical registries (CCCR). We present the experience of the breast cancer clinical registry at Fondazione IRCCS "Istituto Nazionale dei Tumori" to give an example of how a CCCR can be planned, implemented, and used. Five criteria were taken into consideration while planning our CCCR: (a) available clinical and administrative databases ought to be exploited to the maximum extent; (b) open source software should be used; (c) a Web-based interface must be designed; (d) CCCR data must be compatible with population-based cancer registry data; (e) CCCR must be an open system, able to be connected with other data repositories. The amount of work needed for the implementation of a CCCR is inversely linked with the amount of available coded data: the fewer data are available in the input databases as coded variables, the more work will be necessary, for information technology staff, text mining analysis, and registrars (for collecting data from clinical records). A cancer registry in a comprehensive cancer center can be used for several research aspects, such as estimate of the number of cases needed for clinical studies, assessment of biobank specimens with specific characteristics, evaluation of clinical practice and adhesion to clinical guidelines, comparative studies between clinical and population sets of patients, studies on cancer prognosis, and studies on cancer survivorship.
OpenID Connect as a security service in cloud-based medical imaging systems
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-01-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth together, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards, may become the de facto standard for securing cloud computing and mobile applications, and has been regarded as the "Kerberos of cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while deploying DI-r and PACS to private or community clouds that provide security levels equivalent to the traditional computing model. PMID:27340682
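The first step of the OpenID Connect authorization code flow mentioned above—the client building the authorization request it redirects the user to—might look roughly like this; the issuer, client_id and redirect URI are placeholders, not values from any real DI-r/PACS deployment:

```python
from urllib.parse import urlencode

# Sketch of an OIDC authorization code flow request; all endpoint and
# client values below are placeholders for illustration only.
def build_auth_request(issuer, client_id, redirect_uri, state, nonce):
    params = {
        "response_type": "code",    # authorization code flow
        "scope": "openid profile",  # 'openid' marks this as OIDC, not plain OAuth
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,             # CSRF protection for the client
        "nonce": nonce,             # binds the ID token to this request
    }
    return f"{issuer}/authorize?{urlencode(params)}"

url = build_auth_request(
    "https://idp.example.org", "pacs-client",
    "https://pacs.example.org/cb", "st123", "n456",
)
print(url)
```

After the user authenticates at the identity provider, the client exchanges the returned code at the token endpoint for an ID token and access token; that exchange is omitted here.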
33 CFR 117.795 - Jamaica Bay and Connecting Waterways.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Connecting Waterways. (a) The draw of the Marine Parkway bridge, mile 3.0 over Rockaway Inlet, shall open on signal Monday through Friday from 8 a.m. to 4 p.m. At all other times, the draw shall open on signal if at least eight hours notice is given; however, the draw shall open on signal if at least a one hour...
33 CFR 117.795 - Jamaica Bay and Connecting Waterways.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Connecting Waterways. (a) The draw of the Marine Parkway bridge, mile 3.0 over Rockaway Inlet, shall open on signal Monday through Friday from 8 a.m. to 4 p.m. At all other times, the draw shall open on signal if at least eight hours notice is given; however, the draw shall open on signal if at least a one hour...
33 CFR 117.795 - Jamaica Bay and Connecting Waterways.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Connecting Waterways. (a) The draw of the Marine Parkway bridge, mile 3.0 over Rockaway Inlet, shall open on signal Monday through Friday from 8 a.m. to 4 p.m. At all other times, the draw shall open on signal if at least eight hours notice is given; however, the draw shall open on signal if at least a one hour...
33 CFR 117.795 - Jamaica Bay and Connecting Waterways.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Connecting Waterways. (a) The draw of the Marine Parkway bridge, mile 3.0 over Rockaway Inlet, shall open on signal Monday through Friday from 8 a.m. to 4 p.m. At all other times, the draw shall open on signal if at least eight hours notice is given; however, the draw shall open on signal if at least a one hour...
33 CFR 117.795 - Jamaica Bay and Connecting Waterways.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Connecting Waterways. (a) The draw of the Marine Parkway bridge, mile 3.0 over Rockaway Inlet, shall open on signal Monday through Friday from 8 a.m. to 4 p.m. At all other times, the draw shall open on signal if at least eight hours notice is given; however, the draw shall open on signal if at least a one hour...
Implementation of an open adoption research data management system for clinical studies.
Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate
2017-07-06
Research institutions need to manage multiple studies with individual data sets, processing rules and different permissions. So far, there is no standard technology that provides an easy to use environment to create databases and user interfaces for clinical trials or research studies. Therefore various software solutions are used, from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), down to very basic approaches built on self-designed Microsoft® databases. The technology applied to conduct those studies varies tremendously from study to study, making it difficult to evaluate data across various studies (meta-analysis) and to keep a defined level of quality in database design, data processing, displaying and exporting. Furthermore, the systems being used to collect study data are often operated redundantly to systems used in patient care. As a consequence, the data collection in studies is inefficient and data quality may suffer from unsynchronized datasets, non-normalized database scenarios and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.
New mutations and an updated database for the patched-1 (PTCH1) gene.
Reinders, Marie G; van Hout, Antonius F; Cosgun, Betül; Paulussen, Aimée D; Leter, Edward M; Steijlen, Peter M; Mosterd, Klara; van Geel, Michel; Gille, Johan J
2018-05-01
Basal cell nevus syndrome (BCNS) is an autosomal dominant disorder characterized by multiple basal cell carcinomas (BCCs), maxillary keratocysts, and cerebral calcifications. BCNS is most commonly caused by a germline mutation in the patched-1 (PTCH1) gene. PTCH1 mutations are also described in patients with holoprosencephaly. We have established a locus-specific database for the PTCH1 gene using the Leiden Open Variation Database (LOVD). In addition to 331 previously published unique PTCH1 mutations, we included 117 new PTCH1 variations, found in 141 patients who had a positive PTCH1 mutation analysis in either the VU University Medical Centre (VUMC) or Maastricht University Medical Centre (MUMC) between 1995 and 2015. The database provides an open collection for both clinicians and researchers and is accessible online at http://www.lovd.nl/PTCH1. © 2018 The Authors. Molecular Genetics & Genomic Medicine published by Wiley Periodicals, Inc.
Wong, W P; Camfield, D A; Woods, W; Sarris, J; Pipingas, A
2015-10-01
Whilst a number of previous studies have been conducted in order to investigate functional brain changes associated with eyes-closed meditation techniques, there is a relative scarcity in the literature with regards to changes occurring during eyes-open meditation. The current project used magnetoencephalography (MEG) to investigate differences in spectral power and functional connectivity between 11 long-term mindfulness meditators (LTMMs) with >5 years of experience and 12 meditation-naïve control participants both during baseline eyes-open rest and eyes-open open-monitoring (OM) mindfulness meditation. During resting with eyes-open, prior to meditating, greater mean alpha power was observed for LTMMs in comparison to controls. However, during the course of OM meditation, a significantly greater increase in theta power was observed over a broad fronto-centro-parietal region for control participants in comparison to LTMMs. In contrast, whole-head mean connectivity was found to be significantly greater for long-term meditators in comparison to controls in the theta band both during rest as well as during meditation. Additionally, mean connectivity was significantly lower for long-term meditators in the low gamma band during rest and significantly lower in both low and high gamma bands during meditation; and the variance of low-gamma connectivity scores for long-term meditators was significantly decreased compared to the control group. The current study provides important new information as to the trait functional changes in brain activity associated with long-term mindfulness meditation, as well as the state changes specifically associated with eyes-open open monitoring meditation techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
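Band-limited spectral power of the kind compared between groups above can be estimated, in outline, with a simple periodogram; the sampling rate, band edges and toy signal below are conventional illustrations, not the study's MEG pipeline:

```python
import numpy as np

# Toy band-power sketch: estimate mean power spectral density within a
# frequency band from a periodogram. Values are illustrative only.
def band_power(signal, fs, lo, hi):
    """Mean periodogram power within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256                      # Hz (a common EEG/MEG sampling rate)
t = np.arange(0, 2, 1 / fs)   # 2 s of data
sig = np.sin(2 * np.pi * 10 * t)        # strong 10 Hz (alpha) component
sig += 0.1 * np.sin(2 * np.pi * 5 * t)  # weak 5 Hz (theta) component

alpha = band_power(sig, fs, 8, 12)
theta = band_power(sig, fs, 4, 8)
print(alpha > theta)  # the alpha-dominated toy signal shows more alpha power
```

Connectivity measures such as those in the study are computed between sensor pairs from the same band-limited signals, e.g. via phase- or amplitude-coupling metrics, rather than from single-channel power.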
NASA Astrophysics Data System (ADS)
McNeil, B.
2016-02-01
We know that universities that host Massive Open Online Courses (MOOCs) are driving deep global engagement by allowing the public to enrol in open, free, high-quality courses online. At thinkable.org, we have pioneered the development of 'Massive Open Online Grants' (MOOGs), which allow research organizations & societies to host open grants, prizes & contests in which the public can vote, connect & engage with high-quality research for the first time. In order to reach a wide audience, researchers create short video summaries of their work that allow anyone to learn, vote, share and connect with research important to them. We have found this helps both research organizations and researchers reach a wide audience to demonstrate public impact & innovation, drive collaborative partnerships and raise public funding for their research. In this talk, I will outline our experience in developing MOOGs and the immense future opportunities to revolutionize how high-quality research engages with society, by using technology to empower deep connections between ocean researchers & the wider public.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrad, B.; Depp, D.
1994-12-31
Networks are hot and CD-ROM is also hot, but how do they mix? CD-ROM is a relatively inexpensive medium for storing and delivering information, and increasingly, users are connected to networks. But the technologies have developed separately, and there are obstacles to their integration. Drawing on their experience networking CD-ROMs at Oak Ridge National Laboratory, the authors discuss CD-ROM's strengths and weaknesses as a technology for delivering information to the desktop. CD-ROM networking solutions are LAN-based, not "open systems." Despite this limitation, due to the large number of information resources available on CD-ROM and the relative ease of installing and maintaining databases on CD-ROM, CD-ROMs remain an essential piece of the electronic information puzzle.
Connecting to HPC VPN | High-Performance Computing | NREL
Your username and password will match your NREL network account login/password. From OS X or Linux, open a terminal to connect; once the connection is finalized, open a Remote Desktop connection using server name WINHPC02 (this is the login node).
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
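The three-tier layering the abstract describes—domain-specific modules at the bottom connecting through a universal middle tier to the engine at the top—might be caricatured as follows; all class and method names are invented for illustration and do not come from the patent:

```python
# Hypothetical three-tier sketch of the multi-tier extension architecture.
class DomainExtension:                 # bottom tier: domain-specific module
    def __init__(self, domain):
        self.domain = domain
    def search(self, query):           # e.g. image/video/audio search service
        return f"{self.domain} results for {query!r}"

class UniversalExtension:              # middle tier: routes to domain modules
    def __init__(self):
        self.modules = {}
    def register(self, ext):
        self.modules[ext.domain] = ext
    def dispatch(self, domain, query):
        return self.modules[domain].search(query)

class Engine:                          # top tier: object-relational engine
    def __init__(self, universal):
        self.universal = universal
    def query(self, domain, q):
        return self.universal.dispatch(domain, q)

u = UniversalExtension()
u.register(DomainExtension("image"))
u.register(DomainExtension("audio"))
engine = Engine(u)
print(engine.query("image", "sunset"))
```

The design point the sketch illustrates is that the engine never talks to a domain module directly: new domains plug into the middle tier without changing the top tier.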
DOT National Transportation Integrated Search
2012-03-01
This project initiated the development of a computerized database of ITS facilities, including conduits, junction boxes, cameras, connections, etc. The current system consists of a database of conduit sections of various lengths. Over the length ...
"Big data" and "open data": What kind of access should researchers enjoy?
Chatellier, Gilles; Varlet, Vincent; Blachier-Poisson, Corinne
2016-02-01
The healthcare sector is currently facing a new paradigm, the explosion of "big data". Coupled with advances in computer technology, the field of "big data" appears promising, allowing us to better understand the natural history of diseases, to follow up on the implementation of new technologies (devices, drugs) and to participate in precision medicine, etc. Data sources are multiple (medical and administrative data, electronic medical records, data from rapidly developing technologies such as DNA sequencing, connected devices, etc.) and heterogeneous, while their use requires complex methods for accurate analysis. Moreover, faced with this new paradigm, we must determine who could (or should) have access to which data, how to combine collective interest with the protection of personal data, and how to finance, in the long term, both operating costs and database interrogation. This article analyses the opportunities and challenges related to the use of open and/or "big data", from the viewpoint of pharmacologists and representatives of the pharmaceutical and medical device industry. Copyright © 2016 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harwood, R.G.; Billington, C.J.; Buitrago, J.
1996-12-01
A Technical Core Group (TCG) was set up in March 1994 to review the design practice provisions for grouted pile to sleeve connections, mechanical connections and repairs as part of the international harmonization process for the new ISO Standard, ISO 13819-2, Petroleum and Natural Gas Industries--Offshore Structures, Part 2: Fixed Steel Structures. This paper provides an overview of the development of the proposed new design provisions for grouted connections including, the gathering and screening of the data, the evolution of the design formulae, and the evaluation of the resistance factor. Detailed comparisons of the new formulae with current design practice (API, HSE and DnV) are also included. In the development of the new provisions the TCG has been given access to the largest database ever assembled on this topic. This database includes all the major testing programs performed over the last 20 years, and recent UK and Norwegian research projects not previously reported. The limitations in the database are discussed and the areas where future research would be of benefit are highlighted.
Wang, Lei; Alpert, Kathryn I; Calhoun, Vince D; Cobia, Derin J; Keator, David B; King, Margaret D; Kogan, Alexandr; Landis, Drew; Tallis, Marcelo; Turner, Matthew D; Potkin, Steven G; Turner, Jessica A; Ambite, Jose Luis
2016-01-01
SchizConnect (www.schizconnect.org) is built to address the issues of multiple data repositories in schizophrenia neuroimaging studies. It includes a level of mediation--translating across data sources--so that the user can place one query, e.g. for diffusion images from male individuals with schizophrenia, and find out from across participating data sources how many datasets there are, as well as downloading the imaging and related data. The current version handles the Data Usage Agreements across different studies, as well as interpreting database-specific terminologies into a common framework. New data repositories can also be mediated to bring immediate access to existing datasets. Compared with centralized, upload data sharing models, SchizConnect is a unique, virtual database with a focus on schizophrenia and related disorders that can mediate live data as information is being updated at each data source. It is our hope that SchizConnect can facilitate testing new hypotheses through aggregated datasets, promoting discovery related to the mechanisms underlying schizophrenic dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
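A client-side CouchApp like this queries CouchDB design-document views over plain HTTP, with view keys JSON-encoded in the URL. A minimal sketch follows; the `by_gene` view name and the gene key are assumptions for illustration, not documented parts of RefPrimeCouch:

```python
# Sketch of building a CouchDB view query URL, as a one-page CouchApp client
# would. The database/design-document names follow the RefPrimeCouch URL
# pattern; the view name and key are hypothetical examples.
import json
from urllib.parse import quote

def view_url(base, db, ddoc, view, **params):
    """Build a CouchDB view query URL; CouchDB requires JSON-encoded keys."""
    qs = "&".join(f"{k}={quote(json.dumps(v))}" for k, v in params.items())
    return f"{base}/{db}/_design/{ddoc}/_view/{view}?{qs}"

url = view_url("http://localhost:5984", "rpc", "rpc", "by_gene",
               key="GAPDH", limit=10)
print(url)
```

Because views are reached with nothing more than HTTP GET and JSON, the "lightweight dynamic user interface" described above can run entirely in the browser against the same endpoint.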
A Concept for Continuous Monitoring that Reduces Redundancy in Information Assurance Processes
2011-09-01
System.out.println("Driver loaded");
String url = "jdbc:postgresql://localhost/IAcontrols";
String user = "postgres";
String pwd = "postgres";
Connection DB_mobile_conn = DriverManager.getConnection(url, user, pwd);
System.out.println("Database Connect ok");
Archetype relational mapping - a practical openEHR persistence solution.
Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong
2015-11-05
One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. 
The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
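The core ARM step, flattening an archetype or template into a relational table definition, might look roughly like this; the archetype structure and type mapping below are simplified inventions for illustration, not the paper's actual rule set:

```python
# Simplified sketch of archetype relational mapping (ARM): each archetype's
# leaf nodes become columns of one relational table. The node list and the
# openEHR-type-to-SQL-type map are invented, minimal examples.

TYPE_MAP = {"DV_TEXT": "VARCHAR(255)", "DV_QUANTITY": "REAL", "DV_DATE_TIME": "TIMESTAMP"}

BLOOD_PRESSURE = {
    "archetype": "openEHR-EHR-OBSERVATION.blood_pressure.v1",
    "nodes": [("systolic", "DV_QUANTITY"), ("diastolic", "DV_QUANTITY"),
              ("time", "DV_DATE_TIME"), ("position", "DV_TEXT")],
}

def archetype_to_ddl(archetype):
    """Derive a CREATE TABLE statement from an archetype's concept and nodes."""
    table = archetype["archetype"].split(".")[-2]   # e.g. 'blood_pressure'
    cols = ", ".join(f"{name} {TYPE_MAP[t]}" for name, t in archetype["nodes"])
    return f"CREATE TABLE {table} (ehr_id INTEGER, {cols});"

print(archetype_to_ddl(BLOOD_PRESSURE))
```

The appeal of this style of mapping, as the abstract argues, is that when data requirements change a new archetype simply yields a new table, rather than forcing a redesign of a conventional hand-built schema.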
Online with the world - International telecommunications connections (and how to make them)
NASA Technical Reports Server (NTRS)
Jack, Robert F.
1986-01-01
The intricacies involved in connecting to online services in Europe are discussed. A sample connection is presented. It is noted that European services usually carry single files on large databases; thus, good searchers can save some money by avoiding backfiles.
Forrest, Laura; Mitchell, Gillian; Thrupp, Letitia; Petelin, Lara; Richardson, Kate; Mascarenhas, Lyon; Young, Mary-Anne
2018-01-01
Clinical genetics units hold large amounts of information which could be utilised to benefit patients and their families. In Australia, a national research database, the Inherited Cancer Connect (ICCon) database, is being established that comprises clinical genetic data held for all carriers of mutations in cancer predisposition genes. Consumer input was sought to establish the acceptability of the inclusion of clinical genetic data into a research database. A qualitative approach using a modified nominal group technique was used to collect data through consumer forums conducted in three Australian states. Individuals who had previously received care from Familial Cancer Centres were invited to participate. Twenty-four consumers participated in three forums. Participants expressed positive attitudes about the establishment of the ICCon database, which were informed by the perceived benefits of the database including improved health outcomes for individuals with inherited cancer syndromes. Most participants were comfortable to waive consent for their clinical information to be included in the research database in a de-identified format. As major stakeholders, consumers have an integral role in contributing to the development and conduct of the ICCon database. As an initial step in the development of the ICCon database, the forums demonstrated consumers' acceptance of important aspects of the database including waiver of consent.
Lee, Young Han
2012-01-01
The objectives are (1) to introduce an easy open-source macro program as connection software and (2) to illustrate the practical usages in a radiologic reading environment by simulating the radiologic reading process. The simulation is a set of radiologic reading processes to do a practical task in the radiologic reading room. The principal processes are: (1) to view radiologic images on the Picture Archiving and Communication System (PACS), (2) to connect the HIS/EMR (Hospital Information System/Electronic Medical Record) system, (3) to make an automatic radiologic reporting system, and (4) to record and recall information of interesting cases. This simulation environment was designed by using an open-source macro program as connection software. The simulation performed well on the Windows-based PACS workstation. Radiologists practiced the steps of the simulation comfortably by utilizing the macro-powered radiologic environment. This macro program could automate several manual cumbersome steps in the radiologic reading process. This program successfully acts as connection software for the PACS software, EMR/HIS, spreadsheet, and other various input devices in the radiologic reading environment. A user-friendly efficient radiologic reading environment could be established by utilizing an open-source macro program as connection software. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Flat connections in open string mirror symmetry
NASA Astrophysics Data System (ADS)
Alim, Murad; Hecht, Michael; Jockers, Hans; Mayr, Peter; Mertens, Adrian; Soroush, Masoud
2012-06-01
We study a flat connection defined on the open-closed deformation space of open string mirror symmetry for type II compactifications on Calabi-Yau threefolds with D-branes. We use flatness and integrability conditions to define distinguished flat coordinates and the superpotential function at an arbitrary point in the open-closed deformation space. Integrability conditions are given for concrete deformation spaces with several closed and open string deformations. We study explicit examples for expansions around different limit points, including orbifold Gromov-Witten invariants, and brane configurations with several brane moduli. In particular, the latter case covers stacks of parallel branes with non-Abelian symmetry.
cMapper: gene-centric connectivity mapper for EBI-RDF platform.
Shoaib, Muhammad; Ansari, Adnan Ahmad; Ahn, Sung-Min
2017-01-15
In this era of biological big data, data integration has become a common task and a challenge for biologists. The Resource Description Framework (RDF) was developed to enable interoperability of heterogeneous datasets. The EBI-RDF platform enables an efficient data integration of six independent biological databases using RDF technologies and shared ontologies. However, to take advantage of this platform, biologists need to be familiar with RDF technologies and the SPARQL query language. To overcome this practical limitation of the EBI-RDF platform, we developed cMapper, a web-based tool that enables biologists to search the EBI-RDF databases in a gene-centric manner without a thorough knowledge of RDF and SPARQL. cMapper allows biologists to search data entities in the EBI-RDF platform that are connected to genes or small molecules of interest in multiple biological contexts. The input to cMapper consists of a set of genes or small molecules, and the output is the set of data entities in six independent EBI-RDF databases connected with the given genes or small molecules in the user's query. cMapper provides output to users in the form of a graph in which nodes represent data entities and the edges represent connections between data entities and the inputted set of genes or small molecules. Furthermore, users can apply filters based on database, taxonomy, organ and pathways in order to focus on a core connectivity graph of their interest. Data entities from multiple databases are differentiated based on background colors. cMapper also enables users to investigate shared connections between genes or small molecules of interest. Users can view the output graph on a web browser or download it in either GraphML or JSON formats. cMapper is available as a web application with an integrated MySQL database. The web application was developed using Java and deployed on a Tomcat server. We developed the user interface using HTML5, JQuery and the Cytoscape Graph API.
cMapper can be accessed at http://cmapper.ewostech.net. Readers can download the development manual from http://cmapper.ewostech.net/docs/cMapperDocumentation.pdf. Source code is available at https://github.com/muhammadshoaib/cmapper. Contact: smahn@gachon.ac.kr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
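The gene-centric node/edge output described above can be sketched as a small graph builder; the entity identifiers and field names here are illustrative assumptions, not cMapper's actual schema:

```python
# Hedged sketch of the kind of graph cMapper returns: an input gene is linked
# to data entities from several databases, and each entity node records its
# source database so nodes can be colored or filtered by origin.
import json

def build_graph(gene, connections):
    """connections: list of (entity_id, database) tuples found for the gene."""
    nodes = [{"id": gene, "type": "gene"}]
    edges = []
    for entity, db in connections:
        nodes.append({"id": entity, "type": "entity", "database": db})
        edges.append({"source": gene, "target": entity})
    return {"nodes": nodes, "edges": edges}

g = build_graph("TP53", [("CHEMBL941", "ChEMBL"), ("ENSG00000141510", "Ensembl")])
print(json.dumps(g, indent=2))
```

A JSON structure of this shape maps directly onto graph renderers such as the Cytoscape API mentioned in the abstract, and onto the GraphML/JSON download formats it offers.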
myChEMBL: a virtual machine implementation of open data and cheminformatics tools.
Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P
2014-01-15
myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
NASA Technical Reports Server (NTRS)
Pryor, Richard Lee (Inventor)
1977-01-01
A line driver including a pair of complementary transistors having their conduction paths serially connected between an operating and a reference potential and their bases connected through a first switch to a signal input terminal. A second switch is connected between the common base connection and the common connection of the conduction paths. With the second switch open and the first closed, an output voltage, responsive to the input signal, corresponding to first or second binary values is obtained. When the second switch is closed and the first opened, the transistor pair is turned off, disconnecting the line driver from its load, thereby providing tri-state logic operation.
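The two-switch behavior can be summarized as a small truth table; this is a logic-level sketch of the states described in the abstract, not a circuit model:

```python
# Logic-level model of the tri-state line driver: with the base switch (S1)
# closed and the shorting switch (S2) open, the output follows the binary
# input; with S2 closed and S1 open, the transistor pair is off and the
# output is high-impedance ('Z'), disconnecting the driver from its load.

def line_driver(s1_closed, s2_closed, data):
    if s1_closed and not s2_closed:
        return data            # drive the line with the input value (0 or 1)
    if s2_closed and not s1_closed:
        return "Z"             # transistor pair off: high-impedance output
    raise ValueError("switch state not defined by the design")

print(line_driver(True, False, 1))   # drives logic 1 onto the line
print(line_driver(False, True, 1))   # 'Z': driver disconnected from the load
```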
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium" (VAMDC Consortium, http://www.vamdc.eu) is a consortium bound by a Memorandum of Understanding aiming at ensuring the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure inter-connects about 30 atomic and molecular databases, with the number of connected databases increasing every year: some are well-known databases such as CDMS, JPL, HITRAN, VALD, ..., while others have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved, and visualized in a single format from a general portal (http://portal.vamdc.eu), and VAMDC is also developing standalone tools to retrieve and handle the data. VAMDC provides software and support for including databases within the VAMDC e-infrastructure. One current feature of VAMDC is a constrained environment for describing data, which ensures a higher quality for distribution of data; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e-infrastructure with its underlying technology, its services, its science use cases and its extension towards communities beyond the academic research community.
Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy
2005-11-03
maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture; including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes contents of maxdLoad2 databases accessible via web-browser, the command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web server and presents multiple interfaces to the contents of maxd databases.
maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for use embedded in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for annotation of microarray experiments and a convenient dissemination environment. They are available for download and open sourced under the Artistic License.
The International Space Station Comparative Maintenance Analysis(CMAM)
2004-09-01
External Component • Entire ORU Database 2. Database Connectivity: The CMAM ORU database consists of three tables: an ORU master parts list, an ISS...Flight table, and an ISS Subsystem table. The ORU master parts list and the ISS Flight table can be updated or modified from the CMAM user interface
1991-06-24
Gross Industrial Output in April [CEI Database] ... Jan-Apr Statistics on Payments to Employees [CEI Database] ... Jan-Apr Statistics on Labor Productivity [CEI Database] ... TRANSPORTATION: Hebei Province Opens Two Air Routes [HEBEI
Faculty Views of Open Web Resource Use by College Students
ERIC Educational Resources Information Center
Tomaiuolo, Nicholas G.
2005-01-01
This article assesses both the extent of students' use of open Web resources and library subscription databases and professors' satisfaction with that use as reported by a survey of 120 community college and university English faculty. It was concluded that although library budgets allocate significant funds to offer subscription databases,…
Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.
Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin
2017-05-15
The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. Models in Translational Oncology (MiTO) database was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms, models of transplantation, chemical/physical induction, or spontaneous development, reviewed here. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines including tool strains can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to result in changes of the spectrum of model organisms, a reason to open MiTO for species-independent data. Registered users may deposit their own model fact sheets (FS). MiTO experts check them for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, noneditable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 American Association for Cancer Research.
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second-generation natural language front-end for databases and is intended to solve two critical interface problems between end-users and databases: connectivity and communication. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
Pesälä, Samuli; Virtanen, Mikko J; Sane, Jussi; Mustonen, Pekka; Kaila, Minna; Helve, Otto
2017-11-06
People using the Internet to find information on health issues, such as specific diseases, usually start their search from a general search engine, for example, Google. Internet searches such as these may yield results and data of questionable quality and reliability. Health Library is a free-of-charge medical portal on the Internet providing medical information for the general public. Physician's Databases, an Internet evidence-based medicine source, provides medical information for health care professionals (HCPs) to support their clinical practice. Both databases are available throughout Finland, but the latter is used only by health professionals and pharmacies. Little is known about how the general public seeks medical information from medical sources on the Internet, how this behavior differs from HCPs' queries, and what causes possible differences in behavior. The aim of our study was to evaluate how the general public's and HCPs' information-seeking trends from Internet medical databases differ seasonally and temporally. In addition, we aimed to evaluate whether the general public's information-seeking trends could be utilized for disease surveillance and whether media coverage could affect these seeking trends. Lyme disease, serving as a well-defined disease model with distinct seasonal variation, was chosen as a case study. Two Internet medical databases, Health Library and Physician's Databases, were used. We compared the general public's article openings on Lyme disease from Health Library to HCPs' article openings on Lyme disease from Physician's Databases seasonally across Finland from 2011 to 2015. Additionally, media publications related to Lyme disease were searched from the largest and most popular media websites in Finland. Both databases, Health Library and Physician's Databases, show visually similar patterns in temporal variations of article openings on Lyme disease in Finland from 2011 to 2015. 
However, Health Library openings show not only an increasing trend over time but also greater fluctuations, especially during peak opening seasons. Outside these seasons, publications in the media coincide with Health Library article openings only occasionally. Lyme disease-related information-seeking behaviors between the general public and HCPs from Internet medical portals share similar temporal variations, which is consistent with the trend seen in epidemiological data. Therefore, the general public's article openings could be used as a supplementary source of information for disease surveillance. The fluctuations in article openings appeared stronger among the general public, thus suggesting that different factors, such as media coverage, affect the information-seeking behaviors of the public versus professionals. However, media coverage may also have an influence on HCPs. Not every publication was associated with an increase in openings, but the higher the media coverage by some publications, the higher the general public's access to Health Library. ©Samuli Pesälä, Mikko J Virtanen, Jussi Sane, Pekka Mustonen, Minna Kaila, Otto Helve. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 06.11.2017.
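The seasonal comparison described here amounts to aggregating article-opening counts by calendar month for each source so the two seasonal shapes can be compared; a minimal sketch with invented counts:

```python
# Sketch of the seasonal-profile comparison: monthly article-opening counts
# are averaged by month-of-year. The counts below are invented illustrative
# data, not figures from Health Library or Physician's Databases.
from collections import defaultdict

def seasonal_profile(openings):
    """openings: list of (year, month, count) -> mean count per calendar month."""
    by_month = defaultdict(list)
    for _, month, count in openings:
        by_month[month].append(count)
    return {m: sum(v) / len(v) for m, v in sorted(by_month.items())}

public = [(2014, 6, 900), (2014, 12, 200), (2015, 6, 1100), (2015, 12, 240)]
print(seasonal_profile(public))  # summer peak, consistent with tick season
```

Computing the same profile for both user groups, as the study does across 2011 to 2015, makes the shared seasonal pattern and the stronger public-side fluctuations directly visible.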
Working with Specify in a Paleo-Geological Context
NASA Astrophysics Data System (ADS)
Molineux, A.; Thompson, A. C.; Appleton, L.
2014-12-01
For geological collections with limited funding, an open source relational database provides an opportunity to digitize specimens and related data. At the Non-vertebrate Paleontology Lab, a large mixed paleo and geological repository on a restricted budget, we opted for one such database, Specify. Initially created at the University of Kansas for neontological collections and based on a single computer, Specify has moved into the networked scene and will soon be web-based as Specify 7. We currently use the server version of Specify 6, networked to all computers in the lab each running a desktop client, often with six users at any one time. Along with improved access there have been great efforts to broaden the applicability of this database to other disciplines. Current developments are of great importance to us because they focus on the geological aspects of lithostratigraphy and chronostratigraphy and their relationship to other variables. Adoption of this software has required constant change as we move to take advantage of the great improvements. We enjoy the interaction with the developers and their willingness to listen and consider our issues. Here we discuss some of the ways in which we have fashioned Specify into a database that provides us with the flexibility that we need without removing the ability to share our data with other aggregators through accepted protocols. We discuss the customization of forms, the attachment of media and tracking of original media files, our efforts to incorporate geological specimens, and our plans to link the individual specimen record GUIDs to IGSN numbers and thence to future connections to data derived from our specimens.
Liljekvist, Mads Svane; Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
2015-01-01
Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases' criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ's coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ were gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more database, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received impact factor for 2012 compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases.
However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general.
Two terminal micropower radar sensor
McEwan, Thomas E.
1995-01-01
A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground.
Two terminal micropower radar sensor
McEwan, T.E.
1995-11-07
A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground. 3 figs.
ERIC Educational Resources Information Center
Williams, Martha E.
1982-01-01
Provides update to 13-year analysis of finances of major database producer noting actions taken to improve finances (decrease expenses, increase efficiency, develop new products, market strategies and services, change pricing scheme, omit print products, increase prices) and consequences of actions (revenue increase, connect hour increase). Five…
Tao of Gateway: Providing Internet Access to Licensed Databases.
ERIC Educational Resources Information Center
McClellan, Gregory A.; Garrison, William V.
1997-01-01
Illustrates an approach for providing networked access to licensed databases over the Internet by positioning the library between patron and vendor. Describes how the gateway systems and database connection servers work and discusses how treatment of security has evolved with the introduction of the World Wide Web. Outlines plans to reimplement…
Massive Scale Cyber Traffic Analysis: A Driver for Graph Database Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Choudhury, S.; Haglin, David J.
2013-06-19
We describe the significance and prominence of network traffic analysis (TA) as a graph- and network-theoretical domain for advancing research in graph database systems. TA involves observing and analyzing the connections between clients, servers, hosts, and actors within IP networks, both at particular times and as extended over time. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. IPFLOW databases are routinely interrogated statistically and visualized for suspicious patterns. But the ability to cast IPFLOW data as a massive graph and query it interactively, in order to e.g. identify connectivity patterns, is less well advanced, due to a number of factors including scaling, and their hybrid nature combining graph connectivity and quantitative attributes. In this paper, we outline requirements and opportunities for graph-structured IPFLOW analytics based on our experience with real IPFLOW databases. Specifically, we describe real use cases from the security domain, cast them as graph patterns, show how to express them in two graph-oriented query languages, SPARQL and Datalog, and use these examples to motivate a new class of "hybrid" graph-relational systems.
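The graph-pattern queries the abstract mentions (expressed there in SPARQL and Datalog) can be sketched over plain flow tuples. The record layout, field names, and fan-out threshold below are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, dst_port, bytes).
FLOWS = [
    ("10.0.0.5", "192.168.1.1", 22, 900),
    ("10.0.0.5", "192.168.1.2", 22, 700),
    ("10.0.0.5", "192.168.1.3", 22, 650),
    ("10.0.0.9", "192.168.1.1", 80, 5000),
]

def fan_out_sources(flows, port, min_targets):
    """Return sources contacting at least `min_targets` distinct hosts on `port`."""
    targets = defaultdict(set)
    for src, dst, dst_port, _ in flows:
        if dst_port == port:
            targets[src].add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= min_targets}

# A scan-like fan-out pattern: one source, many SSH targets.
print(fan_out_sources(FLOWS, 22, 3))
```

The same pattern is a one-line conjunctive rule in Datalog or a basic graph pattern in SPARQL; the point of the paper is that at IPFLOW scale, such queries stress both the graph and the relational sides of a database.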
Bluetooth wireless database for scoliosis clinics.
Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K
2003-05-01
A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.
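The synchronisation step described above (folding measurements entered on the PDA back into the patient's database profile) can be sketched as a record merge. All field names here are hypothetical, since the abstract does not specify the schema:

```python
def sync_profile(db_profile, pda_updates):
    """Merge examination data entered on the PDA back into a copy of the
    database record, leaving the original untouched."""
    merged = dict(db_profile)
    merged["visits"] = db_profile.get("visits", []) + pda_updates.get("new_visits", [])
    return merged

# Illustrative patient record and one new clinic visit from the PDA.
profile = {"id": 42, "visits": [{"date": "2003-01-10", "cobb_angle": 21}]}
updates = {"new_visits": [{"date": "2003-05-01", "cobb_angle": 24}]}
print(len(sync_profile(profile, updates)["visits"]))  # 2
```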
OpenID connect as a security service in Cloud-based diagnostic imaging systems
NASA Astrophysics Data System (ADS)
Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter
2015-03-01
The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to adoption of cloud computing in healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. REST is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards, with the potential to become the de facto standard for securing cloud computing and mobile applications; it has been called the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository), heterogeneous PACS (Picture Archiving and Communication Systems), and mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying a DI-r and PACS to private or community clouds can attain a security level equivalent to the traditional computing model.
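Two of the OpenID Connect pieces involved can be sketched in a few lines: building a standard authorization-code request, and the minimal ID-token claim checks (issuer, audience, expiry). The endpoint path and client name are placeholders; a real deployment would use the provider's discovery document and verify the token signature:

```python
import time
from urllib.parse import urlencode

def authorization_url(issuer, client_id, redirect_uri, state):
    """Build a standard OIDC authorization-code request URL."""
    params = {
        "response_type": "code",
        "scope": "openid",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return issuer + "/authorize?" + urlencode(params)

def claims_acceptable(claims, issuer, client_id, now=None):
    """Minimal ID-token claim checks: issuer, audience, expiry."""
    now = time.time() if now is None else now
    return (claims.get("iss") == issuer
            and claims.get("aud") == client_id
            and claims.get("exp", 0) > now)

print(authorization_url("https://idp.example", "di-r", "https://app.example/cb", "xyz"))
```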
eButterfly: Leveraging Massive Online Citizen Science for Butterfly Conservation
Prudic, Kathleen L.; McFarland, Kent P.; Oliver, Jeffrey C.; Hutchinson, Rebecca A.; Long, Elizabeth C.; Kerr, Jeremy T.; Larrivée, Maxim
2017-01-01
Data collection, storage, analysis, visualization, and dissemination are changing rapidly due to advances in new technologies driven by computer science and universal access to the internet. These technologies and web connections place human observers front and center in citizen science-driven research and are critical in generating new discoveries and innovation in such fields as astronomy, biodiversity, and meteorology. Research projects utilizing a citizen science approach address scientific problems at regional, continental, and even global scales otherwise impossible for a single lab or even a small collection of academic researchers. Here we describe eButterfly, an integrative checklist-based butterfly monitoring and database web-platform that leverages the skills and knowledge of recreational butterfly enthusiasts to create a globally accessible unified database of butterfly observations across North America. Citizen scientists, conservationists, policy makers, and scientists are using eButterfly data to better understand the biological patterns of butterfly species diversity and how environmental conditions shape these patterns in space and time. eButterfly, in collaboration with thousands of butterfly enthusiasts, has created a near real-time butterfly data resource producing tens of thousands of observations per year, open to all to share and explore. PMID:28524117
Design of two-photon molecular tandem architectures for solar cells by ab initio theory
Ornso, Kristian B.; Garcia-Lastra, Juan M.; De La Torre, Gema; ...
2015-03-04
An extensive database of spectroscopic properties of molecules from ab initio calculations is used to design molecular complexes for use in tandem solar cells that convert two photons into a single electron–hole pair, thereby increasing the output voltage while covering a wider spectral range. Three different architectures are considered: the first two involve a complex consisting of two dye molecules with appropriately matched frontier orbitals, connected by a molecular diode. Optimized combinations of dye molecules are determined by taking advantage of our computational database of the structural and energetic properties of several thousand porphyrin dyes. The third design is a molecular analogy of the intermediate band solar cell, and involves a single dye molecule with strong intersystem crossing to ensure a long lifetime of the intermediate state. Based on the calculated energy levels and molecular orbitals, energy diagrams are presented for the individual steps in the operation of such tandem solar cells. We find that theoretical open circuit voltages of up to 1.8 V can be achieved using these tandem designs. Questions about the practical implementation of prototypical devices, such as the synthesis of the tandem molecules and potential loss mechanisms, are addressed.
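The database-driven screening for matched frontier orbitals can be illustrated with a toy filter. The energy values and the single "downhill electron transfer" rule below are invented for illustration; the paper's actual matching criteria are more involved:

```python
# Toy dye records with frontier-orbital energies in eV (invented values).
DYES = [
    {"name": "dyeA", "homo": -5.4, "lumo": -3.1},
    {"name": "dyeB", "homo": -5.0, "lumo": -2.6},
    {"name": "dyeC", "homo": -5.6, "lumo": -3.5},
]

def downhill_pairs(dyes, min_offset=0.2):
    """Pairs (d1, d2) where d1's LUMO sits at least `min_offset` eV above
    d2's LUMO, i.e. electron transfer from d1 to d2 is energetically downhill."""
    return [(a["name"], b["name"])
            for a in dyes for b in dyes
            if a is not b and a["lumo"] - b["lumo"] >= min_offset]

print(downhill_pairs(DYES))
```

A real screen over several thousand porphyrins would apply the same idea as a database query rather than a nested loop.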
eButterfly: Leveraging Massive Online Citizen Science for Butterfly Conservation.
Prudic, Kathleen L; McFarland, Kent P; Oliver, Jeffrey C; Hutchinson, Rebecca A; Long, Elizabeth C; Kerr, Jeremy T; Larrivée, Maxim
2017-05-18
Data collection, storage, analysis, visualization, and dissemination are changing rapidly due to advances in new technologies driven by computer science and universal access to the internet. These technologies and web connections place human observers front and center in citizen science-driven research and are critical in generating new discoveries and innovation in such fields as astronomy, biodiversity, and meteorology. Research projects utilizing a citizen science approach address scientific problems at regional, continental, and even global scales otherwise impossible for a single lab or even a small collection of academic researchers. Here we describe eButterfly, an integrative checklist-based butterfly monitoring and database web-platform that leverages the skills and knowledge of recreational butterfly enthusiasts to create a globally accessible unified database of butterfly observations across North America. Citizen scientists, conservationists, policy makers, and scientists are using eButterfly data to better understand the biological patterns of butterfly species diversity and how environmental conditions shape these patterns in space and time. eButterfly, in collaboration with thousands of butterfly enthusiasts, has created a near real-time butterfly data resource producing tens of thousands of observations per year, open to all to share and explore.
Low Cost Real Time Autonomous Remote Monitoring Platform
NASA Astrophysics Data System (ADS)
Rodríguez, J. R.; Maldonado, P. M.; Pierson, J. J.; Harris, L.
2016-02-01
Environmental scientists have a need for gathering multiple parameters during specific time periods to answer their research questions. Most available monitoring systems are expensive and closed, which limits the potential to scale up research projects. We developed a low cost, autonomous, real-time monitoring platform that is both open hardware/software and easy to build, deploy, manage and maintain. The hardware is built with off-the-shelf components and a credit card sized computer called Raspberry Pi, running an open source operating system (Raspbian). The system runs off a set of batteries and a solar panel, which makes it ideal for remote locations. The software is divided into three parts: 1) a framework for abstracting the sensors (initializing, polling and communications) written in Python using a fully object-oriented design, making it easy for new sensors to be added with minimal code changes, 2) a web front end for managing the entire system, 3) a data store (database) framework for local and remote data retrieval and reporting services. Connectivity to the system can be accomplished through a Wi-Fi or cellular Internet connection. Scientists are being forced to do more with less; in response, our platform provides a flexible system that can improve the process of data gathering with an accessible, modular, low-cost, and efficient monitoring system. Currently, we have the required permits from the Department of Natural Resources in Puerto Rico to deploy the platform at the Laguna Grande Bioluminescence Lagoon in Fajardo, PR. This station will include probes for pH, DO, conductivity and water temperature.
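The sensor-abstraction framework described in part 1) might look like the following minimal sketch; the class and method names are our own guesses, not the project's actual API:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Common interface so the platform can poll every probe uniformly."""
    name = "sensor"

    @abstractmethod
    def read(self):
        """Return the current measurement."""

class TemperatureProbe(Sensor):
    name = "water_temperature"

    def read(self):
        # A real driver would talk to the hardware here.
        return 26.4

def poll_all(sensors):
    """One polling pass over all registered sensors."""
    return {s.name: s.read() for s in sensors}

print(poll_all([TemperatureProbe()]))
```

Adding a new probe (pH, DO, conductivity) then amounts to one new subclass, which matches the "minimal code changes" claim.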
A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools
2015-07-14
computer that establishes an encrypted Virtual Private Network (OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a...security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, Apple OSX...platform is granted by an encrypted connection based on the Secure Socket Layer (SSL) protocol, and implemented in the OpenVPN Virtual Private Network
A Web-based open-source database for the distribution of hyperspectral signatures
NASA Astrophysics Data System (ADS)
Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun
2006-10-01
With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without properly organised metadata, the signatures themselves are of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software and can be used by the hyperspectral community to share their data. Data are uploaded through a simple web-based interface. The database recognizes major file formats from ASD, GER and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications to the database through the online forums on the same website.
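A minimal sketch of such a signature-plus-metadata store, using SQLite; the table and column names are illustrative, as the abstract does not describe the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE signature (
        id INTEGER PRIMARY KEY,
        instrument TEXT NOT NULL,   -- e.g. ASD, GER, International Spectronics
        target TEXT NOT NULL,       -- what was measured
        acquired TEXT NOT NULL      -- ISO date
    );
    CREATE TABLE band (
        signature_id INTEGER REFERENCES signature(id),
        wavelength_nm REAL NOT NULL,
        reflectance REAL NOT NULL
    );
""")
conn.execute("INSERT INTO signature VALUES (1, 'ASD', 'eucalyptus canopy', '2006-03-14')")
conn.executemany("INSERT INTO band VALUES (1, ?, ?)", [(400.0, 0.031), (401.0, 0.032)])
n, = conn.execute("SELECT COUNT(*) FROM band WHERE signature_id = 1").fetchone()
print(n)  # 2
```

Making the metadata columns NOT NULL is one way to enforce the paper's point that a signature without metadata is of limited use.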
Lazarou, Stavros; Vita, Vasiliki; Ekonomou, Lambros
2018-02-01
The data of this article represent a real electricity distribution network operating at twenty kilovolts (20 kV), the medium voltage level of the Hellenic electricity distribution system [1]. This network has been chosen as suitable for smart grid analysis. It has moderate penetration of renewable sources and, at times, supports reverse power flows. It is suitable for studies of load aggregation, storage, and demand response. It represents a rural line of fifty-five kilometres (55 km) total length, typical for this type. It serves forty-five (45) medium-to-low-voltage transformers and twenty-four (24) connections to photovoltaic plants. The total installed load capacity is twelve mega-volt-amperes (12 MVA); however, the maximum observed load is lower. The data are ready for load flow simulation in Matpower [2] for the maximum observed load with renewable production at half capacity. The simulation results and the processed data for creating the source code are also provided in the database available at http://dx.doi.org/10.7910/DVN/1I6MKU.
Diverter/BOP system and method for a bottom supported offshore drilling rig
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roche, J.R.; Alexander, G.G.; Carbaugh, W.L.
1986-07-01
A system is described adapted for alternative use as a diverter or a blowout preventer for a bottom supported drilling rig and adapted for connection to a permanent housing attached to rig structural members beneath a drilling rig rotary table, the permanent housing having an outlet connectable to a rig fluid system flow line. The system consists of: a fluid flow controller having a controller housing with a lower cylindrical opening and an upper cylindrical opening and a vertical path therebetween and a first outlet passage and a second outlet passage provided in its wall, a packing element disposed within the controller housing, and annular piston means adapted for moving from a first position to a second position, whereby in the first position the piston means wall prevents interior fluid from communicating with the outlet passages in the controller housing wall and in the second position the piston means wall allows fluid communication of interior fluid with the outlet passages and urges the annular packing element to close about an object extending through the bore of the controller housing or to close the vertical flow path through the controller housing in the absence of any object in the vertical flow path, means for connecting a vent line to the outlet passage provided in the controller housing wall, a lower telescoping spool having a lower joining means at its lower end for joining alternatively to structural casing or to a mandrel connected to a conductor string cemented within the structural casing and an upper connection means at its upper end for connection to the lower cylindrical opening of the fluid flow controller, and an upper telescoping spool having a lower connection means for connection to the upper cylindrical opening of the fluid flow controller.
ERIC Educational Resources Information Center
West-Puckett, Stephanie; Smith, Anna; Cantrill, Christina; Zamora, Mia
2018-01-01
To better understand the impacts of participatory design in English language arts teacher education, this critical case study focuses on the National Writing Project's Connected Learning Massive, Open, Online Collaboration (CLMOOC) that engaged educators in playing with the connected learning framework. The authors draw from 5 years of interaction…
49 CFR 177.837 - Class 3 materials.
Code of Federal Regulations, 2014 CFR
2014-10-01
... through an open filling hole, one end of a bond wire shall be connected to the stationary system piping or... interconnected.) This connection must be made before any filling hole is opened, and must remain in place until after the last filling hole has been closed. Additional bond wires are not needed around All-Metal...
49 CFR 177.837 - Class 3 materials.
Code of Federal Regulations, 2012 CFR
2012-10-01
... through an open filling hole, one end of a bond wire shall be connected to the stationary system piping or... interconnected.) This connection must be made before any filling hole is opened, and must remain in place until after the last filling hole has been closed. Additional bond wires are not needed around All-Metal...
49 CFR 177.837 - Class 3 materials.
Code of Federal Regulations, 2013 CFR
2013-10-01
... through an open filling hole, one end of a bond wire shall be connected to the stationary system piping or... interconnected.) This connection must be made before any filling hole is opened, and must remain in place until after the last filling hole has been closed. Additional bond wires are not needed around All-Metal...
NASA Astrophysics Data System (ADS)
Das, I.; Oberai, K.; Sarathi Roy, P.
2012-07-01
Landslides manifest themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spreading and reach of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free & Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets along with the landslide susceptibility map were served through a WebGIS client interface. The open source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the back end for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings the understanding of landslides and the resulting damage closer to the affected people and user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.
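An OGC-compliant client retrieves the susceptibility layer with a standard WFS GetFeature request; the base URL and layer name below are placeholders, not the project's actual endpoint:

```python
from urllib.parse import urlencode

def wfs_getfeature(base_url, type_name, max_features=100):
    """Build a WFS 1.1.0 GetFeature request URL (key-value-pair encoding)."""
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.1.0",
        "REQUEST": "GetFeature",
        "TYPENAME": type_name,
        "MAXFEATURES": str(max_features),
    }
    return base_url + "?" + urlencode(params)

print(wfs_getfeature("https://example.org/cgi-bin/mapserv", "landslide_susceptibility"))
```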
NASA Astrophysics Data System (ADS)
Jacobsen, Jurma; Edlich, Stefan
2009-02-01
There is a broad range of potentially useful mobile location-based applications. One crucial point is to make them available to the public at large. This case illuminates the ability of Android - the open source operating system for mobile devices - to meet this demand in the mashup style, using special geocoding web services and one integrated web service that returns data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open source Android OS is assumed to spread widely. 2. "Everyone" also means that the handset need not be an expensive GPS device. This is realized by re-using the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-accessible CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open source counterpart OpenCellID are used. Localizing the user's current position by looking up the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears to be sufficient for many applications. For this reason the GPS user is the most pleased one - for this user the system is fully automated. Users without a GPS handset should instead refine their location with one click on the map inside the determined circular region. Users are then shown and guided along a path to the nearest cash machine by integrating the Google Maps API with an overlay. Additionally, the GPS user can keep track of him- or herself via a frequently updated view based on constantly requested precise GPS positions.
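The Cell of Origin lookup described above reduces to matching the serving cell's identifiers against a cell database snapshot (such as OpenCellID); the identifiers, coordinates, and radius below are made up for illustration:

```python
# Local snapshot of a cell database: (mcc, mnc, lac, cid) -> (lat, lon, radius_m).
# All values here are invented.
CELL_DB = {
    (262, 2, 4321, 98765): (52.5200, 13.4050, 900),
}

def locate(mcc, mnc, lac, cid, db=CELL_DB):
    """Return (lat, lon, radius_m) for the serving cell, or None if unknown."""
    return db.get((mcc, mnc, lac, cid))

# The returned radius defines the circular region inside which a user
# without GPS can refine the position with one click on the map.
print(locate(262, 2, 4321, 98765))
```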
Preliminary geologic map of the Piru 7.5' quadrangle, southern California: a digital database
Yerkes, R.F.; Campbell, Russell H.
1995-01-01
This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1995). More specific information about the units may be available in the original sources.
Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido
2017-01-01
Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.
SciELO, Scientific Electronic Library Online, a Database of Open Access Journals
ERIC Educational Resources Information Center
Meneghini, Rogerio
2013-01-01
This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals providing open access to full text and table sets of scientometrics data. In Brazil it is responsible for a collection of nearly 300 journals, selected along 15 years as the best Brazilian periodicals in natural and social sciences.…
ReprDB and panDB: minimalist databases with maximal microbial representation.
Zhou, Wei; Gay, Nicole; Oh, Julia
2018-01-18
Profiling of shotgun metagenomic samples is hindered by a lack of unified microbial reference genome databases that (i) assemble genomic information from all open access microbial genomes, (ii) have relatively small sizes, and (iii) are compatible with various metagenomic read mapping tools. Moreover, computational tools to rapidly compile and update such databases to accommodate the rapid increase in new reference genomes do not exist. As a result, database-guided analyses often fail to profile a substantial fraction of metagenomic shotgun sequencing reads from complex microbiomes. We report pipelines that efficiently traverse all open access microbial genomes and assemble non-redundant genomic information. The pipelines result in two species-resolution microbial reference databases of relatively small size: reprDB, which assembles microbial representative or reference genomes, and panDB, for which we developed a novel iterative alignment algorithm to identify and assemble non-redundant genomic regions in multiple sequenced strains. With the databases, we managed to assign taxonomic labels and genome positions to the majority of metagenomic reads from human skin and gut microbiomes, demonstrating a significant improvement over a previous database-guided analysis on the same datasets. reprDB and panDB leverage the rapid increases in the number of open access microbial genomes to more fully profile metagenomic samples. Additionally, the databases exclude redundant sequence information to avoid inflated storage or memory space and indexing or analyzing time. Finally, the novel iterative alignment algorithm significantly increases efficiency in pan-genome identification and can be useful in comparative genomic analyses.
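The idea of keeping only non-redundant genomic regions can be illustrated with a toy interval subtraction: given the regions of a new strain already covered by the database, retain only the novel segments. This simplification is not the paper's actual iterative alignment algorithm:

```python
def novel_segments(genome_len, covered):
    """Subtract merged `covered` [start, end) intervals from [0, genome_len),
    returning the segments not yet represented in the database."""
    segs, pos = [], 0
    for start, end in sorted(covered):
        if start > pos:
            segs.append((pos, min(start, genome_len)))
        pos = max(pos, end)
    if pos < genome_len:
        segs.append((pos, genome_len))
    return segs

# A 100 bp toy genome with overlapping already-covered alignments.
print(novel_segments(100, [(10, 40), (30, 60), (90, 100)]))  # [(0, 10), (60, 90)]
```

Iterating this over each newly sequenced strain is what keeps the pan-genome database small while remaining species-complete.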
76 FR 42677 - Notice of Intent To Seek Approval To Collect Information
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... and maintains an on-line recipe database, the Recipe Finder, as a popular feature to the SNAP-Ed Connection Web site. The purpose of the Recipe Finder database is to provide SNAP-Ed providers with low-cost... inclusion in the database. SNAP-Ed staff and providers benefit from collecting and posting feedback on...
Andresen, Kristoffer; Pommergaard, Hans-Christian; Rosenberg, Jacob
2015-01-01
Background. Open access (OA) journals allow access to research papers free of charge to the reader. Traditionally, biomedical researchers use databases like MEDLINE and EMBASE to discover new advances. However, biomedical OA journals might not fulfill such databases’ criteria, hindering dissemination. The Directory of Open Access Journals (DOAJ) is a database exclusively listing OA journals. The aim of this study was to investigate DOAJ’s coverage of biomedical OA journals compared with the conventional biomedical databases. Methods. Information on all journals listed in four conventional biomedical databases (MEDLINE, PubMed Central, EMBASE and SCOPUS) and DOAJ was gathered. Journals were included if they were (1) actively publishing, (2) full OA, (3) prospectively indexed in one or more databases, and (4) of biomedical subject. Impact factor and journal language were also collected. DOAJ was compared with conventional databases regarding the proportion of journals covered, along with their impact factor and publishing language. The proportion of journals with articles indexed by DOAJ was determined. Results. In total, 3,236 biomedical OA journals were included in the study. Of the included journals, 86.7% were listed in DOAJ. Combined, the conventional biomedical databases listed 75.0% of the journals; 18.7% in MEDLINE; 36.5% in PubMed Central; 51.5% in SCOPUS and 50.6% in EMBASE. Of the journals in DOAJ, 88.7% published in English and 20.6% had received an impact factor for 2012, compared with 93.5% and 26.0%, respectively, for journals in the conventional biomedical databases. A subset of 51.1% and 48.5% of the journals in DOAJ had articles indexed from 2012 and 2013, respectively. Of journals exclusively listed in DOAJ, one journal had received an impact factor for 2012, and 59.6% of the journals had no content from 2013 indexed in DOAJ. Conclusions. DOAJ is the most complete registry of biomedical OA journals compared with five conventional biomedical databases.
However, DOAJ only indexes articles for half of the biomedical journals listed, making it an incomplete source for biomedical research papers in general. PMID:26038727
MyMolDB: a micromolecular database solution with open source and free components.
Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan
2011-10-01
Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed source applications. The open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Thus, impressive searching speed has been achieved on large data sets, since no external CPU-consuming languages are involved in the key procedure of the search. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
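Similarity search of the kind MyMolDB offers typically ranks molecular fingerprint bit vectors by Tanimoto coefficient; the abstract does not give the exact implementation, so the following only illustrates the metric, with fingerprints modeled as plain Python ints:

```python
def tanimoto(fp_a: int, fp_b: int) -> float:
    """Tanimoto similarity of two fingerprint bitmasks:
    shared set bits divided by the union of set bits."""
    common = bin(fp_a & fp_b).count("1")
    union = bin(fp_a | fp_b).count("1")
    return common / union if union else 0.0

# Two toy 6-bit fingerprints sharing 3 of 5 distinct bits.
print(tanimoto(0b101101, 0b100111))  # 0.6
```

Expressing the same bit-counting in SQL is how such a search can run entirely inside the database engine, which matches the speed argument above.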
VIVO Open Source Software: Connecting Facilities to Promote Discovery and Further Research.
NASA Astrophysics Data System (ADS)
Gross, M. B.; Rowan, L. R.; Mayernik, M. S.; Daniels, M. D.; Stott, D.; Allison, J.; Maull, K. E.; Krafft, D. B.; Khan, H.
2016-12-01
EarthCollab (http://earthcube.org/group/earthcollab), a National Science Foundation (NSF) EarthCube Building Block project, has adapted an open source semantic web application, VIVO, for use within the earth science domain. EarthCollab is a partnership between UNAVCO, an NSF facility supporting research through geodetic services, the Earth Observing Laboratory (EOL) at the National Center for Atmospheric Research (NCAR), and Cornell University, where VIVO was created to highlight the scholarly output of researchers at universities. Two public sites have been released: Connect UNAVCO (connect.unavco.org) and Arctic Data Connects (vivo.eol.ucar.edu). The core VIVO software and ontology have been extended to work better with concepts necessary for capturing work within UNAVCO's and EOL's purview, such as principal investigators for continuous GPS/GNSS stations at UNAVCO and keywords describing cruise datasets at EOL. The sites increase discoverability of large and diverse data archives by linking data with people, research, and field projects. Disambiguation is a major challenge when using VIVO and open data, where "anyone can say anything about anything." Concepts and controlled vocabularies help to build consistent and easily searchable connections within VIVO. We use aspects of subject heading services such as FAST and LOC, as well as AGU and GSA fields of research and subject areas, to reveal connections, especially with VIVO instances at other institutions. VIVO works effectively with persistent IDs, and the projects strive to utilize publication and data DOIs, ORCIDs for people, and ISNI and GRID for organizations. ORCID, an open source project, is very useful for disambiguation and, unlike other identifier systems for people developed by publishers, makes public data available via an API. VIVO utilizes Solr and Freemarker, which are open source search engine and templating technologies, respectively.
Additionally, a handful of popular open source libraries and applications are being used in the project such as D3.js, jQuery, Leaflet, and Elasticsearch. Our implementation of these open source projects within VIVO is available for adaptation by other institutions using VIVO via GitHub (git.io/vG9AJ).
Design Support System for Open Distance Learning Student Teamwork
NASA Astrophysics Data System (ADS)
Putranto, A.; Pradipto, Y. D.
2017-01-01
Open distance learning students often face problems when doing team assignments: some students feel the workload contribution is distributed unfairly, and instructors do not know which students do more work than others. This raises the question of how to connect the instructor, the team members, and the working documents. The methods used are, first, analysis of the current situation and, second, the design of a system connecting instructor, team members, and documents. The expected result is a support system for open distance learning student teamwork.
Design and Deployment of a General Purpose, Open Source LoRa to Wi-Fi Hub and Data Logger
NASA Astrophysics Data System (ADS)
DeBell, T. C.; Udell, C.; Kwon, M.; Selker, J. S.; Lopez Alcala, J. M.
2017-12-01
Methods and technologies facilitating internet connectivity and near-real-time status updates for in situ environmental sensor data are of increasing interest in Earth Science. However, Open Source, Do-It-Yourself technologies that enable plug-and-play functionality for web-connected sensors and devices remain largely inaccessible for typical researchers in our community. The Openly Published Environmental Sensing Lab at Oregon State University (OPEnS Lab) constructed an Open Source 900 MHz Long Range Radio (LoRa) receiver hub with SD card data logger, Ethernet and Wi-Fi shield, and 3D printed enclosure that dynamically uploads transmissions from multiple wirelessly-connected environmental sensing devices. Data transmissions may be received from devices up to 20 km away. The hub time-stamps all transmissions, saves them to SD card, and uploads them to a Google Drive spreadsheet to be accessed in near-real-time by researchers and geovisualization applications (such as ArcGIS) for visualization and analysis. This research expands the possibilities of scientific observation of our Earth, transforming the technology, methods, and culture by combining open-source development and cutting-edge technology. This poster details our methods and evaluates the application of 3D printing, the Arduino Integrated Development Environment (IDE), Adafruit's open-hardware Feather development boards, and the WIZnet W5500 Ethernet shield in designing this open-source, general purpose LoRa to Wi-Fi data logger.
NCI-CONNECT - Current Initiatives | Center for Cancer Research
NCI-CONNECT Current Studies The Comprehensive Oncology Network Evaluating Rare CNS Tumors, or NCI-CONNECT, has a number of studies that are open and available for participation and will be adding more.
NASA Astrophysics Data System (ADS)
Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang
2016-02-01
With the extensive application of cloud computing and data centres, as well as constantly emerging services, big data with burst characteristics has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with software defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node for use in optical cross-connects (OXCs) and reconfigurable optical add/drop multiplexers (ROADMs) is proposed. An open source OpenFlow controller is extended with routing strategies. In addition, an experimental platform based on the OpenFlow protocol for software defined optical networks is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity, protection switching, and load balancing experiments on this test platform.
Open access intrapartum CTG database.
Chudáček, Václav; Spilka, Jiří; Burša, Miroslav; Janků, Petr; Hruban, Lukáš; Huptych, Michal; Lhotská, Lenka
2014-01-13
Cardiotocography (CTG) is the monitoring of fetal heart rate and uterine contractions. Since 1960 it has been routinely used by obstetricians to assess fetal well-being. Many attempts to introduce methods of automatic signal processing and evaluation have appeared during the last 20 years; however, no progress is yet visible comparable to that in the domain of adult heart rate variability, where open access databases (e.g. MIT-BIH) are available. Based on a thorough review of the relevant publications, presented in this paper, the shortcomings of the current state are obvious. A lack of common ground for clinicians and technicians in the field hinders clinically usable progress. Our open access database of digital intrapartum cardiotocographic recordings aims to change that. The intrapartum CTG database consists of a total of 552 intrapartum recordings, which were acquired between April 2010 and August 2012 at the obstetrics ward of the University Hospital in Brno, Czech Republic. All recordings were stored in electronic form in the OB TraceVue® system. The recordings were selected from 9,164 intrapartum recordings with clinical as well as technical considerations in mind. All recordings are at most 90 minutes long and start a maximum of 90 minutes before delivery. The time relation of CTG to delivery is known, as is the length of the second stage of labor, which does not exceed 30 minutes. The majority of recordings (all but 46 cesarean sections) are, by design, from vaginal deliveries. All recordings have available biochemical markers as well as some more general clinical features. A full description of the database and the reasoning behind the selection of parameters is presented in the paper. A new open-access CTG database is introduced which should give the research community common ground for comparison of results on a reasonably large database.
We anticipate that after reading the paper, the reader will understand the context of the field from clinical and technical perspectives, which will enable them to use the database and also understand its limitations.
Datacube Services in Action, Using Open Source and Open Standards
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2016-12-01
Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source at www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use on databases individually holding dozens of terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman is the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and the standardization impact of open-source rasdaman, as well as experiences made.
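The "any query, anytime" paradigm on datacubes can be mimicked with a toy in-memory analogue (plain Python, not rasdaman's actual query language) to illustrate the semantics of condensing spatial axes, as a WCPS-style average request would:

```python
# Tiny in-memory "datacube" with axes time x lat x lon, as nested lists.
# A real array database evaluates such expressions server-side over
# partitioned storage; this toy only mirrors the query semantics.
cube = [
    [[1.0, 2.0], [3.0, 4.0]],   # t = 0
    [[5.0, 6.0], [7.0, 8.0]],   # t = 1
]

def spatial_mean(slice2d):
    """Condense the two spatial axes of one time slice to their mean."""
    vals = [v for row in slice2d for v in row]
    return sum(vals) / len(vals)

# Analogue of a WCPS-style request: one spatial average per time step.
series = [spatial_mean(t_slice) for t_slice in cube]
print(series)
```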
Open Window: When Easily Identifiable Genomes and Traits Are in the Public Domain
Angrist, Misha
2014-01-01
“One can't be of an enquiring and experimental nature, and still be very sensible.” - Charles Fort [1] As the costs of personal genetic testing and “self-quantification” fall, publicly accessible databases housing people's genotypic and phenotypic information are gradually increasing in number and scope. The latest entrant is openSNP, which allows participants to upload their personal genetic/genomic and self-reported phenotypic data. I believe the emergence of such open repositories of human biological data is a natural reflection of inquisitive and digitally literate people's desire to make genomic and phenotypic information more easily available to a community beyond the research establishment. Such unfettered databases hold the promise of contributing mightily to science, science education and medicine. That said, in an age of increasingly widespread governmental and corporate surveillance, we would do well to be mindful that genomic DNA is uniquely identifying. Participants in open biological databases are engaged in a real-time experiment whose outcome is unknown. PMID:24647311
Solving the problem of Trans-Genomic Query with alignment tables.
Parker, Douglass Stott; Hsiao, Ruey-Lung; Xing, Yi; Resch, Alissa M; Lee, Christopher J
2008-01-01
The trans-genomic query (TGQ) problem, that of enabling the free query of biological information even across genomes, is a central challenge facing bioinformatics. Solutions to this problem can alter the nature of the field, moving it beyond the jungle of data integration and expanding the number and scope of questions that can be answered. An alignment table is a binary relationship on locations (sequence segments). An important special case of alignment tables is hit tables: tables of pairs of highly similar segments produced by alignment tools like BLAST. However, alignment tables also include general binary relationships, and can represent any useful connection between sequence locations. They can be curated, and provide a high-quality queryable backbone of connections between biological information. Alignment tables can thus be a natural foundation for TGQ, as they permit a central part of the TGQ problem to be reduced to purely technical problems involving tables of locations. Key challenges in implementing alignment tables include efficient representation and indexing of sequence locations. We define a location datatype that can be incorporated naturally into common off-the-shelf database systems. We also describe an implementation of alignment tables in BLASTGRES, an extension of the open-source PostgreSQL database system that provides the indexing and operators on locations required for querying alignment tables. This paper also reviews several successful large-scale applications of alignment tables for trans-genomic query. Tables with millions of alignments have been used in queries about alternative splicing, an area of genomic analysis concerning the way in which a single gene can yield multiple transcripts. Comparative genomics is a large potential application area for TGQ and alignment tables.
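As a rough illustration of the idea (a minimal sketch using SQLite; the BLASTGRES location datatype and its indexing are far more sophisticated, and all names and coordinates below are made up), an alignment table can be modeled as a binary relation between (sequence, start, end) locations and joined against annotations on the target genome:

```python
import sqlite3

# Minimal sketch of an alignment table as a binary relation on sequence
# locations (sequence id, start, end).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alignment (
    src_seq TEXT, src_start INTEGER, src_end INTEGER,
    dst_seq TEXT, dst_start INTEGER, dst_end INTEGER
);
CREATE TABLE annotation (seq TEXT, ann_start INTEGER, ann_end INTEGER, label TEXT);
""")
# A curated hit linking a human region to a mouse region.
conn.execute("INSERT INTO alignment VALUES ('hg_chr1', 100, 200, 'mm_chr4', 900, 1000)")
conn.execute("INSERT INTO annotation VALUES ('mm_chr4', 950, 980, 'conserved exon')")

# Trans-genomic query: what is annotated on the other genome wherever a
# human location of interest overlaps the source side of an alignment?
rows = conn.execute("""
    SELECT a.label FROM alignment al
    JOIN annotation a
      ON a.seq = al.dst_seq AND a.ann_start < al.dst_end AND a.ann_end > al.dst_start
    WHERE al.src_seq = 'hg_chr1' AND al.src_start < 150 AND al.src_end > 120
""").fetchall()
print(rows)
```

The point of the reduction is visible here: once locations are first-class values, a cross-genome question becomes an ordinary relational join.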
Problem formulation, metrics, open government, and on-line collaboration
NASA Astrophysics Data System (ADS)
Ziegler, C. R.; Schofield, K.; Young, S.; Shaw, D.
2010-12-01
Problem formulation leading to effective environmental management, including synthesis and application of science by government agencies, may benefit from collaborative on-line environments. This is illustrated by two interconnected projects: 1) literature-based evidence tools that support causal assessment and problem formulation, and 2) development of output, outcome, and sustainability metrics for tracking environmental conditions. Specifically, peer-production mechanisms allow for global contribution to science-based causal evidence databases, and subsequent crowd-sourced development of causal networks supported by that evidence. In turn, science-based causal networks may inform problem formulation and selection of metrics or indicators to track environmental condition (or problem status). Selecting and developing metrics in a collaborative on-line environment may improve stakeholder buy-in, the explicit relevance of metrics to planning, and the ability to approach problem apportionment or accountability, and to define success or sustainability. Challenges include contribution governance, data-sharing incentives, linking on-line interfaces to data service providers, and the intersection of environmental science and social science. Degree of framework access and confidentiality may vary by group and/or individual, but may ultimately be geared at demonstrating connections between science and decision making and supporting a culture of open government, by fostering transparency, public engagement, and collaboration.
Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
Zappia, Luke; Phipson, Belinda; Oshlack, Alicia
2018-06-25
As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
Fokkema, Ivo F A C; den Dunnen, Johan T; Taschner, Peter E M
2005-08-01
The completion of the human genome project has initiated, as well as provided the basis for, the collection and study of all sequence variation between individuals. Direct access to up-to-date information on sequence variation is currently provided most efficiently through web-based, gene-centered, locus-specific databases (LSDBs). We have developed the Leiden Open (source) Variation Database (LOVD) software approaching the "LSDB-in-a-Box" idea for the easy creation and maintenance of a fully web-based gene sequence variation database. LOVD is platform-independent and uses PHP and MySQL open source software only. The basic gene-centered and modular design of the database follows the recommendations of the Human Genome Variation Society (HGVS) and focuses on the collection and display of DNA sequence variations. With minimal effort, the LOVD platform is extendable with clinical data. The open set-up should both facilitate and promote functional extension with scripts written by the community. The LOVD software is freely available from the Leiden Muscular Dystrophy pages (www.DMD.nl/LOVD/). To promote the use of LOVD, we currently offer curators the possibility to set up an LSDB on our Leiden server. (c) 2005 Wiley-Liss, Inc.
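The gene-centered, modular design can be sketched with a hypothetical minimal schema (illustrative only; LOVD's real MySQL schema is far richer, and the table and column names below are made up):

```python
import sqlite3

# Hypothetical minimal gene-centered schema: one variant table per gene
# with HGVS-style DNA descriptions, optionally joined to clinical data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dmd_variants (
    variant_id INTEGER PRIMARY KEY,
    dna_change TEXT NOT NULL,      -- HGVS-style description, e.g. c.123A>G
    predicted_effect TEXT
);
CREATE TABLE dmd_clinical (
    variant_id INTEGER REFERENCES dmd_variants(variant_id),
    phenotype TEXT
);
""")
conn.execute("INSERT INTO dmd_variants VALUES (1, 'c.123A>G', 'p.(Lys41Glu)')")
conn.execute("INSERT INTO dmd_clinical VALUES (1, 'Becker muscular dystrophy')")

# The modular design lets clinical data be added without touching the
# core variant table.
rows = conn.execute("""
    SELECT v.dna_change, c.phenotype
    FROM dmd_variants v JOIN dmd_clinical c USING (variant_id)
""").fetchall()
print(rows)
```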
Why open drug discovery needs four simple rules for licensing data and models.
Williams, Antony J; Wilbanks, John; Ekins, Sean
2012-01-01
When we look at the rapid growth of scientific databases on the Internet in the past decade, we tend to take the accessibility and provenance of the data for granted. As we see a future of increased database integration, the licensing of the data may be a hurdle that hampers progress and usability. We have formulated four rules for licensing data for open drug discovery, which we propose as a starting point for consideration by databases and for their ultimate adoption. This work could also be extended to the computational models derived from such data. We suggest that scientists in the future will need to consider data licensing before they embark upon re-using such content in databases they construct themselves.
Comparison of the NCI open database with seven large chemical structural databases.
Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C
2001-01-01
Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
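The overlap metrics analyzed above reduce to set operations once each database is represented by canonical structure keys; a toy sketch (placeholder compound names standing in for canonical structure representations):

```python
from collections import Counter

# Toy stand-ins for chemical databases, each a set of canonical keys.
databases = {
    "A": {"aspirin", "benzene", "taxol"},
    "B": {"aspirin", "benzene", "caffeine"},
    "C": {"quinine", "morphine"},
}

# Pairwise overlap: identical compounds shared by two databases.
overlap_ab = databases["A"] & databases["B"]

# Compounds unique to one database.
unique_a = databases["A"] - (databases["B"] | databases["C"])

# Cumulative occurrence: in how many databases does each compound appear?
occurrence = Counter(c for db in databases.values() for c in db)

print(sorted(overlap_ab), sorted(unique_a), occurrence["aspirin"])
```

Real comparisons must first normalize structures (tautomers, salts, stereochemistry) so that "identical" is well defined, which is where most of the actual work lies.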
78 FR 55684 - ConnectED Workshop
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-11
... schools for digital learning. The ConnectED Workshop will discuss the growing bandwidth needs of K-12 schools as more schools use mobile devices to enrich the learning experience; as teachers increasingly... Distance Learning and Telemedicine Program; and the U.S. Department of Education. The meeting will be open...
Neurotree: A Collaborative, Graphical Database of the Academic Genealogy of Neuroscience
David, Stephen V.; Hayden, Benjamin Y.
2012-01-01
Neurotree is an online database that documents the lineage of academic mentorship in neuroscience. Modeled on the tree format typically used to describe biological genealogies, the Neurotree web site provides a concise summary of the intellectual history of neuroscience and relationships between individuals in the current neuroscience community. The contents of the database are entirely crowd-sourced: any internet user can add information about researchers and the connections between them. As of July 2012, Neurotree has collected information from 10,000 users about 35,000 researchers and 50,000 mentor relationships, and continues to grow. The present report serves to highlight the utility of Neurotree as a resource for academic research and to summarize some basic analysis of its data. The tree structure of the database permits a variety of graphical analyses. We find that the connectivity and graphical distance between researchers entered into Neurotree early has stabilized and thus appears to be mostly complete. The connectivity of more recent entries continues to mature. A ranking of researcher fecundity based on their mentorship reveals a sustained period of influential researchers from 1850–1950, with the most influential individuals active at the later end of that period. Finally, a clustering analysis reveals that some subfields of neuroscience are reflected in tightly interconnected mentor-trainee groups. PMID:23071595
Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael
2014-01-01
Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years. PMID:24626233
Designing an algorithm to preserve privacy for medical record linkage with error-prone data.
Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen
2014-01-20
Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking error-prone data and preserving data privacy at the same time are very difficult. Existing privacy preserving solutions for this problem are restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices of cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy preserving linking algorithm, named the Error-Tolerant Linking Algorithm, that allows error-prone data to be correctly matched if the distance between two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy preserving record linkage functions for medical service providers and meets the regulations of the Health Insurance Portability and Accountability Act (HIPAA). It does not require a third party, and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software works well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm.
We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open, so that existing textual matching methods can be easily integrated into the system. Designing algorithms that enable medical record linkage for error-prone numerical data while protecting data privacy is difficult. Our proposed solution does not need a trusted third party and is secure in that, in the linking process, neither entity can learn the records in the other's database.
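The threshold rule at the heart of the Error-Tolerant Linking Algorithm can be sketched in plaintext as follows (the published algorithm evaluates this distance privately, which this illustration does not attempt; the records and threshold below are made up):

```python
import math

def within_threshold(rec_a, rec_b, threshold):
    """Match two numerical records if their Euclidean distance falls
    below a threshold. The published algorithm evaluates this condition
    privately; this plaintext sketch only illustrates the rule."""
    return math.dist(rec_a, rec_b) < threshold

# Made-up records: (birth year, weight in kg, height in cm).
# Same patient, one field mistyped slightly.
print(within_threshold((1985, 70.0, 172.0), (1985, 70.4, 172.0), 1.0))
# Clearly different patients.
print(within_threshold((1985, 70.0, 172.0), (1992, 55.0, 160.0), 1.0))
```

The tolerance is what distinguishes this from exact (hash-based) matching: a small typo moves the record only slightly in the numeric space, so it still falls under the threshold.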
Kongsholm, Gertrud Gansmo; Nielsen, Anna Katrine Toft; Damkier, Per
2015-11-01
It is well documented that drug-drug interaction databases (DIDs) differ substantially with respect to their classification of drug-drug interactions (DDIs). The aim of this study was to assess the online available transparency of ownership, funding, information, classifications, staff training, and underlying documentation of the five most commonly used open access English-language online DIDs and the three most commonly used subscription English-language online DIDs in the literature. We conducted a systematic literature search to identify the five most commonly used open access and the three most commonly used subscription DIDs in the medical literature. The following parameters were assessed for each of the databases: ownership, classification of interactions, primary information sources, and staff qualification. We compared the overall proportion of yes/no answers from open access databases and subscription databases by Fisher's exact test, both prior to and after requesting missing information. Among open access DIDs, 20/60 items could be verified from the webpage directly, compared to 24/36 for the subscription DIDs (p = 0.0028). Following personal requests, these numbers rose to 22/60 and 30/36, respectively (p < 0.0001). For items within the "classification of interaction" domain, the proportions were 3/25 versus 11/15 available from the webpage (p = 0.0001), and 3/25 versus 15/15 (p < 0.0001) available upon personal request. The available information on online transparency of ownership, funding, information, classifications, staff training, and underlying documentation varies substantially among DIDs. Open access DIDs scored statistically significantly lower on the parameters assessed.
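The proportion comparisons reported above (e.g. 20/60 verifiable items for open access DIDs versus 24/36 for subscription DIDs) use Fisher's exact test; a self-contained sketch of the two-sided test on that 2×2 table (standard library only, so no statistics package is assumed):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def pmf(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = pmf(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))

# 20 of 60 verifiable items (open access) vs 24 of 36 (subscription).
p = fisher_exact_two_sided(20, 40, 24, 12)
print(round(p, 4))
```

The small tolerance factor guards against floating-point ties when deciding which tables are "as extreme" as the observed one.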
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user-configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems, provided the appropriate firewall authentication system is installed on the client.
The background, criteria, and usage of the intermodal passenger connectivity database
DOT National Transportation Integrated Search
2009-04-01
Intermodal connections, the links that allow passengers to switch from one mode to another to complete a trip, have been an important element of federal transportation policy since passage of the Intermodal Surface Transportation Efficiency Act of 19...
Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F L; Utzinger, Jürg; Kristensen, Thomas K; Vounatsou, Penelope
2011-12-01
After many years of general neglect, interest has grown and efforts have gotten under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are key to targeting control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs providing open access to the available survey data, one that is constantly updated and can be utilized by researchers and disease control managers to support other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and 'grey literature'), contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries, available at http://www.gntd.org. Currently, the database is being expanded into a global repository including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time.
With an initial focus on schistosomiasis in Africa, we demonstrate the proof-of-concept that the establishment and running of a global NTD database is feasible and should be expanded without delay.
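A georeferenced survey store of the kind this abstract describes can be sketched with the stdlib sqlite3 module standing in for MySQL. The schema, column names and all survey values below are illustrative assumptions, not the actual GNTD data model:

```python
import sqlite3

# Minimal stand-in for a georeferenced NTD survey database.
# Schema and data are invented for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE survey (
    id INTEGER PRIMARY KEY,
    country TEXT, latitude REAL, longitude REAL,
    year INTEGER, diagnostic TEXT,
    examined INTEGER, positive INTEGER)""")
con.executemany(
    "INSERT INTO survey (country, latitude, longitude, year, diagnostic,"
    " examined, positive) VALUES (?, ?, ?, ?, ?, ?, ?)",
    [("Ghana",   6.69, -1.62, 2008, "Kato-Katz",        250, 61),
     ("Ghana",   5.55, -0.20, 2010, "urine filtration", 180, 12),
     ("Zambia", -15.41, 28.29, 2009, "Kato-Katz",       300, 95)])

# Country-level prevalence, as a control programme might query it
results = list(con.execute(
    "SELECT country, ROUND(1.0 * SUM(positive) / SUM(examined), 3) "
    "FROM survey GROUP BY country ORDER BY country"))
for country, prevalence in results:
    print(country, prevalence)
```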
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is based on a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves the access of computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
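The core idea above, linking models through shared semantic annotations in a graph, can be illustrated with a toy adjacency map (no Neo4j assumed; model names and GO terms below are invented):

```python
# Models and ontology terms as graph nodes, annotations as edges.
edges = [
    ("model:glycolysis_sbml",   "GO:0006096"),  # glycolytic process
    ("model:glycolysis_cellml", "GO:0006096"),
    ("model:cell_cycle",        "GO:0007049"),
]

# Build an undirected adjacency map, as a graph store would
graph = {}
for model, term in edges:
    graph.setdefault(model, set()).add(term)
    graph.setdefault(term, set()).add(model)

def models_sharing_annotation(model):
    """All other models reachable via a shared annotation term."""
    related = set()
    for term in graph[model]:
        related |= graph[term]
    related.discard(model)
    return sorted(related)

# Links an SBML- and a CellML-encoded model through one GO term
print(models_sharing_annotation("model:glycolysis_sbml"))
```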
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harrington, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
ERIC Educational Resources Information Center
Erickson, Lisa B.
2013-01-01
In today's connected world, the reach of the Internet and collaborative social media tools have opened up new opportunities for individuals, regardless of their location, to share their knowledge, expertise, and creativity with others. These tools have also opened up opportunities for organizations to connect with new sources of innovation to…
Hameed, Sarah O; White, J Wilson; Miller, Seth H; Nickols, Kerry J; Morgan, Steven G
2016-06-29
Demographic connectivity is fundamental to the persistence and resilience of metapopulations, but our understanding of the link between reproduction and recruitment is notoriously poor in open-coast marine populations. We provide the first evidence of high local retention and limited connectivity among populations spanning 700 km along an open coast in an upwelling system. Using extensive field measurements of fecundity, population size and settlement in concert with a Bayesian inverse modelling approach, we estimated that, on average, Petrolisthes cinctipes larvae disperse only 6.9 km (±25.0 km s.d.) from natal populations, despite spending approximately six weeks in an open-coast system that was once assumed to be broadly dispersive. This estimate differed substantially from our prior dispersal estimate (153.9 km) based on currents and larval duration and behaviour, revealing the importance of employing demographic data in larval dispersal estimates. Based on this estimate, we predict that demographic connectivity occurs predominantly among neighbouring populations less than 30 km apart. Comprehensive studies of larval production, settlement and connectivity are needed to advance an understanding of the ecology and evolution of life in the sea as well as to conserve ecosystems. Our novel approach provides a tractable framework for addressing these questions for species occurring in discrete coastal populations. © 2016 The Author(s).
DEVELOPMENT AND APPLICATION OF THE DORIAN (DOSE-RESPONSE INFORMATION ANALYSIS) SYSTEM
Development of an Open Global Oil and Gas Infrastructure Inventory and Geodatabase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Kelly
This submission contains a technical report describing the development process and visual graphics for the Global Oil and Gas Infrastructure database. Access the GOGI database using the following link: https://edx.netl.doe.gov/dataset/global-oil-gas-features-database
Fujimura, Tomomi; Umemura, Hiroyuki
2018-01-15
The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493-502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.
A Runway Surface Monitor using Internet of Things
NASA Astrophysics Data System (ADS)
Troiano, Amedeo; Pasero, Eros
2014-05-01
The monitoring of runway surfaces, for the detection of ice formation or the presence of water, is an important issue for reducing maintenance costs and improving traffic safety. An innovative sensor was developed to detect the presence of ice or water on its surface, and its repeatability, stability and reliability were assessed in different simulations and experiments, performed both in the laboratory and in the field. Three sensors were embedded in the runway of the Turin-Caselle airport, in the north-west of Italy, to check the state of its surface. Each sensor was connected to a GPRS modem to send the collected data to a common database. The entire system was installed about three years ago, and up to now it has operated correctly, reactivating automatically after malfunctions without any external intervention. The state of the runway surface is represented virtually on an internet website, using Internet of Things features and opening new scenarios.
SedInConnect: a stand-alone, free and open source tool for the assessment of sediment connectivity
NASA Astrophysics Data System (ADS)
Crema, Stefano; Cavalli, Marco
2018-02-01
There is a growing call, within the scientific community, for solid theoretical frameworks and usable indices/models to assess sediment connectivity. Connectivity plays a significant role in characterizing structural properties of the landscape and, when considered in combination with forcing processes (e.g., rainfall-runoff modelling), can represent a valuable analysis for improved landscape management. In this work, the authors present the development and application of SedInConnect: a free, open source and stand-alone application for the computation of the Index of Connectivity (IC), as expressed in Cavalli et al. (2013) with the addition of specific innovative features. The tool is intended to have a wide variety of users, both from the scientific community and from the authorities involved in environmental planning. Thanks to its open source nature, the tool can be adapted and/or integrated according to the users' requirements. Furthermore, presenting an easy-to-use interface and being a stand-alone application, the tool can help management experts in the quantitative assessment of sediment connectivity in the context of hazard and risk assessment. An application to a sample dataset and an overview of up-to-date applications of the approach and of the tool show the development potential of such analyses. The modelled connectivity, in fact, appears suitable not only for characterizing sediment dynamics at the catchment scale but also for integration into prediction models and as an aid to geomorphological interpretation.
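The Index of Connectivity the abstract refers to is commonly formulated (Borselli et al. 2008, as modified by Cavalli et al. 2013) as IC = log10(Dup/Ddn). The sketch below computes it for invented toy values, as an illustration of the formula rather than of SedInConnect's raster implementation:

```python
from math import log10, sqrt

def index_of_connectivity(W_up_mean, S_up_mean, area_up, downstream_path):
    """IC = log10(Dup / Ddn), with
      Dup = W_mean * S_mean * sqrt(A)   (upslope component)
      Ddn = sum(d_i / (W_i * S_i))      (downslope flow path)
    downstream_path: list of (length_i, W_i, S_i) tuples."""
    d_up = W_up_mean * S_up_mean * sqrt(area_up)
    d_dn = sum(d / (w * s) for d, w, s in downstream_path)
    return log10(d_up / d_dn)

# Purely illustrative values: a steep, smooth upslope area draining
# through a short, rougher downstream path toward the target.
ic = index_of_connectivity(
    W_up_mean=0.8, S_up_mean=0.35, area_up=5_000.0,   # area in m^2
    downstream_path=[(10.0, 0.7, 0.30), (15.0, 0.5, 0.20)])
print(round(ic, 3))
```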
BIRS - Bioterrorism Information Retrieval System.
Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar
2013-01-01
Bioterrorism is the intended use of pathogenic strains of microbes to spread terror in a population. There is a definite need to promote research for the development of vaccines, therapeutics and diagnostic methods as a part of preparedness for any bioterror attack in the future. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The architecture of the database utilizes current open-source technology, viz. PHP ver. 5.3.19, MySQL and an IIS server on the Windows platform, for database design. The database stores information on literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism. This may serve as a collective repository to accelerate the drug discovery and vaccine design process against such bioterrorist agents (microbes). The available data have been validated from various online resources and by literature mining in order to provide the user with a comprehensive information system. The database is freely available at http://www.bioterrorism.biowaves.org.
Characterization of Emergent Data Networks Among Long-Tail Data
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Kumar, Praveen; Hedstrom, Margaret; Myers, James; Plale, Beth; Marini, Luigi; McDonald, Robert
2014-05-01
Data curation underpins data-driven scientific advancements. It manages the information flux across multiple users throughout the data life cycle and increases data sustainability and reusability. The exponential growth in data production across the Earth Sciences by individuals and small research groups, termed long-tail data, increases the data-knowledge latency among related domains. It has become clear that advanced framework-agnostic metadata and ontologies for long-tail data are required to increase their visibility to each other and to provide concise and meaningful descriptions that reveal their connectivity. Despite the advancement that has been achieved by various sophisticated data management models in different Earth Science disciplines, it is not always straightforward to derive relationships among long-tail data. Semantic data clustering algorithms and pre-defined logic rules oriented toward the prediction of possible data relationships are one method to address these challenges. Our work advances the connectivity of related long-tail data by introducing the design for an ontology-based knowledge management system. In this work, we present the system architecture and its components, and illustrate how the system can be used to scrutinize the connectivity among datasets. To demonstrate the capabilities of this "data network" prototype, we implemented this approach within the Sustainable Environment Actionable Data (SEAD) environment, an open-source semantic content repository that provides an RDF database for long-tail data, and show how emergent relationships among datasets can be identified.
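The rule-based inference idea above can be sketched with RDF-like triples and one pre-defined rule: two datasets observing at the same site are related. All dataset names, predicates and values below are invented for illustration and are not SEAD identifiers:

```python
# Long-tail datasets described by (subject, predicate, object) triples.
triples = [
    ("ds:soil_moisture_2013", "obs:variable", "soil moisture"),
    ("ds:soil_moisture_2013", "obs:site",     "Sangamon watershed"),
    ("ds:streamflow_2013",    "obs:variable", "discharge"),
    ("ds:streamflow_2013",    "obs:site",     "Sangamon watershed"),
    ("ds:lab_isotopes",       "obs:site",     "Clear Creek"),
]

def related_by_site(store):
    """Infer dataset pairs that share an obs:site value."""
    by_site = {}
    for s, p, o in store:
        if p == "obs:site":
            by_site.setdefault(o, []).append(s)
    return [(a, b, site) for site, ds in by_site.items()
            for i, a in enumerate(ds) for b in ds[i + 1:]]

pairs = related_by_site(triples)
print(pairs)  # an emergent connection between two datasets
```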
Modeling functional neuroanatomy for an anatomy information system.
Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan
2008-01-01
Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.
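The modelling idea above, unnamed neuron groups as building blocks whose connections keep independent pathways distinct, can be rendered as a toy graph. Structure and group names below are simplified placeholders, not the article's actual model:

```python
# Neuron groups as nodes, their connections as directed edges, so two
# independent pathways from the same structure (retina) stay distinct.
connections = {
    ("retina.ganglion_grp",    "pretectum.grp1"),
    ("pretectum.grp1",         "edinger_westphal.grp1"),
    ("edinger_westphal.grp1",  "ciliary_ganglion.grp1"),
    ("retina.ganglion_grp",    "lgn.grp1"),  # independent visual pathway
}

def trace(start, goal, seen=()):
    """Depth-first search for a pathway between two neuron groups."""
    if start == goal:
        return [start]
    for src, dst in sorted(connections):
        if src == start and dst not in seen:
            rest = trace(dst, goal, seen + (start,))
            if rest:
                return [start] + rest
    return None

# Pupillary light reflex pathway, tracked group by group
path = trace("retina.ganglion_grp", "ciliary_ganglion.grp1")
print(path)
```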
Means for supporting fuel elements in a nuclear reactor
Andrews, Harry N.; Keller, Herbert W.
1980-01-01
A grid structure for a nuclear reactor fuel assembly comprising a plurality of connecting members forming at least one longitudinally extending opening peripheral and inner fuel element openings through each of which openings at least one nuclear fuel element extends, said connecting members forming wall means surrounding said each peripheral and inner fuel element opening, a pair of rigid projections longitudinally spaced from one another extending from a portion of said wall means into said each peripheral and inner opening for rigidly engaging said each fuel element, respectively, yet permit individual longitudinal slippage thereof, and resilient means formed integrally on and from said wall means and positioned in said each peripheral and inner opening in opposed relationship with said projections and located to engage said fuel element to bias the latter into engagement with said rigid projections, respectively.
Systems and methods for photovoltaic string protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krein, Philip T.; Kim, Katherine A.; Pilawa-Podgurski, Robert C. N.
A system and method includes a circuit for protecting a photovoltaic string. A bypass switch connects in parallel to the photovoltaic string and a hot spot protection switch connects in series with the photovoltaic string. A first control signal controls opening and closing of the bypass switch, and a second control signal controls opening and closing of the hot spot protection switch. Upon detection of a hot spot condition, the first control signal closes the bypass switch, and after the bypass switch is closed, the second control signal opens the hot spot protection switch.
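The switching sequence the abstract describes is order-sensitive: the bypass switch must close before the series (hot-spot protection) switch opens, so the string current always has a path. A minimal state-machine sketch, with illustrative signal names not taken from the patent:

```python
def hot_spot_response(state):
    """state: dict with 'bypass_closed' and 'series_closed' booleans.
    Returns the ordered list of control actions taken."""
    actions = []
    if not state["bypass_closed"]:
        state["bypass_closed"] = True       # first control signal
        actions.append("close bypass")
    if state["bypass_closed"] and state["series_closed"]:
        state["series_closed"] = False      # second control signal
        actions.append("open series switch")
    return actions

# Hot spot detected on a normally operating string
state = {"bypass_closed": False, "series_closed": True}
actions = hot_spot_response(state)
print(actions)  # order matters: bypass closes first
```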
A graph-based approach to construct target-focused libraries for virtual screening.
Naderi, Misagh; Alvin, Chris; Ding, Yun; Mukhopadhyay, Supratik; Brylinski, Michal
2016-01-01
Due to exorbitant costs of high-throughput screening, many drug discovery projects commonly employ inexpensive virtual screening to support experimental efforts. However, the vast majority of compounds in widely used screening libraries, such as the ZINC database, will have a very low probability to exhibit the desired bioactivity for a given protein. Although combinatorial chemistry methods can be used to augment existing compound libraries with novel drug-like compounds, the broad chemical space is often too large to be explored. Consequently, the trend in library design has shifted to produce screening collections specifically tailored to modulate the function of a particular target or a protein family. Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks tracking their atomic connectivity. On this account, we developed eSynth, an exhaustive graph-based search algorithm to computationally synthesize new compounds by reconnecting these building blocks following their connectivity patterns. We conducted a series of benchmarking calculations against the Directory of Useful Decoys, Enhanced database. First, in a self-benchmarking test, the correctness of the algorithm is validated with the objective to recover a molecule from its building blocks. Encouragingly, eSynth can efficiently rebuild more than 80 % of active molecules from their fragment components. Next, the capability to discover novel scaffolds is assessed in a cross-benchmarking test, where eSynth successfully reconstructed 40 % of the target molecules using fragments extracted from chemically distinct compounds. Despite an enormous chemical space to be explored, eSynth is computationally efficient; half of the molecules are rebuilt in less than a second, whereas 90 % take only about a minute to be generated. eSynth can successfully reconstruct chemically feasible molecules from molecular fragments. 
Furthermore, in a procedure mimicking the real application, where one expects to discover novel compounds based on a small set of already developed bioactives, eSynth is capable of generating diverse collections of molecules with the desired activity profiles. Thus, we are very optimistic that our effort will contribute to targeted drug discovery. eSynth is freely available to the academic community at www.brylinski.org/content/molecular-synthesis. Graphical abstract: Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks tracking their atomic connectivity. Here, we developed eSynth, an automated method to synthesize new compounds by reconnecting these building blocks following the connectivity patterns via an exhaustive graph-based search algorithm. eSynth opens up a possibility to rapidly construct virtual screening libraries for targeted drug discovery.
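The exhaustive reconnection idea can be sketched as a depth-first search over fragments (nodes) and observed connectivity patterns (edges). The fragment names and pattern table below are invented for illustration and bear no relation to eSynth's actual chemistry handling:

```python
# Which fragment types have been observed connected to which others.
patterns = {
    "benzene": ["amide", "ether"],
    "amide":   ["methyl", "benzene"],
    "ether":   ["methyl"],
    "methyl":  [],
}

def enumerate_chains(start, max_len):
    """All fragment chains from `start` following allowed connections."""
    results = []
    def dfs(chain):
        results.append("-".join(chain))
        if len(chain) < max_len:
            for nxt in patterns[chain[-1]]:
                dfs(chain + [nxt])
    dfs([start])
    return results

chains = enumerate_chains("benzene", 3)
for mol in chains:
    print(mol)
```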
Accordian-folded boot shield for flexible swivel connection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoh, J.C.
1986-08-26
This patent describes an apparatus for connecting a first boot section to a second boot section, the first and second boots having openings therethrough, the second boot having at least two adjacent accordion folds at the end having the opening. The second boot is positioned through the opening of the first boot such that a first of the accordion folds is within the first boot and a second of the accordion folds is outside of the first boot comprising: first and second annular discs, the first disc being positioned within and across the first accordion fold, the second disc being positioned within and across the second accordion fold such that the first boot is moveably and rigidly connected between the first and second accordion folds.
Reactome graph database: Efficient access to complex pathway data
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
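The traversal-heavy access pattern the Reactome abstract describes can be illustrated in miniature: a pathway hierarchy as an adjacency dict, walked recursively in one pass, where a relational schema would need repeated self-joins or a recursive query. The event names below are invented, not actual Reactome identifiers:

```python
# Pathway -> contained events, the "hasEvent" relation as a graph.
has_event = {
    "Signal Transduction": ["RTK signalling", "GPCR signalling"],
    "RTK signalling":      ["RAF/MAPK cascade"],
    "GPCR signalling":     [],
    "RAF/MAPK cascade":    [],
}

def all_subevents(pathway):
    """Transitively collect every event contained in a pathway."""
    found = []
    for child in has_event.get(pathway, []):
        found.append(child)
        found.extend(all_subevents(child))
    return found

print(all_subevents("Signal Transduction"))
```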
Büssow, Konrad; Hoffmann, Steve; Sievert, Volker
2002-12-19
Functional genomics involves the parallel experimentation with large sets of proteins. This requires management of large sets of open reading frames as a prerequisite of the cloning and recombinant expression of these proteins. A Java program, ORFer, was developed for retrieval of protein and nucleic acid sequences and annotations from NCBI GenBank, using the XML sequence format. Annotations retrieved by ORFer include sequence name, organism, and the completeness of the sequence. The program has a graphical user interface, although it can be used in a non-interactive mode. For protein sequences, the program also extracts the open reading frame sequence, if available, and checks its correct translation. ORFer accepts user input in the form of single or lists of GenBank GI identifiers or accession numbers. It can be used to extract complete sets of open reading frames and protein sequences from any kind of GenBank sequence entry, including complete genomes or chromosomes. Sequences are either stored with their features in a relational database or can be exported as text files in Fasta or tabulator-delimited format. The ORFer program is freely available at http://www.proteinstrukturfabrik.de/orfer. The ORFer program allows for fast retrieval of DNA sequences, protein sequences and their open reading frames and sequence annotations from GenBank. Furthermore, storage of sequences and features in a relational database is supported. Such a database can supplement a laboratory information system (LIMS) with appropriate sequence information.
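The ORF extraction and translation check mentioned above can be sketched in a few lines: locate an open reading frame (ATG ... stop), translate it, and compare against an annotated protein sequence. The codon table here is deliberately truncated to the codons used in the toy example; this is an illustration of the step, not ORFer's implementation:

```python
# Truncated codon table, enough for the example sequence below.
CODONS = {"ATG": "M", "GCT": "A", "GGA": "G",
          "TAA": "*", "TGA": "*", "TAG": "*"}

def find_and_translate_orf(dna):
    """Translate from the first ATG to the first in-frame stop codon."""
    start = dna.find("ATG")
    if start < 0:
        return None  # no start codon found
    protein = []
    for i in range(start, len(dna) - 2, 3):
        aa = CODONS[dna[i:i + 3]]
        if aa == "*":
            return "".join(protein)
        protein.append(aa)
    return None  # no in-frame stop codon: incomplete ORF

dna = "CCATGGCTGGATAACC"   # ...ATG GCT GGA TAA...
protein = find_and_translate_orf(dna)
print(protein)
print(protein == "MAG")    # check against an annotated protein
```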
Interchange Slip-Running Reconnection and Sweeping SEP-Beams
NASA Technical Reports Server (NTRS)
Masson, S.; Aulanier, G.; Pariat, E.; Klein, K.-L.
2011-01-01
We present a new model to explain how particles, accelerated at a reconnection site that is not magnetically connected to the Earth, could eventually propagate along the well-connected open flux tube. Our model is based on the results of a low-beta resistive magnetohydrodynamics simulation of a three-dimensional line-tied and initially current-free bipole that is embedded in a non-uniform open potential field. The topology of this configuration is that of an asymmetric coronal null point, with a closed fan surface and an open outer spine. When driven by slow photospheric shearing motions, field lines initially fully anchored below the fan dome reconnect at the null point and jump to the open magnetic domain. This is the standard interchange mode as sketched and calculated in 2D. The key result in 3D is that reconnected open field lines located in the vicinity of the outer spine keep reconnecting continuously, across an open quasi-separatrix layer, as previously identified for non-open-null-point reconnection. The apparent slipping motion of these field lines leads to the formation of an extended narrow magnetic flux tube at high altitude. Because of the slip-running reconnection, we conjecture that if energetic particles were travelling through, or were accelerated inside, the diffusion region, they would be successively injected along continuously reconnecting field lines that are connected farther and farther from the spine. At the scale of the full Sun, owing to the super-radial expansion of field lines below 3 solar radii, such energetic particles could easily be injected into field lines slipping over significant distances, and could eventually reach the distant flux tube that is well connected to the Earth.
Venkataraman, Archana; Duncan, James S.; Yang, Daniel Y.-J.; Pelphrey, Kevin A.
2015-01-01
Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of “language” and “comprehension” as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes “social” and “person”. The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder. PMID:26106561
USDA Branded Food Products Database, Release 2
USDA-ARS?s Scientific Manuscript database
The USDA Branded Food Products Database is the ongoing result of a Public-Private Partnership (PPP), whose goal is to enhance public health and the sharing of open data by complementing the USDA National Nutrient Database for Standard Reference (SR) with nutrient composition of branded foods and pri...
The Open Spectral Database: an open platform for sharing and searching spectral data.
Chalk, Stuart J
2016-01-01
A number of websites make available spectral data for download (typically as JCAMP-DX text files) and one (ChemSpider) that also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time consuming, and difficult to reuse if the data is compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP Framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities to easily develop REST based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
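Ingesting a JCAMP-DX submission, the file format the OSDB accepts, can be sketched as below. Only the simple uncompressed case with one x,y pair per line is handled; real JCAMP-DX has compressed data forms, and the sample spectrum values are invented:

```python
# A tiny JCAMP-DX-style file: "##KEY=VALUE" header records followed
# by whitespace-separated x,y data points.
sample = """##TITLE=Example spectrum
##JCAMP-DX=4.24
##XUNITS=1/CM
##YUNITS=ABSORBANCE
##XYDATA=(X++(Y..Y))
400.0 0.012
402.0 0.034
404.0 0.021
##END="""

meta, points = {}, []
for line in sample.splitlines():
    if line.startswith("##"):
        key, _, value = line[2:].partition("=")
        meta[key] = value
    else:
        x, y = line.split()
        points.append((float(x), float(y)))

# Metadata plus raw points are now searchable/exportable
print(meta["XUNITS"], len(points), max(points, key=lambda p: p[1])[0])
```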
Do open access biomedical journals benefit smaller countries? The Slovenian experience.
Turk, Nana
2011-06-01
Scientists from smaller countries have problems gaining visibility for their research. Does open access publishing provide a solution? Slovenia is a small country with around 5000 medical doctors, 1300 dentists and 1000 pharmacists. A search of Slovenia's Bibliographic database was carried out to identify all biomedical journals and those which are open access. Slovenia has 18 medical open access journals, but none has an impact factor and only 10 are indexed by Slovenian and international bibliographic databases. The visibility and quality of medical papers is poor. The solution might be to reduce the number of journals and encourage Slovenian scientists to publish their best articles in them. © 2011 The authors. Health Information and Libraries Journal © 2011 Health Libraries Group.
Open source hardware and software platform for robotics and artificial intelligence applications
NASA Astrophysics Data System (ADS)
Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli
2016-02-01
Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems within a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system, with a graph database, which is integrated in online and offline modes with an Android-Arduino based rubbish-picking remote control car. The combination of the open source hardware and software systems created a flexible and expandable platform for further developments in the future, both in the software and hardware areas, in particular in combination with a graph database for artificial intelligence, as well as more sophisticated hardware, such as legged or humanoid robots.
Concierge: Personal Database Software for Managing Digital Research Resources
Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro
2007-01-01
This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800
Why Open Drug Discovery Needs Four Simple Rules for Licensing Data and Models
Williams, Antony J.; Wilbanks, John; Ekins, Sean
2012-01-01
When we look at the rapid growth of scientific databases on the Internet in the past decade, we tend to take the accessibility and provenance of the data for granted. As we see a future of increased database integration, the licensing of the data may be a hurdle that hampers progress and usability. We have formulated four rules for licensing data for open drug discovery, which we propose as a starting point for consideration by databases and for their ultimate adoption. This work could also be extended to the computational models derived from such data. We suggest that scientists in the future will need to consider data licensing before they embark upon re-using such content in databases they construct themselves. PMID:23028298
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... 217/EUROCAE WG-44: Terrain and Airport Mapping Databases AGENCY: Federal Aviation Administration (FAA... Databases: For the tenth meeting DATES: The meeting will be held December 6-9, 2011, from 9 a.m. to 5 p.m... Mapping Databases. The agenda will include the following: December 6, 2011 Open Plenary Session. Chairman...
Nozzle dam having a unitary plug
Veronesi, L.; Wepfer, R.M.
1992-12-15
Apparatus for sealing the primary-side coolant flow nozzles of a nuclear steam generator is disclosed. The steam generator has relatively small diameter manway openings for providing access to the interior of the steam generator including the inside surface of each nozzle, the manway openings having a diameter substantially less than the inside diameter of each nozzle. The apparatus includes a bracket having an outside surface for matingly sealingly engaging the inside surface of the nozzle. The bracket also has a plurality of openings longitudinally therethrough and a plurality of slots transversely therein in communication with each opening. A plurality of unitary plugs sized to pass through the manway opening are matingly sealingly disposed in each opening of the bracket for sealingly plugging each opening. Each plug includes a plurality of arms operable to engage the slots of the bracket for connecting each plug to the bracket, so that the nozzle is sealed as the plugs seal the openings and are connected to the bracket. 16 figs.
Nozzle dam having a unitary plug
Veronesi, Luciano; Wepfer, Robert M.
1992-01-01
Apparatus for sealing the primary-side coolant flow nozzles of a nuclear steam generator. The steam generator has relatively small diameter manway openings for providing access to the interior of the steam generator including the inside surface of each nozzle, the manway openings having a diameter substantially less than the inside diameter of each nozzle. The apparatus includes a bracket having an outside surface for matingly sealingly engaging the inside surface of the nozzle. The bracket also has a plurality of openings longitudinally therethrough and a plurality of slots transversely therein in communication with each opening. A plurality of unitary plugs sized to pass through the manway opening are matingly sealingly disposed in each opening of the bracket for sealingly plugging each opening. Each plug includes a plurality of arms operable to engage the slots of the bracket for connecting each plug to the bracket, so that the nozzle is sealed as the plugs seal the openings and are connected to the bracket.
Open-Source, Web-Based Dashboard Components for DICOM Connectivity.
Bustamante, Catalina; Pineda, Julian; Rascovsky, Simon; Arango, Andres
2016-08-01
The administration of a DICOM network within an imaging healthcare institution requires tools that allow for monitoring of connectivity and availability for adequate uptime measurements and help guide technology management strategies. We present the implementation of an open-source widget for the Dashing framework that provides basic dashboard functionality allowing for monitoring of a DICOM network using network "ping" and DICOM "C-ECHO" operations.
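The two probes named above (network "ping" and DICOM "C-ECHO") could feed a dashboard tile as sketched below. The function name and the three-state classification rules are illustrative assumptions; only the DICOM Success status (0x0000) comes from the standard's C-ECHO service definition.

```python
from typing import Optional

def classify_node(ping_ok: bool, echo_status: Optional[int]) -> str:
    """Summarize a node's state from a ping probe and a C-ECHO probe.

    ping_ok     -- result of a network-level ping
    echo_status -- C-ECHO response status (0x0000 = Success), or None
                   if no DICOM association could be established
    """
    if not ping_ok:
        return "down"       # unreachable at the network level
    if echo_status == 0x0000:
        return "up"         # reachable and the DICOM service answered
    return "degraded"       # host alive but the DICOM layer is failing
```

Separating the network-level check from the application-level check is what lets a dashboard distinguish an unplugged modality from one whose DICOM service has hung.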
12 CFR 7.1020 - Purchase of open accounts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 1 2010-01-01 2010-01-01 false Purchase of open accounts. 7.1020 Section 7... OPERATIONS Bank Powers § 7.1020 Purchase of open accounts. (a) General. The purchase of open accounts is a... national bank may purchase open accounts in connection with export transactions; the accounts should be...
12 CFR 7.1020 - Purchase of open accounts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Purchase of open accounts. 7.1020 Section 7... OPERATIONS Bank Powers § 7.1020 Purchase of open accounts. (a) General. The purchase of open accounts is a... national bank may purchase open accounts in connection with export transactions; the accounts should be...
12 CFR 7.1020 - Purchase of open accounts.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Purchase of open accounts. 7.1020 Section 7... OPERATIONS Bank Powers § 7.1020 Purchase of open accounts. (a) General. The purchase of open accounts is a... national bank may purchase open accounts in connection with export transactions; the accounts should be...
12 CFR 7.1020 - Purchase of open accounts.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 1 2013-01-01 2013-01-01 false Purchase of open accounts. 7.1020 Section 7... OPERATIONS Bank Powers § 7.1020 Purchase of open accounts. (a) General. The purchase of open accounts is a... national bank may purchase open accounts in connection with export transactions; the accounts should be...
12 CFR 7.1020 - Purchase of open accounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 1 2011-01-01 2011-01-01 false Purchase of open accounts. 7.1020 Section 7... OPERATIONS Bank Powers § 7.1020 Purchase of open accounts. (a) General. The purchase of open accounts is a... national bank may purchase open accounts in connection with export transactions; the accounts should be...
NASA Astrophysics Data System (ADS)
Barrado, D.; Bouy, H.; Bouvier, J.; Moraux, E.; Sarro, L. M.; Bertin, E.; Cuillandre, J.-C.; Stauffer, J. R.; Lillo-Box, J.; Pollock, A.
2016-12-01
Context. Stellar clusters open the window to understanding stellar evolution and, in particular, the change with time and the dependence on mass of different stellar properties. As such, stellar clusters act as laboratories where different theories can be tested. Aims: We try to understand the origin of the connection between lithium depletion in F, G, and K stars, rotation and activity in the Pleiades open cluster. Methods: We have collected all the relevant data in the literature, including information regarding rotation period, binarity, and activity, and cross-matched this data with proper motions, multiwavelength photometry, and membership probability from the DANCe database. To avoid biases, we only included single members of the Pleiades with probabilities larger than 75% in the discussion. Results: The analysis confirms that there is a strong link between activity, rotation, and the lithium equivalent width excess, especially for the range Lum(bol) = 0.5-0.2L⊙ (about K2-K7 spectral types or 0.75-0.95 M⊙). Conclusions: It is not possible to disentangle these effects, but we cannot exclude that the observed lithium overabundance is partially an observational effect from enhanced activity owing to a large coverage by stellar spots induced by high rotation rates. Since a bona fide lithium enhancement is present in young, fast rotators, both activity and rotation should play a role in the lithium problem. Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A113
Keeping Connected: The Design and Overview of the Research
ERIC Educational Resources Information Center
Moss, Julianne
2014-01-01
The special issue "Keeping Connected: Identity, Social Connection and Education for Young People" opens with a paper that discusses the research design and overview of a three-year project by a Melbourne (Australia)-based multi-disciplinary team. Over 2007-2009, the Keeping Connected team of 10 researchers investigated the lives of…
ZeBase: an open-source relational database for zebrafish laboratories.
Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai
2012-03-01
ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.
Database of historically documented springs and spring flow measurements in Texas
Heitmuller, Franklin T.; Reece, Brian D.
2003-01-01
Springs are naturally occurring features that convey excess ground water to the land surface; they represent a transition from ground water to surface water. Water issues through one opening, multiple openings, or numerous seeps in the rock or soil. The database of this report provides information about springs and spring flow in Texas including spring names, identification numbers, location, and, if available, water source and use. This database does not include every spring in Texas, but is limited to an aggregation of selected digital and hard-copy data of the U.S. Geological Survey (USGS), the Texas Water Development Board (TWDB), and Capitol Environmental Services.
Goldacre, Ben; Gray, Jonathan
2016-04-08
OpenTrials is a collaborative and open database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial: registry entries; links, abstracts, or texts of academic journal papers; portions of regulatory documents describing individual trials; structured data on methods and results extracted by systematic reviewers or other researchers; clinical study reports; and additional documents such as blank consent forms, blank case report forms, and protocols. The intention is to create an open, freely re-usable index of all such information and to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive up standards around open data in evidence-based medicine. The project has phase I funding. This will allow us to create a practical data schema and populate the database initially through web-scraping, basic record linkage techniques, crowd-sourced curation around selected drug areas, and import of existing sources of structured data and documents. It will also allow us to create user-friendly web interfaces onto the data and conduct user engagement workshops to optimise the database and interface designs. Where other projects have set out to manually and perfectly curate a narrow range of information on a smaller number of trials, we aim to use a broader range of techniques and attempt to match a very large quantity of information on all trials. We are currently seeking feedback and additional sources of structured data.
Mesh Networking in the Tactical Environment Using White Space Technology
2015-12-01
Connect network with multiple clients (Table 4.5: Results of White Space simulation)...functionality to devices seeking to allocate unutilized spectrum space. The devices are able to poll the database, via a connection to a web-based...and 28 schools, all of whom were provided Internet connectivity by Adaptrum white space devices [16]. The use of white space devices made this
Three Dimensional Visualization of GOES Cloud Data Using Octrees
1993-06-01
structure for CAD of integrated circuits that can subdivide the cubes into more complex polyhedrons. Medical imaging is also taking advantage of the...image name (no .ext): ACCEPT 501, CIGOES 501 FORMAT(A) CALL OPENDB('PARAM', ISTATRM) IF (ISTATRM .NE. 0) CALL FRIMERR('Error opening database.', ISTATRM)
Cloud-Based Distributed Control of Unmanned Systems
2015-04-01
during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL, and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL: An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to
GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database
NASA Astrophysics Data System (ADS)
Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.
2003-09-01
The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a characterization consistent with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA allows each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves.
The results of the GPCALMA system as a "second reader" will also be presented.
NREL Opens Large Database of Inorganic Thin-Film Materials
April 3, 2018. An extensive experimental database of inorganic thin-film materials developed by the National Renewable Energy Laboratory (NREL) is now publicly available: the High Throughput Experimental Materials (HTEM) database.
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
17 CFR 38.552 - Elements of an acceptable audit trail program.
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...
Global Oil & Gas Features Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly Rose; Jennifer Bauer; Vic Baker
This submission contains a zip file with the developed Global Oil & Gas Features Database (as an ArcGIS geodatabase). Access the technical report describing how this database was produced using the following link: https://edx.netl.doe.gov/dataset/development-of-an-open-global-oil-and-gas-infrastructure-inventory-and-geodatabase
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV)
Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-01
The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. PMID:29040670
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows performing large-scale, web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology: participants do not need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test, using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
MetaMapR: pathway independent metabolomic network analysis incorporating unknowns.
Grapov, Dmitry; Wanichthanarak, Kwanjeera; Fiehn, Oliver
2015-08-15
Metabolic network mapping is a widely used approach for integration of metabolomic experimental results with biological domain knowledge. However, current approaches can be limited by biochemical domain or pathway knowledge which results in sparse disconnected graphs for real world metabolomic experiments. MetaMapR integrates enzymatic transformations with metabolite structural similarity, mass spectral similarity and empirical associations to generate richly connected metabolic networks. This open source, web-based or desktop software, written in the R programming language, leverages KEGG and PubChem databases to derive associations between metabolites even in cases where biochemical domain or molecular annotations are unknown. Network calculation is enhanced through an interface to the Chemical Translation System, which allows metabolite identifier translation between >200 common biochemical databases. Analysis results are presented as interactive visualizations or can be exported as high-quality graphics and numerical tables which can be imported into common network analysis and visualization tools. Freely available at http://dgrapov.github.io/MetaMapR/. Requires R and a modern web browser. Installation instructions, tutorials and application examples are available at http://dgrapov.github.io/MetaMapR/. ofiehn@ucdavis.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
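The core idea of enriching a metabolic network with similarity-based edges can be illustrated with a thresholding sketch like the one below. The metabolite names, similarity scores, and the 0.7 cutoff are invented for the example and are not MetaMapR's actual data or parameters.

```python
# Hypothetical pairwise similarity scores (e.g. structural or mass spectral
# similarity); MetaMapR combines several such edge sources into one network.
similarities = {
    ("glucose", "fructose"): 0.92,
    ("glucose", "citrate"): 0.31,
    ("fructose", "citrate"): 0.28,
    ("citrate", "isocitrate"): 0.95,
}

def build_network(similarities, threshold=0.7):
    """Return an undirected adjacency list keeping edges above the threshold."""
    graph = {}
    for (a, b), score in similarities.items():
        if score >= threshold:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

network = build_network(similarities)
```

Because similarity can be computed for unidentified features as well, edges of this kind keep "unknowns" connected to the rest of the graph where pathway knowledge alone would leave them isolated.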
Towards linked open gene mutations data
2012-01-01
Background With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. Methods A version of the IARC TP53 Mutation database implemented in a relational database was used as first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performances are lower than those that can be achieved by using an RDF archive, generated data was also loaded into a dedicated system based on tools from the Jena software suite. Results We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows to browse the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performances for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. Conclusions This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. 
The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval which is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine. PMID:22536974
Towards linked open gene mutations data.
Zappa, Achille; Splendiani, Andrea; Romano, Paolo
2012-03-28
With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. A version of the IARC TP53 Mutation database implemented in a relational database was used as first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performances are lower than those that can be achieved by using an RDF archive, generated data was also loaded into a dedicated system based on tools from the Jena software suite. We have implemented a D2RQ Server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows to browse the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performances for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. This has been the first case of a mutation database exposed as Linked Data. 
A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development.The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval which is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine.
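A query of the kind such a SPARQL endpoint could answer might look like the sketch below; the prefix, class, and property names are hypothetical placeholders, not the actual vocabulary of the TP53 RDF mapping.

```sparql
# Hypothetical vocabulary: ex:Mutation, ex:gene, ex:aminoAcidChange
PREFIX ex: <http://example.org/mutation#>

SELECT ?mutation ?change
WHERE {
  ?mutation a ex:Mutation ;
            ex:gene "TP53" ;
            ex:aminoAcidChange ?change .
}
LIMIT 10
```

Queries of this shape are what make the Linked Data publication useful: the same pattern can be joined against other biological SPARQL endpoints to correlate variation data with external annotations.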
Integrated Substrate and Thin Film Design Methods
1999-02-01
Proper Representation: Once the required chemical databases had been converted to the Excel format, VBA macros were written to convert chemical...ternary systems databases were imported from MS Excel to MS Access to implement SQL queries. Further, this database was connected via an ODBC model to the...VBA macro, corresponding to each of the elements A, B, and C, respectively. The B loop began with the next alphabetical choice of element symbols
Methods to Secure Databases Against Vulnerabilities
2015-12-01
for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL, MongoDB, and Datastax (for Cassandra...tiers in Python and Java. Problem 1 (Injection), a. Tautologies: MySQL vulnerable, MongoDB vulnerable, Cassandra not vulnerable; b. Illegal query
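The tautology attack named in the vulnerability listing can be reproduced, and prevented, in a few lines. SQLite is used here as a standard-library stand-in for the MySQL drivers used in the report; the table and payload are invented for the demonstration.

```python
import sqlite3

# In-memory SQLite database standing in for the report's MySQL setup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic tautology payload

# Vulnerable: string formatting lets the payload rewrite the WHERE clause,
# turning it into ... password = '' OR '1'='1' (always true).
vulnerable_sql = ("SELECT count(*) FROM users "
                  "WHERE name = '' AND password = '%s'" % payload)
bypassed = conn.execute(vulnerable_sql).fetchone()[0]

# Safe: parameter binding treats the payload as an ordinary string literal,
# so no row matches.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ? AND password = ?",
    ("", payload),
).fetchone()[0]
```

The contrast between the two queries is the whole defense: with bound parameters the attacker's quotes never reach the SQL parser as syntax.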
High-Order Methods for Computational Physics
1999-03-01
computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85...processor 0 (Fig. 4.19: Connectivity and communications are established by building a voxel database (VDB) of positions). A VDB maps each position to a...studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear
Accordian-folded boot shield for flexible swivel connection
Hoh, Joseph C.
1986-01-01
A flexible swivel boot connector for connecting a first boot shield section to a second boot shield section, both first and second boot sections having openings therethrough, the second boot section having at least two adjacent accordian folds at the end having the opening, the second boot section being positioned through the opening of the first boot section such that a first of the accordian folds is within the first boot section and a second of the accordian folds is outside of the first boot, includes first and second annular discs, the first disc being positioned within and across the first accordian fold, the second disc being positioned within and across the second accordian fold, such that the first boot section is moveably and rigidly connected between the first and second accordian folds of the second boot section.
Making open data work for plant scientists.
Leonelli, Sabina; Smirnoff, Nicholas; Moore, Jonathan; Cook, Charis; Bastow, Ruth
2013-11-01
Despite the clear demand for open data sharing, its implementation within plant science is still limited. This is, at least in part, because open data-sharing raises several unanswered questions and challenges to current research practices. In this commentary, some of the challenges encountered by plant researchers at the bench when generating, interpreting, and attempting to disseminate their data have been highlighted. The difficulties involved in sharing sequencing, transcriptomics, proteomics, and metabolomics data are reviewed. The benefits and drawbacks of three data-sharing venues currently available to plant scientists are identified and assessed: (i) journal publication; (ii) university repositories; and (iii) community and project-specific databases. It is concluded that community and project-specific databases are the most useful to researchers interested in effective data sharing, since these databases are explicitly created to meet the researchers' needs, support extensive curation, and embody a heightened awareness of what it takes to make data reuseable by others. Such bottom-up and community-driven approaches need to be valued by the research community, supported by publishers, and provided with long-term sustainable support by funding bodies and government. At the same time, these databases need to be linked to generic databases where possible, in order to be discoverable to the majority of researchers and thus promote effective and efficient data sharing. As we look forward to a future that embraces open access to data and publications, it is essential that data policies, data curation, data integration, data infrastructure, and data funding are linked together so as to foster data access and research productivity.
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan; Goedecker Group Team
Based on Lennard-Jones, Silicon, Sodium-Chloride and Gold clusters, it was found that uphill barrier energies of transition states between directly connected minima tend to increase with increasing structural differences of the two minima. Based on this insight, it also turned out that post-processing minima-hopping data at a negligible computational cost makes it possible to obtain qualitative topological information on potential energy surfaces that can be stored in so-called qualitative connectivity databases. These qualitative connectivity databases are used for generating fingerprint disconnectivity graphs that give a first qualitative idea of the thermodynamic and kinetic properties of a system of interest. This research was supported by the NCCR MARVEL, funded by the Swiss National Science Foundation. Computer time was provided by the Swiss National Supercomputing Centre (CSCS) under Project ID No. s499.
Application of connectivity mapping in predictive toxicology based on gene-expression similarity.
Smalley, Joshua L; Gant, Timothy W; Zhang, Shu-Dong
2010-02-09
Connectivity mapping is the process of establishing connections between different biological states using gene-expression profiles or signatures. There are a number of applications but in toxicology the most pertinent is for understanding mechanisms of toxicity. In its essence the process involves comparing a query gene signature generated as a result of exposure of a biological system to a chemical to those in a database that have been previously derived. In the ideal situation the query gene-expression signature is characteristic of the event and will be matched to similar events in the database. Key criteria are therefore the means of choosing the signature to be matched and the means by which the match is made. In this article we explore these concepts with examples applicable to toxicology. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Besara, Rachel
2015-03-01
For years the cost of STEM databases has exceeded the rate of inflation. Libraries have reallocated funds for years to continue to provide support to their scientific communities, but at many institutions they are reaching a point where they are no longer able to provide access to many databases considered standard for supporting research. A possible or partial alleviation of this problem is the federal open access mandate. However, this shift challenges the current model of publishing and data management in the sciences. This talk will discuss these topics from the perspective of research libraries supporting physics and the STEM disciplines.
NASA Astrophysics Data System (ADS)
Kim, Woojin; Boonn, William
2010-03-01
Data mining of existing radiology and pathology reports within an enterprise health system can be used for clinical decision support, research, education, as well as operational analyses. In our health system, the database of radiology and pathology reports exceeds 13 million entries combined. We are building a web-based tool to allow search and data analysis of these combined databases using freely available and open source tools. This presentation will compare performance of an open source full-text indexing tool to MySQL's full-text indexing and searching and describe implementation procedures to incorporate these capabilities into a radiology-pathology search engine.
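The indexing comparison described above rests on the same core idea in both tools: an inverted index mapping terms to documents. A minimal sketch in Python of that idea (the report IDs and text below are invented for illustration; a production system would use MySQL FULLTEXT indexes or a dedicated engine such as Lucene):

```python
from collections import defaultdict

def build_index(reports):
    """Build a toy inverted index: term -> set of report IDs."""
    index = defaultdict(set)
    for report_id, text in reports.items():
        for term in text.lower().split():
            index[term.strip(".,;:")].add(report_id)
    return index

def search(index, query):
    """AND-match all query terms, as a boolean full-text search would."""
    term_sets = [index.get(t.lower(), set()) for t in query.split()]
    return set.intersection(*term_sets) if term_sets else set()

reports = {
    "rad-001": "CT chest shows pulmonary nodule in right upper lobe",
    "path-002": "Biopsy of pulmonary nodule, benign granuloma",
    "rad-003": "MRI brain unremarkable",
}
index = build_index(reports)
print(sorted(search(index, "pulmonary nodule")))
```

Real full-text engines add stemming, stop-word handling, and relevance ranking on top of this structure, which is where the performance comparison in the abstract becomes interesting.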
Developing hydrological monitoring networks with Arduino
NASA Astrophysics Data System (ADS)
Buytaert, Wouter; Vega, Andres; Villacis, Marcos; Moulds, Simon
2015-04-01
The open source hardware platform Arduino is very cost-effective and versatile for the development of sensor networks. Here we report on experiments on the use of Arduino-related technologies to develop and implement hydrological monitoring networks. Arduino Uno boards were coupled to a variety of commercially available hydrological sensors and programmed for automatic data collection. Tested sensors include water level, temperature, humidity, radiation, and precipitation. Our experiments show that most of the tested analogue sensors are quite straightforward to couple to Arduino-based data loggers, especially if the electronic characteristics of the sensor are available. However, some sensors have internal digital interfaces, which are more challenging to connect. Lastly, tipping bucket rain gauges prove the most challenging because of their very specific methodology, i.e. registration of bucket tips instead of measurements at regular intervals. The typically low data generation rate of hydrological instruments is very compatible with available technologies for wireless data transmission. Mesh networks such as XBee prove very convenient and robust for dispersed networks, while Wi-Fi is also an option for shorter distances and particular topographies. Lastly, the GSM shield of the Arduino can be used to transfer data to centralized databases. In regions where no mobile internet (i.e. 3G) connection is available, data transmission via text messages may be an option, depending on the bandwidth requirements.
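The tipping-bucket difficulty noted above is that the gauge produces events, not regular samples: on the logger an interrupt handler records a timestamp per bucket tip, and the timestamps are later aggregated into interval totals. A minimal sketch of that post-processing step (the 0.2 mm tip depth and the timestamps are assumed example values):

```python
def rainfall_per_interval(tip_times, tip_depth_mm=0.2, interval_s=300):
    """Aggregate bucket-tip timestamps (seconds) into per-interval rainfall.

    tip_depth_mm is the depth one tip represents, a calibration property of
    the gauge (0.2 mm is a common value, used here as an assumption).
    """
    totals = {}
    for t in tip_times:
        # Floor each tip into its 5-minute interval and add one tip's depth.
        bucket = int(t // interval_s) * interval_s
        totals[bucket] = totals.get(bucket, 0.0) + tip_depth_mm
    return totals

# Tip timestamps as an interrupt handler on the logger might record them:
tips = [12.0, 47.5, 130.2, 305.9, 310.4]
print(rainfall_per_interval(tips))
```

On the Arduino itself the equivalent would be an `attachInterrupt()` handler incrementing a counter; the aggregation can then happen on the board or after transmission.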
NSI customer service representatives and user support office: NASA Science Internet
NASA Technical Reports Server (NTRS)
1991-01-01
The NASA Science Internet (NSI) was established in 1987 to provide NASA's Offices of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.
Commutation circuit for an HVDC circuit breaker
Premerlani, William J.
1981-01-01
A commutation circuit for a high voltage DC circuit breaker incorporates a resistor capacitor combination and a charging circuit connected to the main breaker, such that a commutating capacitor is discharged in opposition to the load current to force the current in an arc after breaker opening to zero to facilitate arc interruption. In a particular embodiment, a normally open commutating circuit is connected across the contacts of a main DC circuit breaker to absorb the inductive system energy trapped by breaker opening and to limit recovery voltages to a level tolerable by the commutating circuit components.
A Self-Provisioning Mechanism in OpenStack for IoT Devices.
Solano, Antonio; Dormido, Raquel; Duro, Natividad; Sánchez, Juan Miguel
2016-08-17
The aim of this paper is to introduce a plug-and-play mechanism for an Internet of Things (IoT) device to instantiate a Software as a Service (SaaS) application in a private cloud, built up with OpenStack. The SaaS application is the digital avatar of a physical object connected to the Internet. As a proof of concept, a Vending Machine is retrofitted and connected to the Internet with an Arduino Open Hardware device. Once the self-configuration mechanism is completed, it is possible to order a product from a mobile communication device.
Coleman, Melinda A
2013-01-01
With marine protected areas being established worldwide there is a pressing need to understand how the physical setting in which these areas are placed influences patterns of dispersal and connectivity of important marine organisms. This is particularly critical for dynamic and complex nearshore marine environments where patterns of genetic structure of organisms are often chaotic and uncoupled from broad scale physical processes. This study determines the influence of habitat heterogeneity (presence of estuaries) on patterns of genetic structure and connectivity of the common kelp, Ecklonia radiata. There was no genetic differentiation of kelp between estuaries and the open coast and the presence of estuaries did not increase genetic differentiation among open coast populations. Similarly, there were no differences in level of inbreeding or genetic diversity between estuarine and open coast populations. The presence of large estuaries along rocky coastlines does not appear to influence genetic structure of this kelp and factors other than physical heterogeneity of habitat are likely more important determinants of regional connectivity. Marine reserves are currently lacking in this bioregion and may be designated in the future. Knowledge of the factors that influence important habitat forming organisms such as kelp contribute to informed and effective marine protected area design and conservation initiatives to maintain resilience of important marine habitats.
ERIC Educational Resources Information Center
Powell, Sarah R.; Stecker, Pamela M.
2014-01-01
This article describes data-based individualization (DBI) as a continuous process connecting assessment and intervention in mathematics for students with disabilities. DBI provides teachers with an evidence-based method for individualizing interventions for students who do not demonstrate adequate response. Assessment data gathered through the use…
ERIC Educational Resources Information Center
Rzepa, Henry S.
2016-01-01
Three new examples are presented illustrating three-dimensional chemical information searches of the Cambridge structure database (CSD) from which basic core concepts in organic and inorganic chemistry emerge. These include connecting the regiochemistry of aromatic electrophilic substitution with the geometrical properties of hydrogen bonding…
DOT National Transportation Integrated Search
2009-01-01
The Database demonstrates the unity and commonality of T-M but presents each one in its separate state. Yet in that process the full panoply of T-M is unfolded, including their shared and connected state. There are thousands of Transportation-Markings...
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Zhang, Shu-Dong; Gant, Timothy W
2009-07-31
Connectivity mapping is a process to recognize novel pharmacological and toxicological properties in small molecules by comparing their gene expression signatures with others in a database. A simple and robust method for connectivity mapping with increased specificity and sensitivity was recently developed, and its utility demonstrated using experimentally derived gene signatures. This paper introduces sscMap (statistically significant connections' map), a Java application designed to undertake connectivity mapping tasks using the recently published method. The software is bundled with a default collection of reference gene-expression profiles based on the publicly available dataset from the Broad Institute Connectivity Map 02, which includes data from over 7000 Affymetrix microarrays, for over 1000 small-molecule compounds, and 6100 treatment instances in 5 human cell lines. In addition, the application allows users to add their custom collections of reference profiles and is applicable to a wide range of other 'omics technologies. The utility of sscMap is twofold. First, it serves to make statistically significant connections between a user-supplied gene signature and the 6100 core reference profiles based on the Broad Institute expanded dataset. Second, it allows users to apply the same improved method to custom-built reference profiles which can be added to the database for future referencing. The software can be freely downloaded from http://purl.oclc.org/NET/sscMap.
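The matching step at the heart of connectivity mapping can be sketched with a toy signed-rank connection score. This is an illustration of the general principle only, not the published sscMap statistic; the gene names and expression values below are invented:

```python
def connection_score(query_signature, reference_profile):
    """Toy connection score in the spirit of connectivity mapping.

    query_signature: dict gene -> +1 (up-regulated) or -1 (down-regulated).
    reference_profile: dict gene -> differential-expression value.
    Returns a value in [-1, 1]: +1 means the query genes sit exactly where
    the signature predicts in the reference ranking, -1 the reverse.
    """
    # Rank reference genes from most down-regulated to most up-regulated.
    ordered = sorted(reference_profile, key=reference_profile.get)
    n = len(ordered)
    # Centre ranks so the middle of the ranking scores ~0.
    rank = {g: i - (n - 1) / 2 for i, g in enumerate(ordered)}
    raw = sum(sign * rank[g] for g, sign in query_signature.items() if g in rank)
    # Normalise by the largest attainable signed-rank sum for this query size.
    max_raw = sum(sorted((abs(r) for r in rank.values()),
                         reverse=True)[:len(query_signature)])
    return raw / max_raw if max_raw else 0.0

ref = {"A": 2.5, "B": -1.8, "C": 0.1, "D": -0.4, "E": 1.2}
query = {"A": +1, "B": -1}  # signature concordant with ref
print(connection_score(query, ref))  # -> 1.0
```

Real implementations add a null distribution (e.g. random signatures) to attach p-values to such scores, which is what makes the connections "statistically significant".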
Neural Network Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi
2003-01-01
Full-scale flight-test pilot floor vibration is modeled using neural networks and full-scale wind tunnel test data for low speed level flight conditions. Neural network connections between the wind tunnel test data and the three flight test pilot vibration components (vertical, lateral, and longitudinal) are studied. Two full-scale UH-60A Black Hawk databases are used. The first database is the NASA/Army UH-60A Airloads Program flight test database. The second database is the UH-60A rotor-only wind tunnel database that was acquired in the NASA Ames 80- by 120-Foot Wind Tunnel with the Large Rotor Test Apparatus (LRTA). Using neural networks, the flight-test pilot vibration is modeled using the wind tunnel rotating system hub accelerations, and separately, using the hub loads. The results show that the wind tunnel rotating system hub accelerations and the operating parameters can represent the flight test pilot vibration. The six components of the wind tunnel N/rev balance-system hub loads and the operating parameters can also represent the flight test pilot vibration. The present neural network connections can significantly increase the value of wind tunnel testing.
Open Access Internet Resources for Nano-Materials Physics Education
NASA Astrophysics Data System (ADS)
Moeck, Peter; Seipel, Bjoern; Upreti, Girish; Harvey, Morgan; Garrick, Will
2006-05-01
Because a great deal of nano-material science and engineering relies on crystalline materials, materials physicists have to provide their own specific contributions to the National Nanotechnology Initiative. Here we briefly review two freely accessible internet-based crystallographic databases, the Nano-Crystallography Database (http://nanocrystallography.research.pdx.edu) and the Crystallography Open Database (http://crystallography.net). Information on over 34,000 full structure determinations is stored in these two databases in the Crystallographic Information File format. The availability of such crystallographic data on the internet in a standardized format allows for all kinds of web-based crystallographic calculations and visualizations. Two examples dealt with in this paper are: interactive crystal structure visualizations in three dimensions, and calculations of lattice-fringe fingerprints for the identification of unknown nanocrystals from their atomic-resolution transmission electron microscopy images.
Jantzen, Rodolphe; Rance, Bastien; Katsahian, Sandrine; Burgun, Anita; Looten, Vincent
2018-01-01
Open data, available largely and with minimal constraints to the general public and journalists, are needed to help rebuild trust between citizens and the health system. By opening data, we can expect to increase democratic accountability and the self-empowerment of citizens. This article aims to assess the quality and reusability of the Transparency - Health database with regard to the FAIR principles. More specifically, we examine the quality of the identity data of the French medical doctors in the Transp-db. This study shows that the quality of the data in the Transp-db does not allow the identity of those who benefit from an advantage or remuneration to be confirmed with certainty, noticeably reducing the impact of the open data effort.
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
A UNIMARC Bibliographic Format Database for ABCD
ERIC Educational Resources Information Center
Megnigbeto, Eustache
2012-01-01
Purpose: ABCD is a web-based open and free software suite for library management derived from the UNESCO CDS/ISIS software technology. The first version was launched officially in December 2009 with a MARC 21 bibliographic format database. This paper aims to detail the building of the UNIMARC bibliographic format database for ABCD.…
Development of a data entry auditing protocol and quality assurance for a tissue bank database.
Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L
2012-03-01
Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to efficiently execute the process, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donor's clinical follow-up details, to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated in other databases where similar functionality is required.
DABAM: an open-source database of X-ray mirrors metrology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele
2016-04-20
An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror heights and slopes profiles) that can be used with simulation tools for calculating the effects of optical surface errors in the performances of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide to the users of simulation tools the data of real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software is provided to allow simple access and processing of these data, calculate the most usual statistical parameters, and also include the option of creating input files for most used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database.
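The "most usual statistical parameters" in mirror metrology typically include the RMS height and RMS slope errors of a profile, with slopes obtained by differentiating the height profile. A minimal sketch of that calculation, using a synthetic sinusoidal figure error rather than real DABAM data:

```python
import math

def slope_profile(x, heights):
    """Finite-difference slopes (rad) from a 1-D mirror height profile (m)."""
    return [(heights[i + 1] - heights[i]) / (x[i + 1] - x[i])
            for i in range(len(x) - 1)]

def rms(values):
    """Root-mean-square of a list of values."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# Synthetic 0.1 m long profile with a 5 nm sinusoidal figure error
# (assumed values for illustration, not a real DABAM profile):
n, length, amp = 200, 0.1, 5e-9
x = [i * length / (n - 1) for i in range(n)]
h = [amp * math.sin(2 * math.pi * xi / length) for xi in x]
slopes = slope_profile(x, h)
print(f"RMS height error: {rms(h):.2e} m, RMS slope error: {rms(slopes):.2e} rad")
```

For a pure sinusoid the RMS height is amp/√2 and the RMS slope is (2π·amp/length)/√2, which gives a quick sanity check on the numerics before applying the same code to measured profiles.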
DABAM: an open-source database of X-ray mirrors metrology
Sanchez del Rio, Manuel; Bianchi, Davide; Cocco, Daniele; Glass, Mark; Idir, Mourad; Metz, Jim; Raimondi, Lorenzo; Rebuffi, Luca; Reininger, Ruben; Shi, Xianbo; Siewert, Frank; Spielmann-Jaeggi, Sibylle; Takacs, Peter; Tomasset, Muriel; Tonnessen, Tom; Vivo, Amparo; Yashchuk, Valeriy
2016-01-01
An open-source database containing metrology data for X-ray mirrors is presented. It makes available metrology data (mirror heights and slopes profiles) that can be used with simulation tools for calculating the effects of optical surface errors in the performances of an optical instrument, such as a synchrotron beamline. A typical case is the degradation of the intensity profile at the focal position in a beamline due to mirror surface errors. This database for metrology (DABAM) aims to provide to the users of simulation tools the data of real mirrors. The data included in the database are described in this paper, with details of how the mirror parameters are stored. An accompanying software is provided to allow simple access and processing of these data, calculate the most usual statistical parameters, and also include the option of creating input files for most used simulation codes. Some optics simulations are presented and discussed to illustrate the real use of the profiles from the database. PMID:27140145
BIRS – Bioterrorism Information Retrieval System
Tewari, Ashish Kumar; Rashi; Wadhwa, Gulshan; Sharma, Sanjeev Kumar; Jain, Chakresh Kumar
2013-01-01
Bioterrorism is the intended use of pathogenic strains of microbes to widen terror in a population. There is a definite need to promote research for development of vaccines, therapeutics and diagnostic methods as a part of preparedness to any bioterror attack in the future. BIRS is an open-access database of collective information on the organisms related to bioterrorism. The architecture of the database utilizes current open-source technology, viz. PHP ver. 5.3.19, MySQL and IIS server under the Windows platform, for database designing. The database stores information on literature, generic information and unique pathways of about 10 microorganisms involved in bioterrorism. This may serve as a collective repository to accelerate the drug discovery and vaccine designing process against such bioterrorist agents (microbes). The available data has been validated from various online resources and literature mining in order to provide the user with a comprehensive information system. Availability: The database is freely available at http://www.bioterrorism.biowaves.org PMID:23390356
Kuchinke, Wolfgang; Krauth, Christian; Bergmann, René; Karakoyun, Töresin; Woollard, Astrid; Schluender, Irene; Braasch, Benjamin; Eckert, Martin; Ohmann, Christian
2016-07-07
At an unprecedented rate, data in the life sciences are generated and stored in many different databases. An ever increasing part of these data is human health data and therefore falls under data protected by legal regulations. As part of the BioMedBridges project, which created infrastructures that connect more than 10 ESFRI research infrastructures (RI), the legal and ethical prerequisites of data sharing were examined employing a novel and pragmatic approach. We employed concepts from computer science to create legal requirement clusters that enable legal interoperability between databases for the areas of data protection, data security, Intellectual Property (IP) and security of biosample data. We analysed and extracted access rules and constraints from all data providers (databases) involved in the building of data bridges, covering many of Europe's most important databases. These requirement clusters were applied to five usage scenarios representing the data flow in different data bridges: Image bridge, Phenotype data bridge, Personalised medicine data bridge, Structural data bridge, and Biosample data bridge. A matrix was built to relate the important concepts from data protection regulations (e.g. pseudonymisation, identifiability, access control, consent management) with the results of the requirement clusters. An interactive user interface for querying the matrix for requirements necessary for compliant data sharing was created. To guide researchers through legal requirements without the need for legal expert knowledge, an interactive tool, the Legal Assessment Tool (LAT), was developed. LAT provides researchers interactively with a selection process to characterise the involved types of data and databases, and provides suitable requirements and recommendations for concrete data access and sharing situations. The results provided by LAT are based on an analysis of the data access and sharing conditions for different kinds of data of major databases in Europe.
Data sharing for research purposes must be opened for human health data and LAT is one of the means to achieve this aim. In summary, LAT provides requirements in an interactive way for compliant data access and sharing with appropriate safeguards, restrictions and responsibilities by introducing a culture of responsibility and data governance when dealing with human data.
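The scenario-by-concept matrix described above lends itself to a simple lookup structure. A hypothetical fragment in Python (the scenario names, concepts, and requirement texts are invented placeholders, not LAT's actual content):

```python
# Matrix keyed by (usage scenario, data-protection concept) -> requirement.
# All entries are illustrative placeholders.
matrix = {
    ("phenotype_bridge", "pseudonymisation"): "required before transfer",
    ("phenotype_bridge", "consent_management"): "broad consent accepted",
    ("image_bridge", "pseudonymisation"): "required; strip patient headers",
    ("image_bridge", "access_control"): "role-based access required",
}

def requirements_for(scenario, matrix):
    """Return all requirements recorded for one data-bridge scenario."""
    return {concept: req
            for (sc, concept), req in matrix.items() if sc == scenario}

print(requirements_for("image_bridge", matrix))
```

An interactive tool like LAT would sit on top of such a structure, asking the researcher questions to identify the scenario and then presenting the matching requirement cluster.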
Publishing Linked Open Data for Physical Samples - Lessons Learned
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.
2016-12-01
Most data and information about physical samples and associated sampling features currently reside in relational databases. Integrating common concepts from various databases has motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), RDF Query Language (SPARQL), and Web Ontology Language (OWL). The goal of our work is threefold: to evaluate and select ontologies at different granularities for common concepts; to establish best practices and develop a generic methodology for publishing physical sample data stored in relational databases as Linked Open Data; and to reuse standard community vocabularies from the International Commission on Stratigraphy (ICS), Global Volcanism Program (GVP), General Bathymetric Chart of the Oceans (GEBCO), and others. Our work leverages developments in the EarthCube GeoLink project and the Interdisciplinary Earth Data Alliance (IEDA) facility for modeling and extracting physical sample data stored in relational databases. Reusing ontologies developed by GeoLink and IEDA has facilitated discovery and integration of data and information across multiple collections including the USGS National Geochemical Database (NGDB), System for Earth Sample Registration (SESAR), and Index to Marine & Lacustrine Geological Samples (IMLGS). We have evaluated, tested, and deployed Linked Open Data tools including Morph, Virtuoso Server, LodView, LodLive, and YASGUI for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Using persistent identifiers such as Open Researcher & Contributor IDs (ORCIDs) and International Geo Sample Numbers (IGSNs) at the record level makes it possible for other repositories to link related resources such as persons, datasets, documents, expeditions, and awards to samples, features, and collections.
This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
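The core Linked Open Data idea above, records as subject-predicate-object triples queried by pattern, can be illustrated without any RDF tooling. In this sketch the predicate names and the IGSN are invented placeholders, not the actual GeoLink/IEDA vocabulary; a real deployment would use RDF serializations and SPARQL against a triplestore.

```python
# Plain-Python triple store sketch. All identifiers below are
# illustrative placeholders, not real IGSNs, ORCIDs, or vocabulary terms.
triples = {
    ("igsn:ABC123", "rdf:type", "ex:PhysicalSample"),
    ("igsn:ABC123", "ex:hasMaterial", "basalt"),
    ("igsn:ABC123", "ex:collectedBy", "orcid:0000-0001-2345-6789"),
}

def query(pattern):
    """Match a (s, p, o) pattern; None acts as a SPARQL-style variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?s WHERE { ?s ex:hasMaterial "basalt" }
basalt_samples = [s for s, _, _ in query((None, "ex:hasMaterial", "basalt"))]
```

Because every sample, person, and collection is named by a persistent identifier, other repositories can mint triples pointing at the same subjects, which is exactly what enables cross-repository linking.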
Hürlimann, Eveline; Schur, Nadine; Boutsika, Konstantina; Stensgaard, Anna-Sofie; Laserna de Himpsl, Maiti; Ziegelbauer, Kathrin; Laizer, Nassor; Camenzind, Lukas; Di Pasquale, Aurelio; Ekpo, Uwem F.; Simoonga, Christopher; Mushinge, Gabriel; Saarnak, Christopher F. L.; Utzinger, Jürg; Kristensen, Thomas K.; Vounatsou, Penelope
2011-01-01
Background: After many years of general neglect, interest has grown and efforts are under way for the mapping, control, surveillance, and eventual elimination of neglected tropical diseases (NTDs). Disease risk estimates are a key feature for targeting control interventions, and serve as a benchmark for monitoring and evaluation. What is currently missing is a georeferenced global database for NTDs that provides open access to the available survey data, is constantly updated, and can be utilized by researchers, disease control managers, and other relevant stakeholders. We describe the steps taken toward the development of such a database that can be employed for spatial disease risk modeling and control of NTDs. Methodology: With an emphasis on schistosomiasis in Africa, we systematically searched the literature (peer-reviewed journals and ‘grey literature’) and contacted Ministries of Health and research institutions in schistosomiasis-endemic countries for location-specific prevalence data and survey details (e.g., study population, year of survey and diagnostic techniques). The data were extracted, georeferenced, and stored in a MySQL database with a web interface allowing free database access and data management. Principal Findings: At the beginning of 2011, our database contained more than 12,000 georeferenced schistosomiasis survey locations from 35 African countries, available at http://www.gntd.org. The database is currently being expanded into a global repository including a host of other NTDs, e.g. soil-transmitted helminthiasis and leishmaniasis. Conclusions: An open-access, spatially explicit NTD database offers unique opportunities for disease risk modeling, targeting of control interventions, disease monitoring, and surveillance. Moreover, it allows for detailed geostatistical analyses of disease distribution in space and time.
With an initial focus on schistosomiasis in Africa, we demonstrate the proof-of-concept that the establishment and running of a global NTD database is feasible and should be expanded without delay. PMID:22180793
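A georeferenced survey table of the kind described above can be sketched in a few lines of SQL. SQLite stands in for MySQL here so the example is self-contained, and the column names are assumptions, not the actual GNTD schema.

```python
# Sketch of a georeferenced NTD survey table; SQLite replaces MySQL
# for a self-contained illustration, and the schema is hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE survey (
        location   TEXT,
        country    TEXT,
        latitude   REAL,
        longitude  REAL,
        year       INTEGER,
        examined   INTEGER,
        positive   INTEGER
    )""")
con.executemany(
    "INSERT INTO survey VALUES (?, ?, ?, ?, ?, ?, ?)",
    [("Village A", "Zambia", -15.4, 28.3, 2008, 120, 30),
     ("Village B", "Zambia", -14.9, 27.1, 2010, 200, 90)],
)

# Prevalence per location: the key quantity for spatial risk modeling.
rows = con.execute(
    "SELECT location, CAST(positive AS REAL) / examined FROM survey "
    "WHERE country = ? ORDER BY location", ("Zambia",)
).fetchall()
```

The latitude/longitude columns are what make the data "spatially explicit": a geostatistical model can join these coordinates against environmental covariates for risk mapping.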
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, and significant maintenance and tuning requirements often make databases unattractive in settings characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
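The essence of the Upload-Query-Share workflow is that a table is created directly from uploaded rows, with no explicit schema design step. The sketch below illustrates the idea only; the `upload` function and its behavior are a simplification, not SQLShare's actual interface.

```python
# Hypothetical Upload-Query sketch: schema is inferred from the data,
# so the user never writes a CREATE TABLE statement themselves.
import sqlite3

def upload(con, name, rows):
    """Create a table from the first row's keys -- schema is inferred."""
    cols = list(rows[0])
    con.execute(f"CREATE TABLE {name} ({', '.join(cols)})")
    con.executemany(
        f"INSERT INTO {name} VALUES ({', '.join('?' for _ in cols)})",
        [tuple(r[c] for c in cols) for r in rows],
    )

con = sqlite3.connect(":memory:")
# "Upload" step: rows as parsed from a CSV file, for example.
upload(con, "casts", [
    {"station": "P1", "depth_m": 10, "temp_c": 11.2},
    {"station": "P1", "depth_m": 50, "temp_c": 8.4},
])
# "Query" step: plain SQL over the uploaded data.
mean_temp = con.execute("SELECT AVG(temp_c) FROM casts").fetchone()[0]
```

The "Share" step would then expose the named table and saved queries to collaborators through a web service, which is where the Database-as-a-Service part comes in.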
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
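PSOM's core idea, jobs described by a plain data structure from which the engine derives the execution order, can be sketched in Python. The field names below loosely mirror PSOM's Matlab structure and the job names are invented; this is an illustration of the dependency-resolution principle, not the actual PSOM engine.

```python
# Hypothetical pipeline: each job lists its input and output files, and
# the engine infers job ordering from which job produces which file.
pipeline = {
    "motion":  {"files_in": ["raw.nii"], "files_out": ["mc.nii"]},
    "smooth":  {"files_in": ["mc.nii"],  "files_out": ["sm.nii"]},
    "analyze": {"files_in": ["sm.nii"],  "files_out": ["stats.mat"]},
}

def execution_order(jobs):
    """Topological sort: a job waits for the producers of its inputs."""
    producer = {f: name for name, j in jobs.items() for f in j["files_out"]}
    order, done = [], set()
    def visit(name):
        if name in done:
            return
        for f in jobs[name]["files_in"]:
            if f in producer:          # external inputs have no producer
                visit(producer[f])
        done.add(name)
        order.append(name)
    for name in jobs:
        visit(name)
    return order

order = execution_order(pipeline)
```

A full engine would additionally run independent branches of this order in parallel and skip any job whose outputs are newer than its inputs, which is how PSOM restarts only the stale parts of an analysis.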
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575
NASA Astrophysics Data System (ADS)
Smith, Edward M.; Wandtke, John; Robinson, Arvin E.
1999-07-01
The selection criteria for the archive were based on the objectives of the Medical Information, Communication and Archive System (MICAS), a multi-vendor incremental approach to PACS. These objectives include interoperability between all components; seamless integration of the Radiology Information System (RIS) with MICAS and, eventually, other hospital databases; demonstrated DICOM compliance of all components prior to acceptance; and automated workflow that can be programmed to meet changes in the healthcare environment. The long-term multi-modality archive is being implemented in three or more phases, with the first phase designed to provide a 12- to 18-month storage solution. This decision was made because the cost per GB of storage is rapidly decreasing and the speed at which data can be retrieved is increasing with time. The open solution selected allows incorporation of leading-edge, 'best of breed' hardware and software and provides maximum flexibility of workflow both within and outside of radiology. The selected solution is media independent, supports multiple jukeboxes, provides expandable storage capacity, and will provide redundancy and fault tolerance at minimal cost. Some of the required attributes of the archive include a scalable archive strategy, a virtual image database with global query, and an object-oriented database. The selection process took approximately 10 months, with Cemax-Icon being the vendor selected. Prior to signing a purchase order, Cemax-Icon performed a site survey, agreed upon the acceptance test protocol, and provided a written guarantee of connectivity between their archive and the imaging modalities and other MICAS components.
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
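The database search principle described above, score each candidate peptide's theoretical fragment masses against the observed spectrum and keep the best match, can be shown with a toy shared-peak scorer. Real engines of the kind SearchGUI wraps use far richer scoring models; the masses and sequences below are invented for illustration.

```python
# Toy database-search scorer: count theoretical fragment masses that
# match an observed peak within a tolerance. Masses are hypothetical.
def shared_peaks(observed, theoretical, tol=0.5):
    """Count theoretical fragment masses matched by an observed peak."""
    return sum(
        any(abs(o - t) <= tol for o in observed) for t in theoretical
    )

spectrum = [147.1, 204.1, 261.1, 376.2]       # observed peak m/z values
candidates = {
    "PEPTIDE": [147.11, 204.13, 261.16, 376.19],  # invented fragments
    "DECOYPE": [101.0, 188.0, 333.3, 410.4],
}
best = max(candidates, key=lambda p: shared_peaks(spectrum, candidates[p]))
```

Decoy sequences like the second candidate are what search engines use to estimate false discovery rates: a good match to a decoy signals that the score threshold is too permissive.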
HEDD: Human Enhancer Disease Database
Wang, Zhen; Zhang, Quanwei; Zhang, Wen; Lin, Jhih-Rong; Cai, Ying; Mitra, Joydeep
2018-01-01
Enhancers, as specialized genomic cis-regulatory elements, activate transcription of their target genes and play an important role in the pathogenesis of many human complex diseases. Despite their recent systematic identification in the human genome, there is currently an urgent need for comprehensive annotation databases of human enhancers with a focus on their disease connections. In response, we built the Human Enhancer Disease Database (HEDD) to facilitate studies of enhancers and their potential roles in human complex diseases. HEDD currently provides comprehensive genomic information for ∼2.8 million human enhancers identified by ENCODE, FANTOM5 and RoadMap, with disease association scores based on enhancer–gene and gene–disease connections. It also provides Web-based analytical tools to visualize enhancer networks and score enhancers given a set of selected genes in a specific gene network. HEDD is freely accessible at http://zdzlab.einstein.yu.edu/1/hedd.php. PMID:29077884
Large-scale extraction of brain connectivity from the neuroscientific literature
Richardet, Renaud; Chappelier, Jean-Cédric; Telefont, Martin; Hill, Sean
2015-01-01
Motivation: In neuroscience, as in many other scientific domains, the primary form of knowledge dissemination is through published articles. One challenge for modern neuroinformatics is finding methods to make the knowledge from the tremendous backlog of publications accessible for search, analysis and the integration of such data into computational models. A key example of this is metascale brain connectivity, where results are not reported in a normalized repository. Instead, these experimental results are published in natural language, scattered among individual scientific publications. This lack of normalization and centralization hinders the large-scale integration of brain connectivity results. In this article, we present text-mining models to extract and aggregate brain connectivity results from 13.2 million PubMed abstracts and 630 216 full-text publications related to neuroscience. The brain regions are identified with three different named entity recognizers (NERs) and then normalized against two atlases: the Allen Brain Atlas (ABA) and the atlas from the Brain Architecture Management System (BAMS). We then use three different extractors to assess inter-region connectivity. Results: NERs and connectivity extractors are evaluated against a manually annotated corpus. The complete in litero extraction models are also evaluated against in vivo connectivity data from ABA with an estimated precision of 78%. The resulting database contains over 4 million brain region mentions and over 100 000 (ABA) and 122 000 (BAMS) potential brain region connections. This database drastically accelerates connectivity literature review, by providing a centralized repository of connectivity data to neuroscientists. Availability and implementation: The resulting models are publicly available at github.com/BlueBrain/bluima. Contact: renaud.richardet@epfl.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25609795
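The extraction flow described above, recognize brain region mentions, normalize them against an atlas, then assert potential connections, can be sketched with the simplest co-occurrence variant. The atlas entries and identifiers below are illustrative stand-ins; the paper's NERs and extractors are far more complete than substring matching.

```python
# Toy connectivity extractor: normalize region mentions against a tiny
# hypothetical atlas and pair up regions found in the same sentence.
ATLAS = {
    "hippocampus": "ABA:HIP",        # invented atlas identifiers
    "entorhinal cortex": "ABA:ENT",
    "thalamus": "ABA:TH",
}

def extract_pairs(sentence):
    """Return sorted pairs of atlas IDs co-mentioned in one sentence."""
    found = sorted(
        {aid for name, aid in ATLAS.items() if name in sentence.lower()}
    )
    return [(a, b) for i, a in enumerate(found) for b in found[i + 1:]]

pairs = extract_pairs(
    "Projections from the entorhinal cortex terminate in the hippocampus."
)
```

Aggregating such pairs over millions of abstracts, with real NER and relation extraction in place of substring matching, is what yields a centralized repository of candidate region-to-region connections.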
Koush, Yury; Ashburner, John; Prilepin, Evgeny; Sladky, Ronald; Zeidman, Peter; Bibikov, Sergei; Scharnowski, Frank; Nikonorov, Artem; De Ville, Dimitri Van
2017-08-01
Neurofeedback based on real-time functional magnetic resonance imaging (rt-fMRI) is a novel and rapidly developing research field. It allows for training of voluntary control over localized brain activity and connectivity and has demonstrated promising clinical applications. Because of the rapid technical developments of MRI techniques and the availability of high-performance computing, new methodological advances in rt-fMRI neurofeedback become possible. Here we outline the core components of a novel open-source neurofeedback framework, termed Open NeuroFeedback Training (OpenNFT), which efficiently integrates these new developments. This framework is implemented using Python and Matlab source code to allow for diverse functionality, high modularity, and rapid extendibility of the software depending on the user's needs. In addition, it provides an easy interface to the functionality of Statistical Parametric Mapping (SPM) that is also open-source and one of the most widely used fMRI data analysis software. We demonstrate the functionality of our new framework by describing case studies that include neurofeedback protocols based on brain activity levels, effective connectivity models, and pattern classification approaches. This open-source initiative provides a suitable framework to actively engage in the development of novel neurofeedback approaches, so that local methodological developments can be easily made accessible to a wider range of users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Data mining the EXFOR database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David A.; Hirdt, John; Herman, Michal
2013-12-13
The EXFOR database contains the largest collection of experimental nuclear reaction data available, as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as: 1) what observables are being used as reference measurements by the experimental community? 2) are these observables given the attention needed by various standards organisations? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction reference standards.
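The graph analysis outlined above can be sketched with plain adjacency sets: high-degree nodes suggest community reference measurements (question 1), and a reachability check finds observables disconnected from them (question 3). The node names below are illustrative reaction labels, not actual EXFOR dataset identifiers.

```python
# Toy observable graph in the spirit of the EXFOR analysis; the edges
# are invented examples of observables linked by shared measurements.
from collections import defaultdict

edges = [("U235(n,f)", "Au197(n,g)"), ("U235(n,f)", "B10(n,a)"),
         ("U235(n,f)", "H1(n,el)"), ("Fe56(n,g)", "Au197(n,g)")]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Question 1: the highest-degree node is a candidate reference standard.
hub = max(adj, key=lambda n: len(adj[n]))

# Question 3: which observables are unreachable from that hub?
seen, stack = set(), [hub]
while stack:
    n = stack.pop()
    if n not in seen:
        seen.add(n)
        stack.extend(adj[n] - seen)
isolated = set(adj) - seen
```

On the real EXFOR graph the same two computations, degree centrality and connected components, are what identify de facto reference measurements and the observable classes cut off from them.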
Knudsen, Keith L.; Noller, Jay S.; Sowers, Janet M.; Lettis, William R.
1997-01-01
This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There are no paper maps included in the Open-File report. The report does include, however, PostScript plot files containing the images of the geologic map sheets with explanations, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously unpublished data and new mapping by the authors, represents the general distribution of surficial deposits in the San Francisco bay region. Together with the accompanying text file (sf_geo.txt or sf_geo.pdf), it provides current information on Quaternary geology and liquefaction susceptibility of the San Francisco, California, 1:100,000 quadrangle. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller. The content and character of the database, as well as three methods of obtaining the database, are described below.
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV).
Lefkowitz, Elliot J; Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-04
The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
29 CFR 1915.163 - Ship's piping systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... are taken: (1) The isolation and shutoff valves connecting the dead system with the live system or... welded instead of bolted, at least two isolation and shutoff valves connecting the dead system with the...) Drain connections to the atmosphere on all of the dead interconnecting systems shall be opened for...
29 CFR 1915.163 - Ship's piping systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... are taken: (1) The isolation and shutoff valves connecting the dead system with the live system or... welded instead of bolted, at least two isolation and shutoff valves connecting the dead system with the...) Drain connections to the atmosphere on all of the dead interconnecting systems shall be opened for...
29 CFR 1915.163 - Ship's piping systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... are taken: (1) The isolation and shutoff valves connecting the dead system with the live system or... welded instead of bolted, at least two isolation and shutoff valves connecting the dead system with the...) Drain connections to the atmosphere on all of the dead interconnecting systems shall be opened for...
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
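The queue management such a server needs can be sketched as follows: the simulation domain is split into small tasks, leased to browser nodes, and re-queued if a volunteer disappears before reporting back. The class and task names below are an illustration of the pattern, not the paper's actual implementation (which uses JavaScript clients and a relational database).

```python
# Hypothetical volunteer-computing task queue: tasks are leased to
# browser nodes and recovered when a volunteer leaves mid-computation.
from collections import deque

class TaskQueue:
    def __init__(self, tasks):
        self.pending = deque(tasks)   # subdomains awaiting computation
        self.leased = {}              # task -> volunteer node id
        self.results = {}             # task -> computed value

    def lease(self, node_id):
        """Hand the next subdomain to a volunteer node."""
        if not self.pending:
            return None
        task = self.pending.popleft()
        self.leased[task] = node_id
        return task

    def complete(self, task, value):
        self.leased.pop(task, None)
        self.results[task] = value

    def requeue_lost(self, node_id):
        """A volunteer closed the page: return its tasks to the queue."""
        lost = [t for t, n in self.leased.items() if n == node_id]
        for t in lost:
            del self.leased[t]
            self.pending.append(t)

q = TaskQueue(["tile-1", "tile-2"])
t = q.lease("browser-A")
q.requeue_lost("browser-A")          # volunteer left without finishing
t2 = q.lease("browser-B")
q.complete(t2, 0.42)
```

Fault tolerance through re-queuing is essential in this setting because browser volunteers, unlike cluster nodes, may vanish at any moment without notice.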
Orofacial manifestations of scleroderma. A literature review.
Hadj Said, M; Foletti, J M; Graillon, N; Guyot, L; Chossegros, C
2016-11-01
Scleroderma is a rare disease of the connective tissue (50 to 200 patients per 1 million people; 60,000 patients in France). We conducted a literature review of the orofacial manifestations of scleroderma, which have been little studied. The 45 articles found in 6 different databases using the keywords "scleroderma", "systemic sclerosis", "oral medicine" and "face", published between 1944 and 2016, were selected, for a total of 328 patients. A total of 1187 orofacial manifestations of scleroderma were identified, occurring mainly in women (84.5%) with a mean age of 40.2 years, on average 10 years after the first manifestation of the disease. The main ones were limitation of mouth opening (69.8%), widening of the periodontal ligament (67.3%), xerostomia (63.4%), telangiectasia (36.2%) and bone lesions (34.5%). Dental root resorptions and pulp and nose calcifications were also reported, but with no evident link to scleroderma. Orofacial manifestations of scleroderma are probably more common than reported. They mostly affect women with a mean age of 40. The most common oral manifestations are limitation of mouth opening, widening of the periodontal ligament and xerostomia. Because of the handicap they may cause, these manifestations must be detected early in order to prevent functional impairment and dental and periodontal lesions. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
ERIC Educational Resources Information Center
Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth
2010-01-01
This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and reported to have potential value for detecting pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code, is provided. In addition, several potential benefits from the public heart sound database are discussed.
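A common first step in the segmentation methods the database supports is an energy envelope with a threshold to locate sound lobes. The sketch below shows only that simplest variant on an invented signal; Challenge entries use Shannon energy, hidden Markov models, and other far more robust techniques.

```python
# Toy heart sound segmentation sketch: a moving-average energy envelope
# followed by thresholding. The signal values are invented.
def envelope(signal, win=3):
    """Moving average of squared samples (simple energy envelope)."""
    e = [s * s for s in signal]
    half = win // 2
    return [
        sum(e[max(0, i - half): i + half + 1]) / win
        for i in range(len(e))
    ]

def active_regions(env, thresh):
    """Return (start, end) index pairs where the envelope exceeds thresh."""
    regions, start = [], None
    for i, v in enumerate(env + [0.0]):  # sentinel closes a trailing region
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            regions.append((start, i))
            start = None
    return regions

sig = [0.0, 0.0, 0.9, 1.0, 0.8, 0.0, 0.0, 0.0, 0.7, 0.9, 0.0, 0.0]
regions = active_regions(envelope(sig), thresh=0.2)
```

Each detected region would then be classified as a heart sound state (S1, systole, S2, diastole) by the downstream segmentation model, which is the part the Challenge actually evaluates.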
Open Core Data: Connecting scientific drilling data to scientists and community data resources
NASA Astrophysics Data System (ADS)
Fils, D.; Noren, A. J.; Lehnert, K.; Diver, P.
2016-12-01
Open Core Data (OCD) is an innovative, efficient, and scalable infrastructure for data generated by scientific drilling and coring to improve discoverability, accessibility, citability, and preservation of data from the oceans and continents. OCD is building on existing community data resources that manage, store, publish, and preserve scientific drilling data, filling a critical void that currently prevents linkages between these and other data systems and tools to realize the full potential of data generated through drilling and coring. We are developing this functionality through Linked Open Data (LOD) and semantic patterns that enable data access through the use of community ontologies such as GeoLink (geolink.org, an EarthCube Building Block), a collection of protocols, formats and vocabularies from a set of participating geoscience repositories. Common shared concepts of classes such as cruise, dataset, person and others allow easier resolution of common references through shared resource IDs. These graphs are then made available via SPARQL as well as incorporated into web pages following schema.org approaches. Additionally the W3C PROV vocabulary is under evaluation for use for documentation of provenance. Further, the application of persistent identifiers for samples (IGSNs); datasets, expeditions, and projects (DOIs); and people (ORCIDs), combined with LOD approaches, provides methods to resolve and incorporate metadata and datasets. Application Program Interfaces (APIs) complement these semantic approaches to the OCD data holdings. APIs are exposed following the Swagger guidelines (swagger.io) and will be evolved into the OpenAPI (openapis.org) approach. Currently APIs are in development for the NSF funded Flyover Country mobile geoscience app (fc.umn.edu), the Neotoma Paleoecology Database (neotomadb.org), Magnetics Information Consortium (MagIC; earthref.org/MagIC), and other community tools and data systems, as well as for internal OCD use.
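The schema.org approach mentioned above amounts to embedding JSON-LD records that tie persistent identifiers together. The sketch below shows the shape of such a record; the DOI, ORCID, and IGSN values are illustrative placeholders, not real OCD identifiers.

```python
# Sketch of a schema.org-style JSON-LD description for a drilling
# dataset. All identifiers are invented for illustration.
import json

record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "@id": "https://doi.org/10.xxxx/example",            # placeholder DOI
    "name": "Core section geochemistry, Expedition 999",
    "creator": {
        "@type": "Person",
        "@id": "https://orcid.org/0000-0001-2345-6789",  # placeholder ORCID
        "name": "A. Researcher",
    },
    "isBasedOn": "igsn:ABC123",                          # placeholder IGSN
}

jsonld = json.dumps(record, indent=2)
```

Because the `@id` values are resolvable persistent identifiers, a harvester that indexes many repositories can merge records about the same person, sample, or dataset without any coordination between the publishers.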
The open-source movement: an introduction for forestry professionals
Patrick Proctor; Paul C. Van Deusen; Linda S. Heath; Jeffrey H. Gove
2005-01-01
In recent years, the open-source movement has yielded a generous and powerful suite of software and utilities that rivals those developed by many commercial software companies. Open-source programs are available for many scientific needs: operating systems, databases, statistical analysis, Geographic Information System applications, and object-oriented programming....
77 FR 74543 - Federal Advisory Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
.../411593. Dial-in: After you've connected your computer, audio connection instructions will be presented... the status of current research projects. FOR FURTHER INFORMATION CONTACT: The meeting is open to the...
Jin, Wenquan; Kim, DoHyeun
2018-05-26
The Internet of Things is composed of heterogeneous devices, applications, and platforms that use multiple communication technologies to connect to the Internet and provide seamless services ubiquitously. To meet the requirements of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Providing a consistent interface to access services across these environments is therefore difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between different communication environments using virtual resources, which include a server for the message sender and a client for the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by clients as Open Connectivity Foundation resources. The virtual resources represent resources that expose services on the Internet through web service providers. Although the services are provided by web service providers on the Internet, a client can access them using the consistent communication protocol of the Open Connectivity Foundation network. To discover the resources that provide these services, the client likewise uses the consistent discovery interface to discover Open Connectivity Foundation devices and virtual resources.
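The virtual-resource idea in the abstract above can be sketched as follows. This is our simplification, not the authors' implementation: the proxy registers an Internet-side web service under a discoverable resource URI, and client requests on that URI are forwarded to the service handler.

```python
# Minimal sketch (an assumption, not the paper's code) of a proxy that
# exposes Internet web services as discoverable "virtual resources".
class VirtualResourceProxy:
    def __init__(self):
        self._resources = {}  # resource URI -> web-service handler callable

    def register(self, uri, handler):
        """Expose an Internet-side web service as a virtual resource."""
        self._resources[uri] = handler

    def discover(self):
        """Discovery interface: list the virtual resource URIs."""
        return sorted(self._resources)

    def request(self, uri, payload):
        """Client request on the IoT side, forwarded to the web service."""
        if uri not in self._resources:
            return {"status": 404}
        return {"status": 200, "body": self._resources[uri](payload)}

# Hypothetical web service standing in for an Internet-side provider.
proxy = VirtualResourceProxy()
proxy.register("/weather", lambda p: {"city": p["city"], "temp_c": 21})

print(proxy.discover())                        # ['/weather']
print(proxy.request("/weather", {"city": "Jeju"}))
```

The client never talks to the web service directly; it uses the same discovery and request interface for virtual resources as for native devices, which is the transparency property the abstract claims.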
Modeling Functional Neuroanatomy for an Anatomy Information System
Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan
2008-01-01
Objective Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841
Integration of a clinical trial database with a PACS
NASA Astrophysics Data System (ADS)
van Herk, M.
2014-03-01
Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates using HTML with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip, and transmit the selected data to a central trial server; 4) there, the data are stored in a DICOM archive that allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides whether to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
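Step 2 of the workflow above returns anonymized identifiers to the ECRF. One common way to do this, sketched here as an assumption rather than the paper's actual script, is a keyed hash kept on the gateway: the ECRF stores only a stable pseudonym, and the patient-identifying DICOM UID never leaves the firewall. The UID and secret below are invented.

```python
import hashlib

def anonymize_uid(dicom_uid: str, site_secret: str) -> str:
    """Pseudonymize a DICOM identifier with a secret known only to the gateway.

    The mapping is deterministic, so repeated ECRF queries for the same study
    always receive the same pseudonym, but the original UID cannot be read
    back from it without the gateway's secret.
    """
    digest = hashlib.sha256((site_secret + dicom_uid).encode()).hexdigest()
    return "anon-" + digest[:16]

# Hypothetical study instance UID (not a real one).
uid = "1.2.840.0.0.0.1"
print(anonymize_uid(uid, "gateway-secret"))
```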
MetaboLights: An Open-Access Database Repository for Metabolomics Data.
Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph
2016-03-24
MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standards Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.
CartograTree: connecting tree genomes, phenotypes and environment.
Vasquez-Gross, Hans A; Yu, John J; Figueroa, Ben; Gessler, Damian D G; Neale, David B; Wegrzyn, Jill L
2013-05-01
Today, researchers spend a tremendous amount of time gathering, formatting, filtering and visualizing data collected from disparate sources. Under the umbrella of forest tree biology, we seek to provide a platform and leverage modern technologies to connect biotic and abiotic data. Our goal is to provide an integrated web-based workspace that connects environmental, genomic and phenotypic data via geo-referenced coordinates. Here, we connect the genomic query web-based workspace, DiversiTree and a novel geographical interface called CartograTree to data housed on the TreeGenes database. To accomplish this goal, we implemented Simple Semantic Web Architecture and Protocol to enable the primary genomics database, TreeGenes, to communicate with semantic web services regardless of platform or back-end technologies. The novelty of CartograTree lies in the interactive workspace that allows for geographical visualization and engagement of high performance computing (HPC) resources. The application provides a unique tool set to facilitate research on the ecology, physiology and evolution of forest tree species. CartograTree can be accessed at: http://dendrome.ucdavis.edu/cartogratree. © 2013 Blackwell Publishing Ltd.
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Analysis and Design of Connections, Openings and Attachments for Protective Construction
1989-10-01
precast connection details were subjected to cyclic simulated earthquake loads. The detail... column and beam flexural steel. At the onset of flexural yield under cyclical loading, crack sizes at the face of the joint increase and reinforcement... beam/column connections may be a necessity and can be placed without a great deal of difficulty. However, their placement in slab/wall connections
Defining Openness: Updating the Concept of "Open" for a Connected World
ERIC Educational Resources Information Center
McAndrew, Patrick
2010-01-01
The release of free resources by the education sector has led to reconsideration of how the open approach implied by Open Educational Resources (OER) impacts on the educator and the learner. However this work has tended to consider the replication of standard campus based approaches and the characteristics of content that will encourage other…
Update on Geothermal Direct-Use Installations in the United States
Beckers, Koenraad F.; Snyder, Diana M.; Young, Katherine R.
2017-03-02
An updated database of geothermal direct-use systems in the U.S. has been compiled and analyzed, building upon the Oregon Institute of Technology (OIT) Geo-Heat Center direct-use database. Types of direct-use applications examined include hot springs resorts and pools, aquaculture farms, greenhouses, and district heating systems, among others; power-generating facilities and ground-source heat pumps were excluded. Where possible, the current operation status, open and close dates, well data, and other technical data were obtained for each entry. The database contains 545 installations, of which 407 are open, 108 are closed, and 30 have an unknown status. A report is also included which details and analyzes current geothermal direct-use installations and barriers to further implementation.
Chang, Meiping; Smith, Sarah; Thorpe, Andrew; Barratt, Michael J; Karim, Farzana
2010-09-16
We have previously used the rat 4 day Complete Freund's Adjuvant (CFA) model to screen compounds with potential to reduce osteoarthritic pain. The aim of this study was to identify genes altered in this model of osteoarthritic pain and use this information to infer analgesic potential of compounds based on their own gene expression profiles using the Connectivity Map approach. Using microarrays, we identified differentially expressed genes in L4 and L5 dorsal root ganglia (DRG) from rats that had received intraplantar CFA for 4 days compared to matched, untreated control animals. Analysis of these data indicated that the two groups were distinguishable by differences in genes important in immune responses, nerve growth and regeneration. This list of differentially expressed genes defined a "CFA signature". We used the Connectivity Map approach to identify pharmacologic agents in the Broad Institute Build02 database that had gene expression signatures that were inversely related ('negatively connected') with our CFA signature. To test the predictive nature of the Connectivity Map methodology, we tested phenoxybenzamine (an alpha adrenergic receptor antagonist) - one of the most negatively connected compounds identified in this database - for analgesic activity in the CFA model. Our results indicate that at 10 mg/kg, phenoxybenzamine demonstrated analgesia comparable to that of Naproxen in this model. Evaluation of phenoxybenzamine-induced analgesia in the current study lends support to the utility of the Connectivity Map approach for identifying compounds with analgesic properties in the CFA model.
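The "negative connectivity" idea in the abstract above can be illustrated with a toy score. The actual Connectivity Map uses a Kolmogorov-Smirnov-style enrichment statistic over ranked gene lists; the sketch below substitutes a plain Pearson correlation over shared genes, and all gene names and expression values are invented for illustration.

```python
# Toy connectivity score (our stand-in, not the Broad Institute's method):
# each signature maps gene -> up/down expression score, and a compound is
# "negatively connected" when its profile anti-correlates with the disease
# signature, i.e., it tends to reverse the disease's expression changes.
def connectivity(sig_a, sig_b):
    genes = sorted(set(sig_a) & set(sig_b))  # compare only shared genes
    n = len(genes)
    xs = [sig_a[g] for g in genes]
    ys = [sig_b[g] for g in genes]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)  # Pearson correlation in [-1, 1]

# Hypothetical signatures: the compound pushes each gene the opposite way.
cfa_signature = {"geneA": 2.0, "geneB": -1.5, "geneC": 1.0}
compound = {"geneA": -1.8, "geneB": 1.2, "geneC": -0.9}
print(connectivity(cfa_signature, compound))  # strongly negative, near -1
```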
Designing an Algorithm to Preserve Privacy for Medical Record Linkage With Error-Prone Data
Pal, Doyel; Chen, Tingting; Khethavath, Praveen
2014-01-01
Background Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking the error-prone data and preserving data privacy at the same time are very difficult. Existing privacy-preserving solutions for this problem are restricted to textual data. Objective To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. Methods To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offers two different choices of cryptographic schemes. For error-prone numerical data, we propose a newly designed privacy-preserving linking algorithm, named the Error-Tolerant Linking Algorithm, which allows error-prone data to be correctly matched if the distance between the two records is below a threshold. Results We designed and developed a comprehensive and user-friendly software system that provides privacy-preserving record linkage functions for medical service providers and meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party, and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software works well with error-prone numerical data.
We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm. We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Conclusions Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other’s database. PMID:25600786
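The matching rule at the heart of the Error-Tolerant Linking Algorithm can be sketched in plaintext. This is only the comparison logic; the actual algorithm performs it in a privacy-preserving way so that neither party sees the other's records. The field values and threshold below are invented.

```python
import math

def records_match(rec_a, rec_b, threshold):
    """Match two error-prone numeric records when their distance is below a
    threshold, so small errors (e.g., typos) do not prevent a correct link."""
    return math.dist(rec_a, rec_b) < threshold  # Euclidean distance

# Hypothetical records: (weight_kg, height_cm, birth_year).
same_patient_typo = records_match(
    (72.0, 178.0, 1985.0), (72.0, 178.0, 1986.0), threshold=2.0)
different_patients = records_match(
    (72.0, 178.0, 1985.0), (65.0, 160.0, 1990.0), threshold=2.0)
print(same_patient_typo, different_patients)  # True False
```

The threshold trades off false matches against missed matches; in a deployed system it would be tuned per attribute set rather than fixed globally.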
O'Neill, M A; Hilgetag, C C
2001-08-29
Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data. 
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI-C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
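The CANTOR pattern described above, restructuring candidate arrangements of objects against a user-defined cost function via stochastic optimization, can be sketched generically. This is our own minimal illustration of the pattern, not CANTOR's code; the cost function and data are invented.

```python
import random

def optimize(objects, cost, steps=2000, seed=0):
    """Stochastic search: propose random swap moves on the arrangement and
    keep track of the best (lowest-cost) arrangement seen."""
    rng = random.Random(seed)
    current = list(objects)
    best, best_cost = list(current), cost(current)
    for _ in range(steps):
        i, j = rng.randrange(len(current)), rng.randrange(len(current))
        current[i], current[j] = current[j], current[i]  # random swap move
        c = cost(current)
        if c <= best_cost:
            best, best_cost = list(current), c
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
    return best, best_cost

# User-defined cost: how far each object sits from its sorted position.
cost = lambda arr: sum(abs(v - i) for i, v in enumerate(arr))
arrangement, c = optimize([4, 2, 0, 3, 1], cost)
print(arrangement, c)  # a low-cost arrangement, typically fully sorted
```

Because the search only needs cost evaluations, the same driver works for any relational structure the cost function can score, which is the flexibility the abstract attributes to CANTOR.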
CJEP will offer open science badges.
Pexman, Penny M
2017-03-01
This editorial announces the decision of the Canadian Journal of Experimental Psychology (CJEP) to offer Open Science Framework (OSF) Badges. The Centre for Open Science provides tools to facilitate open science practices. These include the OSF badges. The badges acknowledge papers that meet standards for openness of data, methods, or research process. They are now described in the CJEP Submission Guidelines, and are provided in the editorial. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Assessing habitat connectivity for ground-dwelling animals in an urban environment.
Braaker, S; Moretti, M; Boesch, R; Ghazoul, J; Obrist, M K; Bontadina, F
To ensure viable species populations in fragmented landscapes, individuals must be able to move between suitable habitat patches. Despite the increased interest in biodiversity assessment in urban environments, the ecological relevance of habitat connectivity in highly fragmented landscapes remains largely unknown. The first step to understanding the role of habitat connectivity in urban ecology is the challenging task of assessing connectivity in the complex patchwork of contrasting habitats that is found in cities. We developed a data-based framework, minimizing the use of subjective assumptions, to assess habitat connectivity that consists of the following sequential steps: (1) identification of habitat preference based on empirical habitat-use data; (2) derivation of habitat resistance surfaces evaluating various transformation functions; (3) modeling of different connectivity maps with electrical circuit theory (Circuitscape), a method considering all possible pathways across the landscape simultaneously; and (4) identification of the best connectivity map with information-theoretic model selection. We applied this analytical framework to assess habitat connectivity for the European hedgehog Erinaceus europaeus, a model species for ground-dwelling animals, in the city of Zurich, Switzerland, using GPS track points from 40 individuals. The best model revealed spatially explicit connectivity “pinch points,” as well as multiple habitat connections. Cross-validation indicated the general validity of the selected connectivity model. The results show that both habitat connectivity and habitat quality affect the movement of urban hedgehogs (relative importance of the two variables was 19.2% and 80.8%, respectively), and are thus both relevant for predicting urban animal movements. Our study demonstrates that even in the complex habitat patchwork of cities, habitat connectivity plays a major role for ground-dwelling animal movement. 
Data-based habitat connectivity maps can thus serve as an important tool for city planners to identify habitat corridors and plan appropriate management and conservation measures for urban animals. The analytical framework we describe to model such connectivity maps is generally applicable to different types of habitat-use data and can be adapted to the movement scale of the focal species. It also allows evaluation of the impact of future landscape changes or management scenarios on habitat connectivity in urban landscapes.
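Step 2 of the framework above derives resistance surfaces from habitat preference by evaluating transformation functions. A negative-exponential transformation is one commonly evaluated form; the sketch below is our illustration, not the study's fitted function, and the coefficient is invented.

```python
import math

def preference_to_resistance(preference, c=4.0):
    """Map habitat preference in [0, 1] to a movement resistance in
    [1, e**c]: strongly preferred habitat gets the lowest resistance.
    The coefficient c controls how sharply resistance rises as
    preference falls (c is a hypothetical value here)."""
    return math.exp(c * (1.0 - preference))

# Resistance for avoided, neutral, and preferred habitat.
for p in (0.0, 0.5, 1.0):
    print(p, round(preference_to_resistance(p), 2))
```

A grid of such resistance values per habitat cell is what a circuit-theory tool like Circuitscape consumes to compute current flow (connectivity) between locations.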
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN, a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., presently the open community version of MySQL and single-instance Oracle database servers. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
Hermetic electronics package with dual-sided electrical feedthrough configuration
Shah, Kedar G.; Pannu, Satinderpall S.
2016-11-22
A hermetic electronics package includes a metal case with opposing first and second open ends, with each end connected to a first feedthrough construction and a second feedthrough construction. Each feedthrough construction has an electrically insulating substrate and an array of electrically conductive feedthroughs extending therethrough, with the electrically insulating substrates connected to the opposing first and second open ends, respectively, of the metal case so as to form a hermetically sealed enclosure. A set of electronic components are located within the hermetically sealed enclosure and are operably connected to the feedthroughs of the first and second feedthrough constructions so as to electrically communicate outside the package from opposite sides of the package.
Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous
Cohen, Emily B; Hostetler, Jeffrey A; Royle, J Andrew; Marra, Peter P
2014-01-01
Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. 
We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate that this large dataset is a valuable source of information about the migratory connectivity of the birds of North America. PMID:24967083
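The core correction in the abstract above, accounting for heterogeneous re-encounter probabilities before estimating connectivity, can be shown with a toy calculation. This is our simplification of the intuition, not the paper's multistate capture-recapture model; the counts and probabilities are invented.

```python
def connectivity_estimate(counts, p_reencounter):
    """Adjust raw re-encounter counts by region-specific re-encounter
    probabilities, then normalize, so a region with low reporting effort
    is not underestimated in the connectivity estimate."""
    adjusted = {r: counts[r] / p_reencounter[r] for r in counts}
    total = sum(adjusted.values())
    return {r: v / total for r, v in adjusted.items()}

# Hypothetical data: 10x more re-encounters in the east, but reporting
# effort there is also 10x higher.
counts = {"east_SA": 30, "west_SA": 3}
p = {"east_SA": 0.10, "west_SA": 0.01}
print(connectivity_estimate(counts, p))  # 0.5 / 0.5 after correction
```

The naive proportions (30/33 vs. 3/33) would suggest strongly eastern connectivity, while the effort-corrected estimate is an even split, illustrating why raw re-encounter proportions and model-derived estimates can differ considerably, as reported for the western breeding terns.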
Geology of Point Reyes National Seashore and vicinity, California: a digital database
Clark, Joseph C.; Brabb, Earl E.
1997-01-01
This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, a PostScript plot file containing an image of the geologic map sheet with explanation, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously published and unpublished data and new mapping by the authors, represents the general distribution of surficial deposits and rock units in Point Reyes and surrounding areas. Together with the accompanying text file (pr-geo.txt or pr-geo.ps), it provides current information on the stratigraphy and structural geology of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:48,000 or smaller.
Engineering Review Information System
NASA Technical Reports Server (NTRS)
Grems, III, Edward G. (Inventor); Henze, James E. (Inventor); Bixby, Jonathan A. (Inventor); Roberts, Mark (Inventor); Mann, Thomas (Inventor)
2015-01-01
A disciplinal engineering review computer information system and method by defining a database of disciplinal engineering review process entities for an enterprise engineering program, opening a computer supported engineering item based upon the defined disciplinal engineering review process entities, managing a review of the opened engineering item according to the defined disciplinal engineering review process entities, and closing the opened engineering item according to the opened engineering item review.
Towards G2G: Systems of Technology Database Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David
2005-01-01
We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments while supporting the ability of the various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database Systems.
Intelligent printing system with AMPAC: boot program for printing machine with AMPAC
NASA Astrophysics Data System (ADS)
Yuasa, Tomonori; Mishina, Hiromichi
2000-12-01
The database AMPAC proposes a simple, unified format for describing individual parameters across the whole field of design, production and management. A database written in this format can be used in common by any field connected to the networked production system, since the description accepts any parameter from any field and its definition is field independent.
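The "single parameter, any field" idea can be illustrated with a minimal sketch; the record layout and parameter names below are hypothetical, not AMPAC's actual format.

```python
# Hypothetical sketch of a unified parameter record: any field
# (design, production, management) describes one parameter the same way.
def make_record(field, name, value, unit=None):
    """Field-independent description of a single parameter."""
    return {"field": field, "name": name, "value": value, "unit": unit}

params = [
    make_record("design", "plate_thickness", 0.3, "mm"),
    make_record("production", "press_speed", 12000, "sheets/h"),
    make_record("management", "job_priority", 2),
]
# Any subsystem can consume the same format without field-specific parsing.
print([p["name"] for p in params])
```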
Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan
2010-01-01
To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases, taking the individual clinical context into account. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.
Evaluation of NoSQL databases for DIRAC monitoring and beyond
NASA Astrophysics Data System (ADS)
Mathe, Z.; Casajus Ramo, A.; Stagni, F.; Tomassetti, L.
2015-12-01
Nowadays, many database systems are available, but they may not be optimized for storing time series data. Monitoring DIRAC jobs would be better done using a database optimised for storing time series data. So far this was done using a MySQL database, which is not well suited for such an application. Therefore alternatives have been investigated. Choosing an appropriate database for storing huge amounts of time series data is not trivial, as one must take into account different aspects such as manageability, scalability and extensibility. We compared the performance of the Elasticsearch, OpenTSDB (based on HBase) and InfluxDB NoSQL databases, using the same set of machines and the same data. We also evaluated the effort required for maintaining them. Using the LHCb Workload Management System (WMS), based on DIRAC, as a use case, we set up a new monitoring system in parallel with the current MySQL system and stored the same data into the databases under test. We evaluated the Grafana (for OpenTSDB) and Kibana (for Elasticsearch) metrics and graph editors for creating dashboards, in order to have a clear picture of the usability of each candidate. In this paper we present the results of this study and the performance of the selected technology. We also give an outlook on other potential applications of NoSQL databases within the DIRAC project.
2001-10-25
within one of the programmes sponsored by the European Commission. The system mainly consists of a shared care database in which each community facility, or group of facilities, is supported by a local area network (LAN). Each of these LANs is connected over...functions. The software is layered, so that the client application is not affected by how the servers are implemented or which database system they use
Overview of open resources to support automated structure verification and elucidation
Cheminformatics methods form an essential basis for providing analytical scientists with access to data, algorithms and workflows. There are an increasing number of free online databases (compound databases, spectral libraries, data repositories) and a rich collection of software...
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Clegg, S. M.; Graff, T.; Morris, R. V.; Laura, J.
2015-06-01
We describe plans to generate a database of LIBS spectra of planetary analog materials and develop free, open-source software to enable the planetary community to analyze LIBS (and other spectral) data.
Ying, Xiao-Ming; Jiang, Yong-Liang; Xu, Peng; Wang, Peng; Zhu, Bo; Guo, Shao-Qing
2016-08-25
To conduct a meta-analysis of studies comparing the therapeutic effect and safety of microendoscopic discectomy with conventional open discectomy in the treatment of lumbar disc herniation in China. A systematic literature retrieval was conducted in the Chinese Biomedicine Database, CNKI database, Chongqing VIP database and Wanfang database. The statistical analysis was performed using RevMan 4.2 software. The comparison covered excellent rate, operation time, blood loss, periods of bed rest and of resuming daily activities, hospital stay or hospital stay after surgery, and complications of microendoscopic discectomy versus conventional open discectomy. The search yielded 20 reports, which included 2 957 cases treated by microendoscopic discectomy and 2 130 cases treated by conventional open discectomy. There were 12, 11, 7, 5, 4 and 4 reports comparing operation time, blood loss, period of bed rest, period of resuming daily activities, hospital stay and hospital stay after surgery, respectively. Complications were mentioned in 10 reports. Compared to patients treated by open discectomy, patients treated by microendoscopic discectomy had a higher excellent rate [OR=1.29, 95%CI (1.03, 1.62)], less blood loss [OR=-63.67, 95%CI (-86.78, -40.55)], a shorter period of bed rest [OR=-15.33, 95%CI (-17.76, -12.90)], a shorter period of resumption of daily activities [OR=-24.41, 95%CI (-36.86, -11.96)], and a shorter hospital stay [OR=-5.00, 95%CI (-6.94, -3.06)] and hospital stay after surgery [OR=-7.47, 95%CI (-9.17, -5.77)], respectively. However, the incidence of complications and the operation time showed no significant difference between microendoscopic discectomy and open discectomy. Microendoscopic discectomy and conventional open discectomy are both safe and effective in the treatment of lumbar disc herniation, with a nearly equal incidence of complications.
Patients with lumbar disc herniation treated by microendoscopic discectomy have less blood loss, shorter periods of bed rest and hospital stay, and resume daily activities faster. The technique should be selected according to indications; microendoscopic discectomy should be carried out when its indications are met.
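The bracketed effect sizes above follow the usual [OR, 95% CI] pattern. As a hedged illustration of how such an interval is obtained from a single study's 2x2 table (the counts below are made up, not taken from the meta-analysis):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = events in treatment, b = non-events in treatment,
    c = events in control,   d = non-events in control."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: 90/120 excellent outcomes vs 70/110
or_, lo, hi = odds_ratio_ci(90, 30, 70, 40)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

When the interval spans 1 (as it does here), the effect is not significant at the 5% level.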
Tuned intake air system for a rotary engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbett, W.D.
This patent describes a rotary internal combustion engine for an outboard motor. It comprises a plenum chamber attached to the rear of the engine; the plenum chamber including an inner wall attached to the exhaust manifold; an inlet conduit connecting the cooling air exit passage and the inlet air opening; an outlet conduit connecting the outlet air opening and the combustion air inlet; and the outlet conduit terminating in a combustion air outlet in the inner wall of the plenum chamber.
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of clinical laboratory database management systems, this study explains the difference between a Clinical Laboratory Information System and a Clinical Laboratory System. Although three kinds of database management systems (DBMS) were considered, including the relational, tree and network models, the relational model was found to be the best DBMS for the clinical laboratory database, based on our experience and on the development of several clinical laboratory expert systems. As a future clinical laboratory database management system, an IC card system connected to an automatic chemical analyzer was proposed for personal health data management, and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
MIP models for connected facility location: A theoretical and computational study☆
Gollowitzer, Stefan; Ljubić, Ivana
2011-01-01
This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366
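As a toy illustration of the ConFL objective (facility opening costs + customer assignment costs + core-network connection costs), here is a brute-force sketch on a tiny hypothetical instance. It simplifies the Steiner tree to an MST over the open facilities and a root, which ignores Steiner nodes and is therefore only an upper bound on the true connection cost; the paper's MIP models handle the general case.

```python
from itertools import combinations

# Tiny illustrative ConFL instance (all numbers hypothetical).
facilities = ["f1", "f2", "f3"]
open_cost = {"f1": 4, "f2": 3, "f3": 5}
assign = {  # assignment cost: customer -> facility
    "c1": {"f1": 1, "f2": 6, "f3": 4},
    "c2": {"f1": 5, "f2": 1, "f3": 7},
    "c3": {"f1": 6, "f2": 2, "f3": 1},
}
edge = {frozenset(e): c for e, c in {  # core-network edge costs, root "r"
    ("r", "f1"): 3, ("r", "f2"): 6, ("r", "f3"): 7,
    ("f1", "f2"): 2, ("f1", "f3"): 5, ("f2", "f3"): 4,
}.items()}

def mst_cost(nodes):
    """Prim's algorithm on the complete graph restricted to `nodes`."""
    nodes = list(nodes)
    in_tree, rest, total = {nodes[0]}, set(nodes[1:]), 0
    while rest:
        u, v = min(((a, b) for a in in_tree for b in rest),
                   key=lambda e: edge[frozenset(e)])
        total += edge[frozenset((u, v))]
        in_tree.add(v)
        rest.remove(v)
    return total

best = None
for k in range(1, len(facilities) + 1):
    for opened in combinations(facilities, k):
        cost = (sum(open_cost[f] for f in opened)
                + sum(min(assign[c][f] for f in opened) for c in assign)
                + mst_cost(("r",) + opened))
        if best is None or cost < best[0]:
            best = (cost, opened)
print(best)
```

Enumeration is exponential in the number of facilities, which is exactly why the paper studies MIP formulations and branch-and-cut instead.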
49 CFR 178.44 - Specification 3HT seamless steel cylinders for aircraft use.
Code of Federal Regulations, 2013 CFR
2013-10-01
... cylinder. (d) Manufacture. Cylinders must be manufactured using equipment and processes adequate to ensure.... (h) Openings in cylinders and connections (valves, fuse plugs, etc.) for those openings. Threads conforming to the following are required on openings: (1) Threads must be clean cut, even, without cracks...
49 CFR 178.44 - Specification 3HT seamless steel cylinders for aircraft use.
Code of Federal Regulations, 2012 CFR
2012-10-01
... cylinder. (d) Manufacture. Cylinders must be manufactured using equipment and processes adequate to ensure.... (h) Openings in cylinders and connections (valves, fuse plugs, etc.) for those openings. Threads conforming to the following are required on openings: (1) Threads must be clean cut, even, without cracks...
49 CFR 178.44 - Specification 3HT seamless steel cylinders for aircraft use.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cylinder. (d) Manufacture. Cylinders must be manufactured using equipment and processes adequate to ensure.... (h) Openings in cylinders and connections (valves, fuse plugs, etc.) for those openings. Threads conforming to the following are required on openings: (1) Threads must be clean cut, even, without cracks...
Openness in Adoption: Exploring Family Connections.
ERIC Educational Resources Information Center
Grotevant, Harold D.; McRoy, Ruth G.
Noting social and demographic changes provoking a trend toward openness in adoption practice, this book presents the findings from a nationwide study examining the impact of variations in openness in adoption on participants in the adoptive kinship network: adopted children, adoptive parents, and the children's birth parents. The first chapter of…
OrChem - An open source chemistry search engine for Oracle(R).
Rijnbeek, Mark; Steinbeck, Christoph
2009-10-22
Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
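The two search modes OrChem supports can be sketched abstractly on fingerprints treated as sets of "on" bits. The fingerprints below are hypothetical, not real CDK output, and this is not OrChem's API; it only illustrates the underlying similarity cut-off and substructure pre-screen ideas.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two fingerprints given as sets of 'on' bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def substructure_prescreen(fp_mol, fp_query):
    """Necessary (not sufficient) condition for a substructure match:
    every bit set in the query pattern must also be set in the molecule."""
    return fp_query <= fp_mol

# Hypothetical fingerprints (bit positions only, invented for illustration)
database = {
    "mol1": {1, 4, 9, 16, 25},
    "mol2": {1, 4, 9, 16, 36},
    "mol3": {2, 3, 5, 7},
}
query = {1, 4, 9, 16, 25}

cutoff = 0.5  # similarity cut-off, as in OrChem's similarity search
hits = {name: round(tanimoto(query, fp), 2)
        for name, fp in database.items()
        if tanimoto(query, fp) >= cutoff}
print(hits)
```

In the real system the pre-screen prunes candidates in SQL before an exact (and expensive) subgraph-isomorphism check.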
The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases
Orchard, Sandra; Ammari, Mais; Aranda, Bruno; Breuza, Lionel; Briganti, Leonardo; Broackes-Carter, Fiona; Campbell, Nancy H.; Chavali, Gayatri; Chen, Carol; del-Toro, Noemi; Duesbury, Margaret; Dumousseau, Marine; Galeota, Eugenia; Hinz, Ursula; Iannuccelli, Marta; Jagannathan, Sruthi; Jimenez, Rafael; Khadake, Jyoti; Lagreid, Astrid; Licata, Luana; Lovering, Ruth C.; Meldal, Birgit; Melidoni, Anna N.; Milagros, Mila; Peluso, Daniele; Perfetto, Livia; Porras, Pablo; Raghunath, Arathi; Ricard-Blum, Sylvie; Roechert, Bernd; Stutz, Andre; Tognolli, Michael; van Roey, Kim; Cesareni, Gianni; Hermjakob, Henning
2014-01-01
IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org). PMID:24234451
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.
1993-01-01
Foreign competitors are challenging the world leadership of the U.S. aerospace industry, and increasingly tight budgets everywhere make international cooperation in aerospace science necessary. The NASA STI Program has as part of its mission to support NASA R&D, and to that end has developed a knowledge base of aerospace-related information known as the NASA Aerospace Database. The NASA STI Program is already involved in international cooperation with NATO/AGARD/TIP, CENDI, ICSU/ICSTI, and the U.S. Japan Committee on STI. With the new more open political climate, the perceived dearth of foreign information in the NASA Aerospace Database, and the development of the ESA database and DELURA, the German databases, the NASA STI Program is responding by sponsoring workshops on foreign acquisitions and by increasing its cooperation with international partners and with other U.S. agencies. The STI Program looks to the future of improved database access through networking and a GUI; new media; optical disk, video, and full text; and a Technology Focus Group that will keep the NASA STI Program current with technology.
Shao, Jiashen; Chang, Hengrui; Zhu, Yanbin; Chen, Wei; Zheng, Zhanle; Zhang, Huixin; Zhang, Yingze
2017-05-01
This study aimed to quantitatively summarize the risk factors associated with surgical site infection after open reduction and internal fixation of tibial plateau fracture. Medline, Embase, CNKI, Wanfang database and Cochrane central database were searched for relevant original studies from database inception to October 2016. Eligible studies had to meet quality assessment criteria according to the Newcastle-Ottawa Scale, and had to evaluate the risk factors for surgical site infection after open reduction and internal fixation of tibial plateau fracture. Stata 11.0 software was used for this meta-analysis. Eight studies involving 2214 cases of tibial plateau fracture treated by open reduction and internal fixation and 219 cases of surgical site infection were included in this meta-analysis. The following parameters were identified as significant risk factors for surgical site infection after open reduction and internal fixation of tibial plateau fracture (p < 0.05): open fracture (OR 3.78; 95% CI 2.71-5.27), compartment syndrome (OR 3.53; 95% CI 2.13-5.86), operative time (OR 2.15; 95% CI 1.53-3.02), tobacco use (OR 2.13; 95% CI 1.13-3.99), and external fixation (OR 2.07; 95% CI 1.05-4.09). Other factors, including male sex, were not identified as risk factors for surgical site infection. Patients with the abovementioned medical conditions are at risk of surgical site infection after open reduction and internal fixation of tibial plateau fracture. Surgeons should be cognizant of these risks and give relevant preoperative advice.
CoCoMac 2.0 and the future of tract-tracing databases
Bakker, Rembrandt; Wachtler, Thomas; Diesmann, Markus
2012-01-01
The CoCoMac database contains the results of several hundred published axonal tract-tracing studies in the macaque monkey brain. The combined results are used for constructing the macaque macro-connectome. Here we discuss the redevelopment of CoCoMac and compare it to six connectome-related projects: two online resources that provide full access to raw tracing data in rodents, a connectome viewer for advanced 3D graphics, a partial but highly detailed rat connectome, a brain data management system that generates custom connectivity matrices, and a software package that covers the complete pipeline from connectivity data to large-scale brain simulations. The second edition of CoCoMac features many enhancements over the original. For example, a search wizard is provided for full access to all tables and their nested dependencies. Connectivity matrices can be computed on demand in a user-selected nomenclature. A new data entry system is available as a preview, and is to become a generic solution for community-driven data entry in manually collated databases. We conclude with the question whether neuronal tracing will remain the gold standard to uncover the wiring of brains, thereby highlighting developments in human connectome construction, tracer substances, polarized light imaging, and serial block-face scanning electron microscopy. PMID:23293600
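Computing a connectivity matrix "on demand in a user-selected nomenclature" amounts to relabelling the endpoints of each tracing report into the chosen parcellation and aggregating; a minimal sketch with hypothetical region names and strengths (not actual CoCoMac data):

```python
# Hypothetical tracing reports: (source region, target region, strength)
reports = [
    ("V1", "V2", 2), ("V2", "MT", 1),
    ("V1d", "MT", 3), ("V1v", "V2", 1),
]
# User-selected nomenclature: map fine-grained regions onto coarser areas
nomenclature = {"V1": "V1", "V1d": "V1", "V1v": "V1",
                "V2": "V2", "MT": "MT"}

def connectivity_matrix(reports, mapping):
    """Aggregate connection strengths after relabelling both endpoints."""
    matrix = {}
    for src, tgt, w in reports:
        key = (mapping[src], mapping[tgt])
        matrix[key] = matrix.get(key, 0) + w
    return matrix

m = connectivity_matrix(reports, nomenclature)
print(m)
```

Choosing a different nomenclature only changes the mapping, not the stored reports, which is why the matrix can be computed on demand.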
Bezgin, Gleb; Reid, Andrew T; Schubert, Dirk; Kötter, Rolf
2009-01-01
Brain atlases are widely used in experimental neuroscience as tools for locating and targeting specific brain structures. Delineated structures in a given atlas, however, are often difficult to interpret and to interface with database systems that supply additional information using hierarchically organized vocabularies (ontologies). Here we discuss the concept of volume-to-ontology mapping in the context of macroscopical brain structures. We present Java tools with which we have implemented this concept for retrieval of mapping and connectivity data on the macaque brain from the CoCoMac database in connection with an electronic version of "The Rhesus Monkey Brain in Stereotaxic Coordinates" authored by George Paxinos and colleagues. The software, including our manually drawn monkey brain template, can be downloaded freely under the GNU General Public License. It adds value to the printed atlas and has a wider (neuro-)informatics application since it can read appropriately annotated data from delineated sections of other species and organs, and turn them into 3D registered stacks. The tools provide additional features, including visualization and analysis of connectivity data, volume and centre-of-mass estimates, and graphical manipulation of entire structures, which are potentially useful for a range of research and teaching applications.
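One of the features mentioned, centre-of-mass estimates for delineated structures, reduces to averaging voxel coordinates in the registered stack; a minimal sketch (the coordinates are hypothetical, not from the Paxinos atlas):

```python
def centre_of_mass(voxels):
    """Mean (x, y, z) of a structure given as a list of voxel coordinates."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

# Hypothetical delineated structure: four voxels in one section plane
structure = [(10, 4, 2), (11, 4, 2), (10, 5, 2), (11, 5, 2)]
print(centre_of_mass(structure))
```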
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.
2016-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.
KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes.
Koulaouzidis, Anastasios; Iakovidis, Dimitris K; Yung, Diana E; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio
2017-06-01
Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and a MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential.
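The Jaccard Index used above to compare reader and algorithm annotations is the ratio of intersection to union of the two annotated regions; a minimal sketch on two hypothetical pixel masks (not KID data):

```python
def jaccard(mask_a, mask_b):
    """Jaccard index of two annotations given as sets of pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 1.0

# Hypothetical lesion outlines: reader's 10x10 square vs the same square
# shifted two pixels, as an algorithm might produce
reader = {(x, y) for x in range(10) for y in range(10)}
algorithm = {(x, y) for x in range(2, 12) for y in range(10)}

ji = jaccard(reader, algorithm)
print(round(ji, 2))
```

A JI of 1.0 means identical annotations; the study's 81 ± 6 % for lymphangiectasias corresponds to a large overlap between human and machine outlines.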
Federated querying architecture with clinical & translational health IT application.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
2011-10-01
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) federated query engine, which manages federated queries and result set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on-the-fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population database. Our system can be easily adopted, extended and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.
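The result-set aggregation step, keyed by a federated patient identifier, can be sketched as follows. The field names and records are hypothetical, and the real system routes results through Apache Camel on an OSGi service bus rather than merging in-process; this only illustrates the pattern.

```python
# Hypothetical result sets from two data-source facades, already translated
# by each facade into a common model and keyed by a federated patient id.
edw_results = [
    {"patient_id": "p1", "dx": "E11.9"},
    {"patient_id": "p2", "dx": "I10"},
]
updb_results = [
    {"patient_id": "p1", "birth_year": 1970},
    {"patient_id": "p3", "birth_year": 1985},
]

def federate(*result_sets):
    """Aggregate per-source result sets into one record per patient."""
    merged = {}
    for results in result_sets:
        for rec in results:
            merged.setdefault(rec["patient_id"], {}).update(rec)
    return merged

merged = federate(edw_results, updb_results)
print(merged["p1"])
```

Because each facade translates its physical model on the fly, the aggregator only ever sees the common model.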
Method and apparatus for electrokinetic transport
NASA Technical Reports Server (NTRS)
James, Patrick Ismail (Inventor); Stejic, George (Inventor)
2012-01-01
Controlled electrokinetic transport of constituents of liquid media can be achieved by connecting at least two volumes containing liquid media with at least one dielectric medium with opposing dielectric surfaces in direct contact with said liquid media, and establishing at least one conduit across said dielectric medium, with a conduit inner surface surrounding a conduit volume and at least a first opening and a second opening opposite to the first opening. The conduit is arranged to connect two volumes containing liquid media and includes a set of at least three electrodes positioned in proximity of the inner conduit surface. A power supply is arranged to deliver energy to the electrodes such that time-varying potentials inside the conduit volume are established, where the superposition of said potentials represents at least one controllable traveling potential well that can travel between the opposing conduit openings.
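The "traveling potential well" produced by superposing time-varying electrode potentials can be illustrated with the classic three-phase arrangement: three phase-shifted standing potentials sum to a single traveling wave. This is a simplified 1D model for illustration, not the patent's actual geometry or drive scheme.

```python
import math

K, OMEGA = 2 * math.pi, 2 * math.pi            # spatial / temporal frequency
PHASES = [0, 2 * math.pi / 3, 4 * math.pi / 3]  # three driven electrodes

def potential(x, t):
    """Superposition of three phase-shifted electrode potentials.
    Analytically this sum equals (3/2)*cos(K*x - OMEGA*t): a traveling wave."""
    return sum(math.cos(K * x - p) * math.cos(OMEGA * t - p) for p in PHASES)

def well_position(t, n=1000):
    """Location of the potential minimum in [0, 1) on a sampling grid."""
    return min((i / n for i in range(n)), key=lambda x: potential(x, t))

x0, x1 = well_position(0.0), well_position(0.1)
print(x0, x1)  # the well moves at the phase velocity OMEGA/K
```

A charged constituent trapped near the minimum is dragged along as the well travels between the conduit openings.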
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
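Capability (i), searching records by terms combined with ANDs or ORs, can be sketched as follows; the records and field names are hypothetical, and this is not dbEngine's actual code.

```python
# Minimal sketch of AND/OR term search in the spirit of dbEngine.
records = [
    {"id": 1, "text": "mitochondrial inheritance human"},
    {"id": 2, "text": "nuclear genome human"},
    {"id": 3, "text": "mitochondrial genome yeast"},
]

def search(records, all_of=(), any_of=()):
    """Return ids of records containing every term in `all_of` (AND)
    and, if `any_of` is given, at least one of those terms (OR)."""
    hits = []
    for rec in records:
        words = set(rec["text"].split())
        if all(t in words for t in all_of) and \
           (not any_of or any(t in words for t in any_of)):
            hits.append(rec["id"])
    return hits

print(search(records, all_of=["mitochondrial"], any_of=["human", "yeast"]))
```

In the real CGI setting the matching ids would then be rendered as hypertext links rather than returned as a list.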
Knapp, Jenny; Damy, Sylvie; Brillaud, Jonathan; Tissot, Jean-Daniel; Navion, Jérémy; Mélior, Raphael; Afonso, Eve; Hormaz, Vanessa; Gottstein, Bruno; Umhang, Gérald; Casulli, Adriano; Dadeau, Frédéric; Millon, Laurence; Raoul, Francis
2017-01-01
The evolution and dispersion history of organisms on Earth can best be studied through biological markers in molecular epidemiological studies. The biological diversity of the cestode Echinococcus multilocularis was investigated through different cladistic approaches. First, the morphological aspects were explored in connection with its ecology. More recently, molecular aspects were investigated to better understand the nature of the variations observed among isolates. The study of the tandemly repeated multilocus microsatellite EmsB allowed us to attain a high genetic diversity level where other classic markers have failed. Since 2006, EmsB data have been collected on specimens from various endemic foci of the parasite in Europe (in historic and newly endemic areas), Asia (China, Japan and Kyrgyzstan), and North America (Canada and Alaska). Biological data on the isolates and metadata were also recorded (e.g. host, geographical location, EmsB analysis, citation in the literature). In order to make available the data set of 1,166 isolates from classic and aberrant domestic and wild animal hosts (larval lesions and adult worms) and from human origin, an open web-access interface, written in PHP and connected to a PostgreSQL database, was developed for the EmsB Website for the Echinococcus Typing (EWET) project. It allows researchers to access the data collection, perform genetic analyses online (e.g. defining the genetic distance between their own samples and the samples in the database), consult distribution maps of EmsB profiles, and record and share their new EmsB genotyping data. In order to standardize the EmsB analyses performed in different laboratories throughout the world, a calibrator was developed. The final aim of this project was to gather and arrange the available data to permit a better understanding of the dispersion and transmission patterns of the parasite among definitive and intermediate hosts, in order to organize control strategies on the ground.
NASA Astrophysics Data System (ADS)
Waki, Masaki; Uruno, Shigenori; Ohashi, Hiroyuki; Manabe, Tetsuya; Azuma, Yuji
We propose an optical fiber connection navigation system that uses visible light communication for an integrated distribution module in a central office. The system realizes an accurate database, requires less skilled work to operate and eliminates human error. This system can achieve a working time reduction of up to 88.0% compared with the conventional work without human error for the connection/removal of optical fiber cords, and is economical as regards installation and operation.
Garcia-Lastra, Juan M.; De La Torre, Gema; Himpsel, F. J.; Rubio, Angel
2015-01-01
An extensive database of spectroscopic properties of molecules from ab initio calculations is used to design molecular complexes for use in tandem solar cells that convert two photons into a single electron–hole pair, thereby increasing the output voltage while covering a wider spectral range. Three different architectures are considered: the first two involve a complex consisting of two dye molecules with appropriately matched frontier orbitals, connected by a molecular diode. Optimized combinations of dye molecules are determined by taking advantage of our computational database of the structural and energetic properties of several thousand porphyrin dyes. The third design is a molecular analogy of the intermediate band solar cell, and involves a single dye molecule with strong intersystem crossing to ensure a long lifetime of the intermediate state. Based on the calculated energy levels and molecular orbitals, energy diagrams are presented for the individual steps in the operation of such tandem solar cells. We find that theoretical open circuit voltages of up to 1.8 V can be achieved using these tandem designs. Questions about the practical implementation of prototypical devices, such as the synthesis of the tandem molecules and potential loss mechanisms, are addressed. PMID:29142685
Enhancing Chemical Inventory Management in Laboratory through a Mobile-Based QR Code Tag
NASA Astrophysics Data System (ADS)
Shukran, M. A. M.; Ishak, M. S.; Abdullah, M. N.
2017-08-01
The demand for an inventory management system that can provide more useful information from a single scan has made laboratory inventory management using barcode technology increasingly inadequate. Since barcode technology lacks the ability to overcome this problem and cannot provide the information needed to manage the chemicals in the laboratory, employing QR code technology is the best solution. In this research, the main idea is to develop a standalone application running with its own database that is periodically synchronized with the inventory software hosted by the computer and connected to a specialized network. The first step required to establish this centralized system is to determine all inventory available in the chemical laboratory by referring to the documented data in order to develop the database. Several customizations and enhancements were made to the open-source QR code technology to ensure the developed application is dedicated to its main purposes. At the end of the research, it was proven that the system is able to track the position of all inventory and show real-time information about the scanned chemical labels. This paper intends to give an overview of the QR tag inventory system that was developed and its implementation at the National Defence University of Malaysia's (NDUM) chemical laboratory.
Automated bond order assignment as an optimization problem.
Dehof, Anna Katharina; Rurainski, Alexander; Bui, Quang Bao Anh; Böcker, Sebastian; Lenhof, Hans-Peter; Hildebrandt, Andreas
2011-03-01
Numerous applications in Computational Biology process molecular structures and hence strongly rely not only on correct atomic coordinates but also on correct bond order information. For proteins and nucleic acids, bond orders can be easily deduced, but this does not hold for other types of molecules like ligands. For ligands, bond order information is not always provided in molecular databases, and thus a variety of approaches tackling this problem have been developed. In this work, we extend an ansatz proposed by Wang et al. that assigns connectivity-based penalty scores and tries to heuristically approximate its optimum. We present three efficient and exact solvers for the problem, replacing the heuristic approximation scheme of the original approach: an A* approach, an integer linear programming (ILP) approach, and a fixed-parameter tractable (FPT) approach. We implemented and evaluated the original implementation and our A*, ILP, and FPT formulations on the MMFF94 validation suite and the KEGG Drug database. We show the benefit of computing exact solutions of the penalty minimization problem and the additional gain when computing all optimal (or even suboptimal) solutions. We close with a detailed comparison of our methods. The A* and ILP solvers are integrated into the open-source C++ LGPL library BALL and the molecular visualization and modelling tool BALLView and can be downloaded from our homepage www.ball-project.org. The FPT implementation can be downloaded from http://bio.informatik.uni-jena.de/software/.
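The penalty minimization behind bond order assignment can be sketched at toy scale. This is a deliberately simplified model: the real approaches (Wang et al.'s heuristic and the exact A*/ILP/FPT solvers above) use atom-type-specific penalty tables, whereas here each atom just has a single target valence and the penalty is the absolute deviation from it; the molecule and all names are illustrative.

```python
import itertools

def assign_bond_orders(bonds, target_valence):
    """Exhaustively search bond order assignments (1, 2 or 3 per bond) and
    return the assignment minimizing the total valence penalty.

    bonds: list of (atom, atom) pairs; target_valence: atom -> preferred valence
    (hydrogens folded in by reducing the heavy atom's target).
    """
    best, best_penalty = None, float("inf")
    for orders in itertools.product((1, 2, 3), repeat=len(bonds)):
        valence = {atom: 0 for atom in target_valence}
        for (i, j), order in zip(bonds, orders):
            valence[i] += order
            valence[j] += order
        penalty = sum(abs(valence[a] - t) for a, t in target_valence.items())
        if penalty < best_penalty:
            best, best_penalty = orders, penalty
    return dict(zip(bonds, best)), best_penalty

# Acetaldehyde heavy-atom skeleton CH3-CHO: targets after folding in hydrogens
# are C1: 4-3H = 1, C2: 4-1H = 3, O: 2.
bonds = [("C1", "C2"), ("C2", "O")]
orders, penalty = assign_bond_orders(bonds, {"C1": 1, "C2": 3, "O": 2})
print(orders, penalty)  # C1-C2 single, C2=O double, penalty 0
```

Exhaustive search is exponential in the number of bonds, which is exactly why the paper replaces it with A*, ILP, and FPT formulations that find the same optimum efficiently.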
Enhancing AstroInformatics and Science Discovery from Data in Journal Articles
NASA Astrophysics Data System (ADS)
Mazzarella, Joseph
2011-05-01
Traditional methods of publishing scientific data and metadata in journal articles are in need of major upgrades to reach the full potential of astronomical databases and astroinformatics techniques to facilitate semi-automated, and eventually autonomous, methods of science discovery. I will review a growing collaboration involving the NASA/IPAC Extragalactic Database (NED), the Astrophysics Data System (ADS), the Virtual Astronomical Observatory (VAO), the AAS Journals and IOP, and the Data Conservancy that is aimed toward transforming the methodology used to publish, capture and link data associated with astrophysics journal articles. We are planning a web-based workflow to assist astronomers during the publication of journal articles. The primary goals are to facilitate the application of structure and standards to (meta)data, reduce errors, remove ambiguities in the identification of astrophysical objects and regions of sky, capture and preserve the images and spectral data files used to make plots, and accelerate the ingestion of the data into relevant repositories, search engines and integration services. The outcome of this community wide effort will address a recent public policy mandate to publish scientific data in open formats to allow reproducibility of results and to facilitate new discoveries. Equally important, this work has the potential to usher in a new wave of science discovery based on seamless connectivity between data relationships that are continuously growing in size and complexity, and increasingly sophisticated data visualization and analysis applications.
MARC and Relational Databases.
ERIC Educational Resources Information Center
Llorens, Jose; Trenor, Asuncion
1993-01-01
Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)
Riff, Andrew J; Frank, Rachel M; Sumner, Shelby; Friel, Nicole; Bach, Bernard R; Verma, Nikhil N; Romeo, Anthony A
2017-12-01
Arthroscopic stabilization is the most broadly used surgical procedure in the United States for management of recurrent shoulder instability. Latarjet coracoid transfer has been considered a salvage surgical procedure for failed arthroscopic repairs or cases of significant glenoid bone loss; however, with recent literature suggesting reduced risk of recurrent instability with Latarjet, several surgeons have advocated its broader utilization as a primary operation for treatment of shoulder instability. To determine trends in shoulder stabilization techniques used in the United States. Cross-sectional study. A retrospective analysis of a publicly available national insurance database was performed to identify shoulder stabilization procedures performed over 9 years (2007-2015). The following Current Procedural Terminology codes were searched: 29806 (arthroscopic stabilization), 23455 (open capsulolabral repair), 23466 (open capsular shift), 23462 (Latarjet coracoid transfer), and 23460 (open anterior capsulorrhaphy with other bone block augmentation). Outcomes of interest included (1) trends in the use of each technique throughout the study interval, (2) age and sex distributions of patients undergoing each technique, and (3) regional predilections for the use of each technique. Arthroscopic stabilization was the most broadly used shoulder stabilization procedure in the database (87%), followed by open Bankart (7%), Latarjet (3.2%), open capsular shift (2.6%), and alternative bone block procedure (0.8%). Throughout the study period, the incidence of arthroscopic stabilization and Latarjet increased (8% and 15% per year, respectively); the incidence of open capsular shift remained relatively constant; and the incidence of open Bankart decreased (9% per year). 
Arthroscopic stabilization, open Bankart, and Latarjet each had similar sex-based distributions (roughly 70% male), while open capsular shift and alternative bone block were relatively more common in females (54% and 50% male, respectively). The incidence of arthroscopic stabilization and Latarjet were greatest in the South and lowest in the Northeast. Arthroscopic stabilization remains the most commonly utilized stabilization technique in the United States. The use of the Latarjet procedure is steadily increasing and now rivals open Bankart stabilization among the most commonly used open stabilization techniques.
EuroPineDB: a high-coverage web database for maritime pine transcriptome
2011-01-01
Background Pinus pinaster is an economically and ecologically important species that is becoming a woody gymnosperm model. Its enormous genome size makes whole-genome sequencing approaches hard to apply. Therefore, the expressed portion of the genome has to be characterised and the results and annotations have to be stored in dedicated databases. Description EuroPineDB is the largest sequence collection available for a single pine species, Pinus pinaster (maritime pine), since it comprises 951 641 raw sequence reads obtained from non-normalised cDNA libraries and high-throughput sequencing from adult (xylem, phloem, roots, stem, needles, cones, strobili) and embryonic (germinated embryos, buds, callus) maritime pine tissues. Using open-source tools, sequences were optimally pre-processed, assembled, and extensively annotated (GO, EC and KEGG terms, descriptions, SNPs, SSRs, ORFs and InterPro codes). As a result, a 10.5× P. pinaster genome was covered and assembled in 55 322 UniGenes. A total of 32 919 (59.5%) of P. pinaster UniGenes were annotated with at least one description, revealing at least 18 466 different genes. The complete database, which is designed to be scalable, maintainable, and expandable, is freely available at: http://www.scbi.uma.es/pindb/. It can be retrieved by gene libraries, pine species, annotations, UniGenes and microarrays (i.e., the sequences are distributed in two-colour microarrays; this is the only conifer database that provides this information) and will be periodically updated. Small assemblies can be viewed using a dedicated visualisation tool that connects them with SNPs. Any sequence or annotation set shown on-screen can be downloaded. Retrieval mechanisms for sequences and gene annotations are provided.
Conclusions The EuroPineDB with its integrated information can be used to reveal new knowledge, offers an easy-to-use collection of information to directly support experimental work (including microarray hybridisation), and provides deeper knowledge on the maritime pine transcriptome. PMID:21762488
[National Database of Genotypes--ethical and legal issues].
Franková, Vera; Tesínová, Jolana; Brdicka, Radim
2011-01-01
The aim of the project National Database of Genotypes is to outline the structure and rules for the operation of a database collecting information about the genotypes of individual persons. The database should be used entirely for health care. Its purpose is to enable physicians to gain quick and easy access to information about persons requiring specialized care due to their genetic constitution. In the future, further introduction of new genetic tests into clinical practice can be expected; the database of genotypes thus facilitates substantial financial savings by excluding duplicates of expensive genetic testing. Ethical questions connected with the creation and functioning of such a database concern mainly privacy protection, confidentiality of sensitive personal data, protection of the database from misuse, consent to participation, and public interests. Due to the necessity of correct interpretation by a qualified professional (i.e. a clinical geneticist), a particular categorization of genetic data within the database is discussed. The operation of the proposed database has to be governed in concordance with Czech legislation, together with resolution of the ethical problems.
SQLGEN: a framework for rapid client-server database application development.
Nadkarni, P M; Cheung, K H
1995-12-01
SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
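The idea of generating SQL from an active data dictionary can be sketched briefly. The dictionary contents, table, and function names below are invented; SQLGEN's actual dictionary schema is not described in the abstract.

```python
# Minimal sketch of dictionary-driven SQL generation in the spirit of SQLGEN:
# table metadata lives in a client-side dictionary, and common statements are
# generated from it rather than hand-written per table.
DICTIONARY = {
    "patient": {"columns": ["id", "name", "dob"], "key": "id"},
}

def gen_select(table, where=None):
    """Generate a parameterized SELECT for all columns of a dictionary table."""
    meta = DICTIONARY[table]
    sql = f"SELECT {', '.join(meta['columns'])} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col in where)
    return sql

def gen_update(table, columns):
    """Generate a parameterized UPDATE keyed on the table's primary key."""
    meta = DICTIONARY[table]
    sets = ", ".join(f"{col} = ?" for col in columns)
    return f"UPDATE {table} SET {sets} WHERE {meta['key']} = ?"

print(gen_select("patient", where=["name"]))
# SELECT id, name, dob FROM patient WHERE name = ?
print(gen_update("patient", ["name", "dob"]))
# UPDATE patient SET name = ?, dob = ? WHERE id = ?
```

Because every statement is derived from the dictionary, adding a column or table is a metadata change rather than a code change, which is what makes this style of framework quick to prototype with.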
IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.
Domínguez, César; Heras, Jónathan; Pascual, Vico
2017-05-01
The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
PubMedPortable: A Framework for Supporting the Development of Text Mining Applications.
Döring, Kersten; Grüning, Björn A; Telukunta, Kiran K; Thomas, Philippe; Günther, Stefan
2016-01-01
Information extraction from biomedical literature is continuously growing in scope and importance. Many tools exist that perform named entity recognition, e.g. of proteins, chemical compounds, and diseases. Furthermore, several approaches deal with the extraction of relations between identified entities. The BioCreative community supports these developments with yearly open challenges, which led to a standardised XML text annotation format called BioC. PubMed provides access to the largest open biomedical literature repository, but there is no unified way of connecting its data to natural language processing tools. Therefore, an appropriate data environment is needed as a basis to combine different software solutions and to develop customised text mining applications. PubMedPortable builds a relational database and a full text index on PubMed citations. It can be applied either to the complete PubMed data set or an arbitrary subset of downloaded PubMed XML files. The software provides the infrastructure to combine stand-alone applications by exporting different data formats, e.g. BioC. The presented workflows show how to use PubMedPortable to retrieve, store, and analyse a disease-specific data set. The provided use cases are well documented in the PubMedPortable wiki. The open-source software library is small, easy to use, and scalable to the user's system requirements. It is freely available for Linux on the web at https://github.com/KerstenDoering/PubMedPortable and for other operating systems as a virtual container. The approach was tested extensively and applied successfully in several projects.
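The combination of a relational citation store with a full-text index can be shown at toy scale. This sketch uses SQLite with its FTS5 extension purely for illustration; PubMedPortable itself targets full PubMed loads, and the schema, column names, and sample citations here are invented.

```python
import sqlite3

# Relational table of citations plus an external-content full-text index.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE citation (pmid INTEGER PRIMARY KEY, title TEXT, abstract TEXT)"
)
conn.execute(
    "CREATE VIRTUAL TABLE citation_fts USING fts5"
    "(title, abstract, content='citation', content_rowid='pmid')"
)

rows = [
    (1, "Aspirin and stroke", "Aspirin reduces stroke risk."),
    (2, "Statins in cardiology", "Statins lower cholesterol."),
]
conn.executemany("INSERT INTO citation VALUES (?, ?, ?)", rows)
# Keep the index in sync with the content table.
conn.executemany(
    "INSERT INTO citation_fts (rowid, title, abstract) VALUES (?, ?, ?)", rows
)

# Full-text query returns the matching PMIDs.
hits = conn.execute(
    "SELECT rowid FROM citation_fts WHERE citation_fts MATCH ?", ("stroke",)
).fetchall()
print(hits)  # [(1,)]
```

A disease-specific subset, as in the paper's use cases, is then just a full-text query whose hits are joined back to the relational table for further analysis.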
Meiler, Arno; Klinger, Claudia; Kaufmann, Michael
2012-09-08
The COG database is the most popular collection of orthologous proteins from many different completely sequenced microbial genomes. Per definition, a cluster of orthologous groups (COG) within this database exclusively contains proteins that most likely achieve the same cellular function. Recently, the COG database was extended by assigning to every protein both the corresponding amino acid and its encoding nucleotide sequence resulting in the NUCOCOG database. This extended version of the COG database is a valuable resource connecting sequence features with the functionality of the respective proteins. Here we present ANCAC, a web tool and MySQL database for the analysis of amino acid, nucleotide, and codon frequencies in COGs on the basis of freely definable phylogenetic patterns. We demonstrate the usefulness of ANCAC by analyzing amino acid frequencies, codon usage, and GC-content in a species- or function-specific context. With respect to amino acids we, at least in part, confirm the cognate bias hypothesis by using ANCAC's NUCOCOG dataset as the largest one available for that purpose thus far. Using the NUCOCOG datasets, ANCAC connects taxonomic, amino acid, and nucleotide sequence information with the functional classification via COGs and provides a GUI for flexible mining for sequence-bias. Thereby, to our knowledge, it is the only tool for the analysis of sequence composition in the light of physiological roles and phylogenetic context without requirement of substantial programming-skills.
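The frequency statistics ANCAC computes over NUCOCOG sequences reduce to simple counting, which can be sketched as follows. The coding sequence below is invented and the functions are illustrative, not ANCAC's API.

```python
from collections import Counter

def codon_frequencies(cds):
    """Relative codon frequencies of a coding sequence, read in frame from
    position 0; a trailing partial codon is ignored."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

def gc_content(seq):
    """Fraction of G and C bases in a nucleotide sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

cds = "ATGGCCGCCTAA"  # Met-Ala-Ala-Stop, invented
freqs = codon_frequencies(cds)
print(freqs["GCC"])     # 0.5 (2 of 4 codons)
print(gc_content(cds))  # 7 of 12 bases are G or C
```

ANCAC's contribution is running exactly this kind of tally across whole COGs restricted by user-defined phylogenetic patterns, so species- or function-specific biases become visible.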
EuCliD (European Clinical Database): a database comparing different realities.
Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E
2001-01-01
The quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using as far as possible international standard coding tables. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server. All the Domino servers are connected via a Wide Area Network to the headquarters server in Bad Homburg (Germany). Inside each country server, only anonymous data related to that particular country are available. The only place where all the anonymous data are available is the headquarters server. The data collection is strongly supported in each country by "key persons" with solid relationships to their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. At the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients...), thus demonstrating the quality of the service.
Jairam, Vikram; Yu, James B
2016-01-01
To use the Centers for Medicare and Medicaid Services Open Payments database to characterize payments made to radiation oncologists and compare their payment profile with that of medical and surgical oncologists. The June 2015 release of the Open Payments database was accessed, containing all payments made to physicians in 2014. The general payments dataset was used for analysis. Data on payments made to medical, surgical, and radiation oncologists was obtained and compared. Within radiation oncology, data regarding payment category, sponsorship, and geographic distribution were identified. Basic statistics including mean, median, range, and sum were calculated by provider and by transaction. Among the 3 oncologic specialties, radiation oncology had the smallest proportion (58%) of compensated physicians and the lowest mean ($1620) and median ($112) payment per provider. Surgical oncology had the highest proportion (84%) of compensated physicians, whereas medical oncology had the highest mean ($6371) and median ($448) payment per physician. Within radiation oncology, nonconsulting services accounted for the most money to physicians ($1,042,556), whereas the majority of the sponsors were medical device companies (52%). Radiation oncologists in the West accepted the most money ($2,041,603) of any US Census region. Radiation oncologists in 2014 received a large number of payments from industry, although less than their medical or surgical counterparts. As the Open Payments database continues to be improved, it remains to be seen whether this information will be used by patients to inform choice of providers or by lawmakers to enact policy regulating physician-industry relationships. Copyright © 2016 Elsevier Inc. All rights reserved.
Van Berkel, Gary J.; Kertesz, Vilmos
2016-11-15
An “Open Access”-like mass spectrometric platform to fully utilize the simplicity of the manual open port sampling interface for rapid characterization of unprocessed samples by liquid introduction atmospheric pressure ionization mass spectrometry has been lacking. The in-house developed integrated software with a simple, small and relatively low-cost mass spectrometry system introduced here fills this void. Software was developed to operate the mass spectrometer, to collect and process mass spectrometric data files, to build a database and to classify samples using such a database. These tasks were accomplished via the vendor-provided software libraries. Sample classification based on spectral comparison utilized the spectral contrast angle method. As a result, using the developed software platform, near real-time sample classification is exemplified using a series of commercially available blue ink rollerball pens and vegetable oils. In the case of the inks, full scan positive and negative ion ESI mass spectra were both used for database generation and sample classification. For the vegetable oils, full scan positive ion mode APCI mass spectra were recorded. The overall accuracy of the employed spectral contrast angle statistical model was 95.3% and 98% in the case of the inks and oils, respectively, using leave-one-out cross-validation. In conclusion, this work illustrates that an open port sampling interface/mass spectrometer combination, with appropriate instrument control and data processing software, is a viable direct liquid extraction sampling and analysis system suitable for the non-expert user and near real-time sample classification via database matching.
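The spectral contrast angle used for classification here has a standard definition: treating each spectrum as a vector of intensities over shared m/z bins, the angle between spectra a and b is arccos(a·b / (|a||b|)), and an unknown is assigned to the library entry with the smallest angle. A minimal sketch, with invented library spectra:

```python
import math

def contrast_angle(a, b):
    """Spectral contrast angle (radians) between two intensity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(min(1.0, max(-1.0, dot / (norm_a * norm_b))))

def classify(unknown, library):
    """Assign the unknown spectrum to the library entry with smallest angle."""
    return min(library, key=lambda name: contrast_angle(unknown, library[name]))

# Invented 4-bin spectra standing in for ink database entries.
library = {
    "ink_A": [10.0, 2.0, 0.5, 0.0],
    "ink_B": [0.2, 8.0, 5.0, 1.0],
}
unknown = [9.0, 2.5, 0.4, 0.1]
print(classify(unknown, library))  # ink_A
```

Because the angle depends only on relative peak intensities, not absolute signal level, it tolerates run-to-run variation in overall intensity, which suits the rapid, unprocessed sampling described above.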
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... requirements for the agency (DHS) to respect individuals' rights to control their information in possession of... Database System of Records is a repository of information held by DHS in connection with its several and.... The DHS/ALL-030 Use of Terrorist Screening Database System of Records contains information that is...
49 CFR 192.155 - Welded branch connections.
Code of Federal Regulations, 2010 CFR
2010-10-01
... not reduced, taking into account the stresses in the remaining pipe wall due to the opening in the pipe or header, the shear stresses produced by the pressure acting on the area of the branch opening...
49 CFR 192.155 - Welded branch connections.
Code of Federal Regulations, 2012 CFR
2012-10-01
... not reduced, taking into account the stresses in the remaining pipe wall due to the opening in the pipe or header, the shear stresses produced by the pressure acting on the area of the branch opening...
49 CFR 192.155 - Welded branch connections.
Code of Federal Regulations, 2013 CFR
2013-10-01
... not reduced, taking into account the stresses in the remaining pipe wall due to the opening in the pipe or header, the shear stresses produced by the pressure acting on the area of the branch opening...
49 CFR 192.155 - Welded branch connections.
Code of Federal Regulations, 2011 CFR
2011-10-01
... not reduced, taking into account the stresses in the remaining pipe wall due to the opening in the pipe or header, the shear stresses produced by the pressure acting on the area of the branch opening...
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery by either Internet or Intranet connections are changing education. Microanatomy education depends basically on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable the presentation of these images in a digital form by creating image databases. However, access to this new technology is restricted to those living in cities and towns with an Information Technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and also presents an inexpensive way to supply it to a greater number of students through Internet/Intranet connections. By using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, intends to help in the reduction of the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
Hirata, Yasutaka; Hirahara, Norimichi; Murakami, Arata; Motomura, Noboru; Miyata, Hiroaki; Takamoto, Shinichi
2018-01-01
We analyzed the mortality and morbidity of congenital heart surgery in Japan using the Japan Cardiovascular Surgery Database (JCVSD). Data regarding congenital heart surgery performed between January 2013 and December 2014 were obtained from JCVSD. The 20 most frequent procedures were selected and the mortality rates and major morbidities were analyzed. The mortality rates of atrial septal defect repair and ventricular septal defect repair were less than 1%, and the mortality rates of tetralogy of Fallot repair, complete atrioventricular septal defect repair, bidirectional Glenn, and total cavopulmonary connection were less than 2%. The mortality rates of the Norwood procedure and total anomalous pulmonary venous connection repair were more than 10%. The rates of unplanned reoperation, pacemaker implantation, chylothorax, deep sternal infection, phrenic nerve injury, and neurological deficit were shown for each procedure. Using JCVSD, the national data for congenital heart surgery, including postoperative complications, were analyzed. Further improvements of the database and feedback for clinical practice are required.
National Institute of Standards and Technology Data Gateway
SRD 115 Hydrocarbon Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 114 Diatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 117 Triatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
ERIC Educational Resources Information Center
Medeiros Vieira, Leandro Mauricio; Ferasso, Marcos; Schröeder, Christine da Silva
2014-01-01
This theoretical essay is a learning approach reflexion on Howard Gardner's Theory of Multiple Intelligences and the possibilities provided by the education model known as open and distance learning. Open and distance learning can revolutionize traditional pedagogical practice, meeting the needs of those who have different forms of cognitive…
Vacuum-Gauge Connection For Shipping Container
NASA Technical Reports Server (NTRS)
Henry, Robert H.
1990-01-01
External connector enables measurement of vacuum in stored part. Remote-readout connector added to shipping container and connected to thermo-couple vacuum gauge in vacuum-insulated cryogenic line packed in container. Enables monitoring of condition of vacuum without opening container.
Brohée, Sylvain; Barriot, Roland; Moreau, Yves
2010-09-01
In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data is made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.
An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.
Silva, Ikaro; Moody, George B
The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.
VarioML framework for comprehensive variation data representation and exchange.
Byrne, Myles; Fokkema, Ivo Fac; Lancaster, Owen; Adamusiak, Tomasz; Ahonen-Bishopp, Anni; Atlan, David; Béroud, Christophe; Cornell, Michael; Dalgleish, Raymond; Devereau, Andrew; Patrinos, George P; Swertz, Morris A; Taschner, Peter Em; Thorisson, Gudmundur A; Vihinen, Mauno; Brookes, Anthony J; Muilu, Juha
2012-10-03
Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs) e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity.
VarioML framework for comprehensive variation data representation and exchange
2012-01-01
Background Sharing of data about variation and the associated phenotypes is a critical need, yet variant information can be arbitrarily complex, making a single standard vocabulary elusive and re-formatting difficult. Complex standards have proven too time-consuming to implement. Results The GEN2PHEN project addressed these difficulties by developing a comprehensive data model for capturing biomedical observations, Observ-OM, and building the VarioML format around it. VarioML pairs a simplified open specification for describing variants, with a toolkit for adapting the specification into one's own research workflow. Straightforward variant data can be captured, federated, and exchanged with no overhead; more complex data can be described, without loss of compatibility. The open specification enables push-button submission to gene variant databases (LSDBs) e.g., the Leiden Open Variation Database, using the Cafe Variome data publishing service, while VarioML bidirectionally transforms data between XML and web-application code formats, opening up new possibilities for open source web applications building on shared data. A Java implementation toolkit makes VarioML easily integrated into biomedical applications. VarioML is designed primarily for LSDB data submission and transfer scenarios, but can also be used as a standard variation data format for JSON and XML document databases and user interface components. Conclusions VarioML is a set of tools and practices improving the availability, quality, and comprehensibility of human variation information. It enables researchers, diagnostic laboratories, and clinics to share that information with ease, clarity, and without ambiguity. PMID:23031277
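VarioML's bidirectional transformation between application data and XML can be illustrated with a minimal serializer. This is a hedged sketch only: the element names (`variant`, `ref_seq`, `name`) and the sample accession are invented for illustration and do not reproduce the actual VarioML schema, which defines its own vocabulary.

```python
import xml.etree.ElementTree as ET

def variant_to_xml(variant):
    """Serialize a minimal variant record (a dict) to an XML fragment.

    Element and attribute names are illustrative stand-ins for the
    real VarioML vocabulary.
    """
    root = ET.Element("variant")
    ET.SubElement(root, "ref_seq", accession=variant["ref_seq"])
    name = ET.SubElement(root, "name", scheme="hgvs")
    name.text = variant["hgvs"]
    return ET.tostring(root, encoding="unicode")
```

A round trip back from XML to a dict (via `ET.fromstring`) would give the other direction of the exchange the abstract describes.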
Development of ISO connection-oriented/connectionless gateways
NASA Technical Reports Server (NTRS)
Landweber, Lawrence H.
1991-01-01
The project had two goals, establishment of a gateway between French and U.S. academic networks and studies of issues related to the development of ISO connection-oriented/connectionless (CO/CL) gateways. The first component involved installation of a 56K bps line between Princeton Univ. and INRIA in France. The end-points of these lines were connected by Vitalink link level bridges. The Princeton end was then connected to the NSFNET via the John Von Neumann Supercomputer Center. The French end was connected to Transpac, the French X.25 public data network, and to the French IP research internet. U.S. users may communicate with users of the French internet by e-mail and may access computational and data resources in France by use of remote login and file transfer. The connection to Transpac enables U.S. users to access the SIMBAD astronomical database outside of Paris. Access to this database from the U.S. can be via TCP/IP or DECNET (via a DECNET to TCP/IP gateway) protocols utilizing a TCP/IP to X.25 gateway developed and operated by INRIA. The second component of the project involved experiments aimed at understanding the issues involved in ISO CO/CL gateways. An experimental gateway was developed at Wisconsin and a preliminary report was prepared. Because of the need to devote most resources to the first component of the project, work in this area did not go beyond development of a prototype gateway.
Schacht Hansen, M; Dørup, J
2001-01-01
The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. 
However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control.
Hansen, Michael Schacht
2001-01-01
Background The Wireless Application Protocol technology implemented in newer mobile phones has built-in facilities for handling much of the information processing needed in clinical work. Objectives To test a practical approach we ported a relational database of the Danish pharmaceutical catalogue to Wireless Application Protocol using open source freeware at all steps. Methods We used Apache 1.3 web software on a Linux server. Data containing the Danish pharmaceutical catalogue were imported from an ASCII file into a MySQL 3.22.32 database using a Practical Extraction and Report Language script for easy update of the database. Data were distributed in 35 interrelated tables. Each pharmaceutical brand name was given its own card with links to general information about the drug, active substances, contraindications etc. Access was available through 1) browsing therapeutic groups and 2) searching for a brand name. The database interface was programmed in the server-side scripting language PHP3. Results A free, open source Wireless Application Protocol gateway to a pharmaceutical catalogue was established to allow dial-in access independent of commercial Wireless Application Protocol service providers. The application was tested on the Nokia 7110 and Ericsson R320s cellular phones. Conclusions We have demonstrated that Wireless Application Protocol-based access to a dynamic clinical database can be established using open source freeware. The project opens perspectives for a further integration of Wireless Application Protocol phone functions in clinical information processing: Global System for Mobile communication telephony for bilateral communication, asynchronous unilateral communication via e-mail and Short Message Service, built-in calculator, calendar, personal organizer, phone number catalogue and Dictaphone function via answering machine technology. 
An independent Wireless Application Protocol gateway may be placed within hospital firewalls, which may be an advantage with respect to security. However, if Wireless Application Protocol phones are to become effective tools for physicians, special attention must be paid to the limitations of the devices. Input tools of Wireless Application Protocol phones should be improved, for instance by increased use of speech control. PMID:11720946
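The catalogue's two access paths, browsing therapeutic groups and searching by brand name, both reduce to simple SQL queries against the relational backend. A minimal sketch of the search path, using `sqlite3` as a self-contained stand-in for the MySQL database described in the paper (the table, column names, and sample brands are invented for illustration):

```python
import sqlite3

# sqlite3 stands in for the MySQL backend; schema and data are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brand (name TEXT, therapeutic_group TEXT)")
conn.executemany(
    "INSERT INTO brand VALUES (?, ?)",
    [("Panodil", "analgesics"), ("Ibumetin", "NSAIDs")],
)

def search_brand(prefix):
    """Prefix search over brand names, as a WAP card list would require."""
    rows = conn.execute(
        "SELECT name FROM brand WHERE name LIKE ? ORDER BY name",
        (prefix + "%",),
    )
    return [r[0] for r in rows]
```

In the paper's setup, a PHP3 script would render such a result list as WML cards, one card per brand name, linked to detail cards.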
Translation from the collaborative OSM database to cartography
NASA Astrophysics Data System (ADS)
Hayat, Flora
2018-05-01
The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transports, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is in development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn. The drawing automation and data management are part of the mapping creation as well as the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
Performance Comparison of SDN Solutions for Switching Dedicated Long-Haul Connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2016-01-01
We consider scenarios with two sites connected over a dedicated, long-haul connection that must quickly fail over in response to degradations in host-to-host application performance. We present two methods for path fail-over using OpenFlow-enabled switches: (a) a light-weight method that utilizes host scripts to monitor the application performance and the dpctl API for switching, and (b) a generic method that uses two OpenDaylight (ODL) controllers and REST interfaces. The restoration dynamics of the application contain significant statistical variations due to the controllers, north interfaces and switches; in addition, the variety of vendor implementations further complicates the choice between different solutions. We present the impulse-response method to estimate the regressions of performance parameters, which enables a rigorous and objective comparison of different solutions. We describe testing results of the two methods, using TCP throughput and connection RTT as main parameters, over a testbed consisting of HP and Cisco switches connected over long-haul connections emulated in hardware by ANUE devices. The combination of analytical and experimental results demonstrates that the dpctl method responds seconds faster than the ODL method on average, while both methods restore TCP throughput.
Connectivity of Streams and Wetlands to Downstream Waters ...
The U.S. Environmental Protection Agency's (USEPA) Office of Research and Development has finalized the report Connectivity of Streams and Wetlands to Downstream Waters: A Review and Synthesis of the Scientific Evidence. The report reviews more than 1,200 peer-reviewed publications and summarizes current scientific understanding about the connectivity and mechanisms by which streams and wetlands, singly or in aggregate, affect the physical, chemical, and biological integrity of downstream waters. The focus of the report is on surface and shallow subsurface connections by which small or temporary streams, nontidal wetlands, and open waters affect larger waters such as rivers, lakes, reservoirs, and estuaries. This report represents the state-of-the-science on the connectivity and isolation of waters in the United States. It makes five major conclusions, summarized below, that are drawn from a broad range of peer reviewed scientific literature. The scientific literature unequivocally demonstrates that streams, regardless of their size or frequency of flow, are connected to downstream waters and strongly influence their function. The scientific literature clearly shows that wetlands and open waters in riparian areas (transitional areas between terrestrial and aquatic ecosystems) and floodplains are physically, chemically, and biologically integrated with rivers via functions that improve downstream water quality. These system
West end view, "Boat Shop" open doorway, Facility 7 ...
West end view, "Boat Shop" - open doorway, Facility 7 connected on the left, view facing east-southeast - U.S. Naval Base, Pearl Harbor, Boat Shop, Seventh Street near Avenue E, Pearl City, Honolulu County, HI
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
2011-07-01
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
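The core idea of a lightweight JSON interface over linked records, as opposed to a full SPARQL endpoint, is that a client script follows identifier links between records programmatically. A toy sketch of that access pattern (the record identifiers, field names, and resolver function are all invented for illustration, not the actual Semantic-JSON API):

```python
# A toy linked-data fragment: records reference each other by
# identifier, and a resolver follows those links. All identifiers
# and field names here are invented.
records = {
    "gene:001": {"label": "GeneA", "expressed_in": ["tissue:01"]},
    "tissue:01": {"label": "liver"},
}

def resolve(record_id, follow):
    """Return a copy of a record with one link field expanded in place."""
    rec = dict(records[record_id])
    rec[follow] = [records[target]["label"] for target in rec.get(follow, [])]
    return rec
```

In the real system the lookup would be an HTTP request to the service rather than a dictionary access, but the traversal logic a Perl or Ruby client performs is of this shape.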
JASPAR 2010: the greatly expanded open-access database of transcription factor binding profiles
Portales-Casamar, Elodie; Thongjuea, Supat; Kwon, Andrew T.; Arenillas, David; Zhao, Xiaobei; Valen, Eivind; Yusuf, Dimas; Lenhard, Boris; Wasserman, Wyeth W.; Sandelin, Albin
2010-01-01
JASPAR (http://jaspar.genereg.net) is the leading open-access database of matrix profiles describing the DNA-binding patterns of transcription factors (TFs) and other proteins interacting with DNA in a sequence-specific manner. Its fourth major release is the largest expansion of the core database to date: the database now holds 457 non-redundant, curated profiles. The new entries include the first batch of profiles derived from ChIP-seq and ChIP-chip whole-genome binding experiments, and 177 yeast TF binding profiles. The introduction of a yeast division brings the convenience of JASPAR to an active research community. As binding models are refined by newer data, the JASPAR database now uses versioning of matrices: in this release, 12% of the older models were updated to improved versions. Classification of TF families has been improved by adopting a new DNA-binding domain nomenclature. A curated catalog of mammalian TFs is provided, extending the use of the JASPAR profiles to additional TFs belonging to the same structural family. The changes in the database set the system ready for more rapid acquisition of new high-throughput data sources. Additionally, three new special collections provide matrix profile data produced by recent alternative high-throughput approaches. PMID:19906716
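A matrix profile of the kind JASPAR distributes is typically used by sliding it along a DNA sequence and scoring each window. A minimal sketch, using an invented length-3 count matrix purely for illustration (real profiles come from the database and are scored with log-odds weights rather than raw counts):

```python
# Invented position frequency matrix (counts per position) for a
# hypothetical length-3 motif; real JASPAR profiles are longer and
# are usually converted to log-odds scores before scanning.
PFM = {
    "A": [8, 0, 1],
    "C": [1, 0, 8],
    "G": [0, 9, 0],
    "T": [0, 0, 0],
}

def score_window(window):
    """Sum of per-position counts; higher means closer to the motif."""
    return sum(PFM[base][i] for i, base in enumerate(window))

def best_site(sequence, width=3):
    """Slide the motif across the sequence and return the best window."""
    windows = [sequence[i:i + width] for i in range(len(sequence) - width + 1)]
    return max(windows, key=score_window)
```

The binding-site prediction workflow the database supports amounts to this scan, applied genome-wide with a score threshold.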
Türei, Dénes; Papp, Diána; Fazekas, Dávid; Földvári-Nagy, László; Módos, Dezső; Lenti, Katalin; Csermely, Péter; Korcsmáros, Tamás
2013-01-01
NRF2 is the master transcriptional regulator of oxidative and xenobiotic stress responses. NRF2 has important roles in carcinogenesis, inflammation, and neurodegenerative diseases. We developed an online resource, NRF2-ome, to provide an integrated and systems-level database for NRF2. The database contains manually curated and predicted interactions of NRF2 as well as data from external interaction databases. We integrated NRF2 interactome with NRF2 target genes, NRF2 regulating TFs, and miRNAs. We connected NRF2-ome to signaling pathways to allow mapping upstream NRF2 regulatory components that could directly or indirectly influence NRF2 activity totaling 35,967 protein-protein and signaling interactions. The user-friendly website allows researchers without computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and in a Cytoscape CYS file formats. We illustrated the applicability of the website by suggesting a posttranscriptional negative feedback of NRF2 by MAFG protein and raised the possibility of a connection between NRF2 and the JAK/STAT pathway through STAT1 and STAT3. NRF2-ome can also be used as an evaluation tool to help researchers and drug developers to understand the hidden regulatory mechanisms in the complex network of NRF2.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.
Unlocking Index Animalium: From paper slips to bytes and bits
Pilsk, Suzanne C.; Kalfatovic, Martin R.; Richard, Joel M.
2016-01-01
Abstract In 1996 Smithsonian Libraries (SIL) embarked on the digitization of its collections. By 1999, a full-scale digitization center was in place and rare volumes from the natural history collections, often of high illustrative value, were the focus for the first years of the program. The resulting beautiful books made available for online display were successful to a certain extent, but it soon became clear that the data locked within the texts needed to be converted to more usable and re-purposable form via digitization methods that went beyond simple page imaging and included text conversion elements. Library staff met with researchers from the taxonomic community to understand their path to the literature and identified tools (indexes and bibliographies) used to connect to the library holdings. The traditional library metadata describing the titles, which made them easily retrievable from the shelves of libraries, was not meeting the needs of the researcher looking for more detailed and granular data within the texts. The result was to identify proper print tools that could potentially assist researchers in digital form. This paper outlines the project undertaken to convert Charles Davies Sherborn’s Index Animalium into a tool to connect researchers to the library holdings: from a print index to a database to eventually a dataset. Sherborn’s microcitation of a species name and his bibliographies help bridge the gap between taxonomist and literature holdings of libraries. In 2004, SIL received funding from the Smithsonian’s Atherton Seidell Endowment to create an online version of Sherborn’s Index Animalium. The initial project was to digitize the page images and re-key the data into a simple data structure. As the project evolved, a more complex database was developed which enabled quality field searching to retrieve species names and to search the bibliography.
Problems with inconsistent abbreviations and styling of his bibliographies made the parsing of the data difficult. Coinciding with the development of the Biodiversity Heritage Library (BHL) in 2005, it became obvious there was a need to integrate the database-converted Index Animalium, BHL’s scanned taxonomic literature, and taxonomic intelligence (the algorithmic identification of binomial, Latinate name-strings). The challenges of working with legacy taxonomic citation, computer matching algorithms, and making connections have brought us to today’s goal of making Sherborn available and linked to other datasets. Partnering with others to allow machine-to-machine communication, the data are being examined for possible transformation into RDF markup and meeting the standards of Linked Open Data. SIL staff have partnered with Thomson Reuters and the Global Names Initiative to further enhance the Index Animalium data set. Thomson Reuters’ staff is now working on integrating the species microcitation and species name in the ION: Index to Organism Names project; Richard Pyle (The Bishop Museum) is also working on further parsing of the text. The Index Animalium collaborative project’s ultimate goal is to have researchers go seamlessly from the species name in either ION or the scanned pages of Index Animalium to the digitized original description in BHL - connecting taxonomic researchers to original authored species descriptions with just a click. PMID:26877657
Convertible socket for pressure gauge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bissell, R.D.
1990-01-01
This patent describes a pressure gauge having a case in which is disposed a Bourdon tube and a base socket connected to the Bourdon tube for placing the tube in pressure communication with a fluid pressure source. The base socket has a rearward face and a bottom face, adjacent openings defined through the case with respect to the gauge, and an internal passage communicating with the tube. It includes means for connecting a source of fluid pressure to the socket selectively through one of the case openings to the bottom face or the rearward face.
REFLECTOR CONTROL OF A BOILING-WATER REACTOR
Treshow, M.
1962-05-22
A line connecting the reactor with a spent steam condenser contains a valve set to open when the pressure in the reactor exceeds a predetermined value and an orifice on the upstream side of the valve. Another line connects the reflector with this line between the orifice and the valve. An excess steam pressure causes the valve to open and the flow of steam through the line draws water out of the reflector. Provision is also made for adding water to the reflector when the steam pressure drops. (AEC)
Bellucci, Luca; Angeli, Lucilla; Tafi, Andrea; Radi, Marco; Botta, Maurizio
2013-12-23
Targeted molecular dynamics (TMD) simulations allowed for identifying the chemical/structural features of the nucleotide-competitive HIV-1 inhibitor DAVP-1 that are responsible for the disruption of the T-shape motif between Tyr183 and Trp229 of the reverse transcriptase (RT). DAVP-1 promoted the opening of a connection "gate" between the allosteric and catalytic sites of HIV-1 RT, thus explaining its peculiar mechanism of action and providing useful insights to develop novel nucleotide-competitive RT inhibitors.
OPEN PROBLEM: Turbulence transition in pipe flow: some open questions
NASA Astrophysics Data System (ADS)
Eckhardt, Bruno
2008-01-01
The transition to turbulence in pipe flow is a longstanding problem in fluid dynamics. In contrast to many other transitions it is not connected with linear instabilities of the laminar profile and hence follows a different route. Experimental and numerical studies within the last few years have revealed many unexpected connections to the nonlinear dynamics of strange saddles and have considerably improved our understanding of this transition. The text summarizes some of these insights and points to some outstanding problems in areas where valuable contributions from nonlinear dynamics can be expected.
Do well-connected landscapes promote road-related mortality?
Grilo, C.; Ascensao, F.; Santos-Reis, M.; Bissonette, J.A.
2011-01-01
Cost surface (CS) models have emerged as a useful tool to examine the interactions between landscape patterns and wildlife at large-scale extents. This approach is particularly relevant to guide conservation planning for species that show vulnerability to road networks in human-dominated landscapes. In this study, we measured the functional connectivity of the landscape in southern Portugal and examined how it may be related to stone marten road mortality risk. We addressed three questions: (1) How do different levels of landscape connectivity influence stone marten occurrence in montado patches? (2) Is there any relation between montado patch connectivity and stone marten road mortality risk? (3) If so, which road-related features might be responsible for the species' high road mortality? We developed a series of connectivity models using CS scenarios with different resistance values given to each vegetation cover type to reflect different resistance to species movement. Our models showed that the likelihood of occurrence of stone marten decreased with distance to source areas, meaning continuous montado. Open areas and riparian areas within open area matrices entailed increased costs. We found higher stone marten mortality on roads in well-connected areas. Road sinuosity was an important factor influencing the mortality in those areas. This result challenges the way that connectivity and its relation to mortality have generally been regarded. Clearly, landscape connectivity and road-related mortality are not independent. © 2010 Springer-Verlag.
NASA Astrophysics Data System (ADS)
Lill, Adrian Wilfred Thomas; Schallenberg, Marc; Lal, Aparna; Savage, Candida; Closs, Gerard Patrick
2013-08-01
Morphometric and physicochemical variables are key determinants of biotic community structure in estuaries and are influenced by changes to estuary mouth state (open/closed). This study examined and compared the consequences of intermittent connection to the ocean on environmental gradients among estuaries; specifically, how estuary morphology and hydrology relate to physical connection to the sea, and the influence of this relationship on the physicochemical environment. By sampling 20 estuaries across New Zealand and using historical aerial photographs, a continuous index of estuarine connection to the ocean was developed and independently validated using berm elevation derived from Airborne Laser Scanning (ALS) data. Using published literature, this index was compared to equivalent indices in South Africa and Australia. A clear relationship between connections to the ocean, freshwater flow and productivity indices underlie the environmental differences between permanently open and intermittently closed estuaries. Consistent patterns across the Southern Hemisphere, albeit with regional variations in estuarine characteristics, suggest that remote sensing is useful for predicting the physicochemical environment of small estuaries across regions. Principal components analysis for Otago estuaries showed that 40% of measured variation in the environment could be attributed to the gradient of relative connectivity (EOI), or isolation (berm elevation) to the ocean. Evaluating these relationships is central to understanding how global and local environmental changes may affect estuarine connectivity regimes and, ultimately, the functioning of estuarine ecosystems.
Catalog of open clusters and associated interstellar matter
NASA Technical Reports Server (NTRS)
Leisawitz, David
1988-01-01
The Catalog of Open Clusters and Associated Interstellar Matter summarizes observations of 128 open clusters and their associated ionized, atomic, and molecular interstellar matter. Cluster sizes, distances, radial velocities, ages, and masses, and the radial velocities and masses of associated interstellar medium components, are given. The database contains information from approximately 400 references published in the scientific literature before 1988.
Looking toward the Future: A Case Study of Open Source Software in the Humanities
ERIC Educational Resources Information Center
Quamen, Harvey
2006-01-01
In this article Harvey Quamen examines how the philosophy of open source software might be of particular benefit to humanities scholars in the near future--particularly for academic journals with limited financial resources. To this end he provides a case study in which he describes his use of open source technology (MySQL database software and…
The Role of Free/Libre and Open Source Software in Learning Health Systems.
Paton, C; Karopka, T
2017-08-01
Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature utilising the MedFLOSS database of open source systems in healthcare to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The nature of FLOSS systems to be collaborative, modular, and modifiable may make open source approaches appropriate for building the digital infrastructure for a LHS. Georg Thieme Verlag KG Stuttgart.
Design of web platform for science and engineering in the model of open market
NASA Astrophysics Data System (ADS)
Demichev, A. P.; Kryukov, A. P.
2016-09-01
This paper presents the design and operation algorithms of a web platform for convenient, secure and effective remote interaction, on the principles of an open market, between users and providers of scientific application software and databases.
KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes
Koulaouzidis, Anastasios; Iakovidis, Dimitris K.; Yung, Diana E.; Rondonotti, Emanuele; Kopylov, Uri; Plevris, John N.; Toth, Ervin; Eliakim, Abraham; Wurm Johansson, Gabrielle; Marlicz, Wojciech; Mavrogenis, Georgios; Nemeth, Artur; Thorlacius, Henrik; Tontini, Gian Eugenio
2017-01-01
Background and aims: Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. Methods: Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and a MLA. The Jaccard Index (JI) was used to evaluate similarity between annotations by the MLA and human readers. Results: The MLA performed best in measuring lymphangiectasias with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). Conclusion: MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid development of MDSS. Our experiment demonstrates this potential. PMID:28580415
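The Jaccard Index used in the experiment above has a simple definition: the size of the intersection of two annotated regions divided by the size of their union. A minimal sketch in Python, assuming annotations are represented as sets of pixel coordinates (the abstract does not specify the representation):

```python
def jaccard_index(a: set, b: set) -> float:
    """Jaccard Index |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    if not a and not b:
        return 1.0  # two empty annotations are treated as identical by convention
    return len(a & b) / len(a | b)

# Hypothetical example: reader vs. algorithm pixel annotations of one lesion
reader = {(0, 0), (0, 1), (1, 0), (1, 1)}
algorithm = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(jaccard_index(reader, algorithm))  # intersection 3, union 5 -> 0.6
```

Reported values such as "JI 81 ± 6 %" are this quantity averaged over many lesions and expressed as a percentage.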
Fonseca, Carissa G; Backhaus, Michael; Bluemke, David A; Britten, Randall D; Chung, Jae Do; Cowan, Brett R; Dinov, Ivo D; Finn, J Paul; Hunter, Peter J; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A
2011-08-15
Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). Availability: http://www.cardiacatlas.org. Contact: a.young@auckland.ac.nz. Supplementary data are available at Bioinformatics online.
O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore
2014-12-01
Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. In lacking a common benchmark, the relative performances of different systems are not compared easily and advances are compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring. © 2014 European Sleep Research Society.
An open access thyroid ultrasound image database
NASA Astrophysics Data System (ADS)
Pedraza, Lina; Vargas, Carlos; Narváez, Fabián.; Durán, Oscar; Muñoz, Emma; Romero, Eduardo
2015-01-01
Computer aided diagnosis systems (CAD) have been developed to assist radiologists in the detection and diagnosis of abnormalities, and a large number of pattern recognition techniques have been proposed to obtain a second opinion. Most of these strategies have been evaluated using different datasets, making their performance incomparable. In this work, an open access database of thyroid ultrasound images is presented. The dataset consists of a set of B-mode ultrasound images, including a complete annotation and diagnostic description of suspicious thyroid lesions by expert radiologists. Several types of lesions, such as thyroiditis, cystic nodules, adenomas and thyroid cancers, were included, and an accurate lesion delineation is provided in XML format. The diagnostic description of malignant lesions was confirmed by biopsy. The proposed new database is expected to be a resource for the community to assess different CAD systems.
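Lesion delineations delivered in XML, as above, can be consumed with any standard XML parser. A minimal sketch using Python's ElementTree; the element and attribute names here are hypothetical, since the abstract does not specify the schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation snippet; the actual database schema may differ.
XML = """
<annotation diagnosis="benign">
  <lesion type="cystic nodule">
    <point x="120" y="88"/>
    <point x="131" y="90"/>
    <point x="128" y="101"/>
  </lesion>
</annotation>
"""

def parse_delineation(xml_text):
    """Extract the lesion type and its delineation polygon as (x, y) tuples."""
    root = ET.fromstring(xml_text)
    lesion = root.find("lesion")
    points = [(int(p.get("x")), int(p.get("y"))) for p in lesion.findall("point")]
    return lesion.get("type"), points

kind, polygon = parse_delineation(XML)
print(kind, len(polygon))  # cystic nodule 3
```

A CAD evaluation harness would compare such expert polygons against algorithm output lesion by lesion.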
The IntAct molecular interaction database in 2012
Kerrien, Samuel; Aranda, Bruno; Breuza, Lionel; Bridge, Alan; Broackes-Carter, Fiona; Chen, Carol; Duesbury, Margaret; Dumousseau, Marine; Feuermann, Marc; Hinz, Ursula; Jandrasits, Christine; Jimenez, Rafael C.; Khadake, Jyoti; Mahadevan, Usha; Masson, Patrick; Pedruzzi, Ivo; Pfeiffenberger, Eric; Porras, Pablo; Raghunath, Arathi; Roechert, Bernd; Orchard, Sandra; Hermjakob, Henning
2012-01-01
IntAct is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. Two levels of curation are now available within the database, with both IMEx-level annotation and less detailed MIMIx-compatible entries currently supported. As from September 2011, IntAct contains approximately 275 000 curated binary interaction evidences from over 5000 publications. The IntAct website has been improved to enhance the search process and in particular the graphical display of the results. New data download formats are also available, which will facilitate the inclusion of IntAct's data in the Semantic Web. IntAct is an active contributor to the IMEx consortium (http://www.imexconsortium.org). IntAct source code and data are freely available at http://www.ebi.ac.uk/intact. PMID:22121220
[The database server for the medical bibliography database at Charles University].
Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M
1998-01-01
In the medical community, bibliographic databases are widely accepted as a most important source of information both for theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by SilverPlatter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible via the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and web-based access that requires no specialized software (except the WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed to by consortia whose individual members would not subscribe to them by themselves.
A Case Study in Software Adaptation
2002-01-01
Giuseppe Valetto, Telecom Italia Lab, Turin, Italy. ...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of...of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to
A chronological framework connecting the early Upper Palaeolithic across the Central Asian piedmont.
Fitzsimmons, Kathryn E; Iovita, Radu; Sprafke, Tobias; Glantz, Michelle; Talamo, Sahra; Horton, Katharine; Beeton, Tyler; Alipova, Saya; Bekseitov, Galymzhan; Ospanov, Yerbolat; Deom, Jean-Marc; Sala, Renato; Taimagambetov, Zhaken
2017-12-01
Central Asia has delivered significant paleoanthropological discoveries in the past few years. New genetic data indicate that at least two archaic human species met and interbred with anatomically modern humans as they arrived into northern Central Asia. However, data are limited: known archaeological sites with lithic assemblages generally lack human fossils, and consequently identifying the archaeological signatures of different human groups, and the timing of their occupation, remains elusive. Reliable chronologic data from sites in the region, crucial to our understanding of the timing and duration of interactions between different human species, are rare. Here we present chronologies for two open air Middle to Upper Palaeolithic (UP) sequences from the Tien Shan piedmont in southeast Kazakhstan, Maibulak and Valikhanova, which bridge southern and northern Central Asia. The chronologies, based on both quartz optically stimulated luminescence (OSL) and polymineral post-infrared infrared luminescence (pIR-IRSL) protocols, demonstrate that technological developments at the two sites differ substantially over the ∼47-19 ka time span. Some of the innovations typically associated with the earliest UP in the Altai or other parts of northeast Asia are also present in the Tien Shan piedmont. We caution against making assumptions about the directionality of spread of these technologies until a larger, better defined database of transitional sites in the region is available. Connections between the timing of occupation of regions, living area setting and paleoenvironmental conditions, while providing hypotheses worth exploring, remain inconclusive. We cautiously suggest a trend towards increasing occupation of open air sites across the Central Asian piedmont after ∼40 ka, corresponding to more humid climatic conditions which nevertheless included pulses of dust deposition. 
Human occupation persisted into the Last Glacial Maximum, despite cooler, and possibly drier, conditions. Our results thus provide additional data to substantiate arguments for occupation of Central Asia. Copyright © 2017 Elsevier Ltd. All rights reserved.
MedlinePlus FAQ: How Often MedlinePlus is Updated
... System Pharmacists is updated monthly. Natural Medicines Comprehensive Database Consumer Version is updated quarterly. Medical Encyclopedia: Updated monthly. ...
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. 
In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with a focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
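One common family of proximity measures for link prediction on weighted graphs like the one described above scores a candidate pair by its best path, treating each edge weight as a probability and multiplying weights along the path. A minimal sketch under that assumption (this is not Biomine's actual implementation, and the graph below is illustrative):

```python
import heapq

def best_path_probability(graph, source, target):
    """Max-product path score over a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) with weights in (0, 1].
    Uses a Dijkstra-style search that maximizes the product of edge weights,
    which is valid because multiplying by a weight <= 1 never increases a score.
    """
    best = {source: 1.0}
    heap = [(-1.0, source)]  # max-heap emulated by negating probabilities
    while heap:
        neg_p, node = heapq.heappop(heap)
        p = -neg_p
        if node == target:
            return p
        if p < best.get(node, 0.0):
            continue  # stale heap entry
        for neighbor, w in graph.get(node, []):
            q = p * w
            if q > best.get(neighbor, 0.0):
                best[neighbor] = q
                heapq.heappush(heap, (-q, neighbor))
    return 0.0

# Illustrative cross-reference graph: gene -> protein/annotation -> disease
g = {
    "geneA": [("protein1", 0.9), ("go:term", 0.5)],
    "protein1": [("diseaseX", 0.8)],
    "go:term": [("diseaseX", 0.9)],
}
print(round(best_path_probability(g, "geneA", "diseaseX"), 2))  # 0.72
```

Candidate links can then be ranked by this score, with edge weights tuned per edge type as in the parameter optimization described above.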
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Medin, P; Yordy, J
2014-06-01
Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data to facilitate clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images on this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect VERO and a TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense new registered images with negligible computing resources from the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images via DICOM connection is the easiest way to consolidate images from various sources in a TMS. Technically, the attending does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.
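The fusion step described above, combining registered images into one of several predefined layouts before export as a single image, can be sketched as a grid montage. A minimal illustration with NumPy; the grid shape and pixel values are assumptions for demonstration, not the actual iPump-VERO patterns:

```python
import numpy as np

def montage(images, rows, cols):
    """Tile equally sized 2-D grayscale images into a rows x cols grid.

    Empty cells are left as zeros, so a partial image set still yields a
    rectangular frame suitable for export as one secondary-capture image.
    """
    h, w = images[0].shape
    canvas = np.zeros((rows * h, cols * w), dtype=images[0].dtype)
    for i, img in enumerate(images[: rows * cols]):
        r, c = divmod(i, cols)  # fill row by row, left to right
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
    return canvas

# Illustrative 2 x 2 layout from four registered 256 x 256 planar images
frames = [np.full((256, 256), k, dtype=np.uint8) for k in (10, 20, 30, 40)]
fused = montage(frames, rows=2, cols=2)
print(fused.shape)  # (512, 512)
```

In a real pipeline the fused array would then be wrapped in a DICOM object (with registration-error annotations burned in or overlaid) and sent to the TMS.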
Weiner, Joseph A; Cook, Ralph W; Hashmi, Sohaib; Schallmo, Michael S; Chun, Danielle S; Barth, Kathryn A; Singh, Sameer K; Patel, Alpesh A; Hsu, Wellington K
2017-09-15
A retrospective review of Centers for Medicare and Medicaid Services Database. Utilizing Open Payments data, we aimed to determine the prevalence of industry payments to orthopedic and neurospine surgeons, report the magnitude of those relationships, and help outline the surgeon demographic factors associated with industry relationships. Previous Open Payments data revealed that orthopedic surgeons receive the highest value of industry payments. No study has investigated the financial relationship between spine surgeons and industry using the most recent release of Open Payments data. A database of 5898 spine surgeons in the United States was derived from the Open Payments website. Demographic data were collected, including the type of residency training, years of experience, practice setting, type of medical degree, place of training, gender, and region of practice. Multivariate generalized linear mixed models were utilized to determine the relationship between demographics and industry payments. A total of 5898 spine surgeons met inclusion criteria. About 91.6% of surgeons reported at least one financial relationship with industry. The median total value of payments was $994.07. Surgeons receiving over $1,000,000 from industry during the reporting period represented 6.6% of the database and accounted for 83.5% of the total value exchanged. Orthopedic training (P < 0.001), academic practice setting (P < 0.0001), male gender (P < 0.0001), and West or South region of practice (P < 0.0001) were associated with industry payments. Linear regression analysis revealed a strong inverse relationship between years of experience and number of payments from industry (r = -0.967, P < 0.0001). Financial relationships between spine surgeons and industry are highly prevalent. Surgeon demographics have a significant association with industry-surgeon financial relationships. 
Our reported value of payments did not include ownership or research payments and thus likely underestimates the magnitude of these financial relationships.
Analysis of Cisco Open Network Environment (ONE) OpenFlow Controller Implementation
2014-08-01
Software-Defined Networking (SDN), when fully realized, offers many improvements over the current rigid and...functionalities like handshake, connection setup, switch management, and security. SUBJECT TERMS: OpenFlow, software-defined networking, Cisco ONE, SDN...innovating packet-forwarding technologies. Network device roles are strictly defined with little or no flexibility. In Software-Defined Networks (SDNs),
16 CFR 1211.14 - Instruction manual.
Code of Federal Regulations, 2011 CFR
2011-01-01
... opener. 4. Where possible, install door opener 7 feet or more above the floor. For products requiring an emergency release, mount the emergency release 6 feet above the floor. 5. Do not connect opener to source of... height of 5 feet so small children cannot reach it, and (c) away from all moving parts of the door. 7...
... and into a large blood vessel called the aorta. The aortic valve separates the heart and aorta. The aortic valve opens so blood can flow ... to be able to see your heart and aorta. You may need to be connected to a ...
OrChem - An open source chemistry search engine for Oracle®
2009-01-01
Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
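The similarity search the OrChem abstract describes can be illustrated with a small sketch. This is a hypothetical, simplified model of fingerprint-based similarity screening with a cut-off, not OrChem's actual implementation: fingerprints are represented here as sets of "on" bit positions, whereas OrChem derives real fingerprints via the Chemistry Development Kit and executes the search inside Oracle.

```python
# Hedged sketch: Tanimoto similarity screening over stored fingerprints.
# The database contents and bit positions below are invented for illustration.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto coefficient: |A & B| / |A | B|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def similarity_search(query_fp, database, cutoff=0.7):
    """Return (id, score) pairs at or above the cut-off, best hits first."""
    hits = [(mol_id, tanimoto(query_fp, fp)) for mol_id, fp in database.items()]
    return sorted([h for h in hits if h[1] >= cutoff], key=lambda h: -h[1])

db = {"mol1": {1, 4, 7, 9}, "mol2": {1, 4, 8}, "mol3": {2, 3, 5}}
print(similarity_search({1, 4, 7}, db, cutoff=0.5))
# -> [('mol1', 0.75), ('mol2', 0.5)]
```

Raising the cut-off shrinks the hit list, which is why the abstract notes that response time depends on the chosen similarity cut-off.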
The ChEMBL database as linked open data
2013-01-01
Background Making data available as Linked Data using Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis. Results This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferencable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying. Conclusions We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support. PMID:23657106
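The querying style described for ChEMBL-RDF can be sketched in miniature. The toy triple store below illustrates how RDF statements are matched against a basic graph pattern; real deployments use SPARQL over a proper triple store, and the URIs here are invented placeholders, not actual ChEMBL, CHEMINF, or CiTO identifiers.

```python
# Hedged sketch: triple-pattern matching in the spirit of a SPARQL basic
# graph pattern. All subject/predicate/object names are illustrative.

triples = [
    ("chembl:compound1", "cheminf:has_inchi", "InChI=1S/..."),
    ("chembl:compound1", "cito:citesAsDataSource", "pubmed:123"),
    ("chembl:compound2", "cheminf:has_inchi", "InChI=1S/..."),
]

def match(pattern, store):
    """Return triples matching an (s, p, o) pattern; None acts as a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which compounds carry an InChI?" -- analogous to
# SELECT ?s WHERE { ?s cheminf:has_inchi ?o }
print([s for s, _, _ in match((None, "cheminf:has_inchi", None), triples)])
# -> ['chembl:compound1', 'chembl:compound2']
```

Because every identifier is a URI, another dataset (e.g. Bio2RDF or ChemSpider) can assert triples about the same subjects, which is the integration benefit the paper highlights.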
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representation of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participating through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.
IAS telecommunication infrastructure and value added network services provided by IASNET
NASA Astrophysics Data System (ADS)
Smirnov, Oleg L.; Marchenko, Sergei
The topology of a packet switching network for the Soviet National Centre for Automated Data Exchange with Foreign Computer Networks and Databanks (NCADE), based on a design by the Institute for Automated Systems (IAS), is discussed. NCADE has partners all over the world: it is linked to East European countries via telephone lines, while satellites are used for communication with remote partners, such as Cuba, Mongolia, and Vietnam. Moreover, there is a connection to the Austrian, British, Canadian, Finnish, French, U.S. and other western networks through which users can have access to databases on each network. At the same time, NCADE provides western customers with access to more than 70 Soviet databases. The software and hardware of IASNET use data exchange recommendations agreed with the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). Technical parameters of IASNET are compatible with the majority of foreign networks such as DATAPAK, TRANSPAC, TELENET, and others. By means of IASNET, the NCADE provides connection of Soviet and foreign users to information and computer centers around the world on the basis of the CCITT X.25 and X.75 recommendations. The information resources of IASNET and its value-added network services, such as computer teleconferences, e-mail, information retrieval systems, and intelligent support for access to databanks and databases, are discussed. The topology of the ACADEMNET connected to IASNET over an X.25 gateway is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, Benjamin Paul; Ziminsky, Willy Steve; Johnson, Thomas Edward
A turbomachine includes a compressor, a combustor operatively connected to the compressor, an end cover mounted to the combustor, and an injection nozzle assembly operatively connected to the combustor. The injection nozzle assembly includes a cap member having a first surface that extends to a second surface. The cap member further includes a plurality of openings. A plurality of bundled mini-tube assemblies are detachably mounted in the plurality of openings in the cap member. Each of the plurality of bundled mini-tube assemblies includes a main body section having a first end section and a second end section. A fluid plenum is arranged within the main body section. A plurality of tubes extend between the first and second end sections. Each of the plurality of tubes is fluidly connected to the fluid plenum.
NASA Astrophysics Data System (ADS)
Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.
1989-12-01
The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.
Miles, Alistair; Zhao, Jun; Klyne, Graham; White-Cooper, Helen; Shotton, David
2010-10-01
Integrating heterogeneous data across distributed sources is a major requirement for in silico bioinformatics supporting translational research. For example, genome-scale data on patterns of gene expression in the fruit fly Drosophila melanogaster are widely used in functional genomic studies in many organisms to inform candidate gene selection and validate experimental results. However, current data integration solutions tend to be heavyweight, and require significant initial and ongoing investment of effort. Development of a common Web-based data integration infrastructure (a.k.a. data web), using Semantic Web standards, promises to alleviate these difficulties, but little is known about the feasibility, costs, risks or practical means of migrating to such an infrastructure. We describe the development of OpenFlyData, a proof-of-concept system integrating gene expression data on D. melanogaster, combining Semantic Web standards with lightweight approaches to Web programming based on Web 2.0 design patterns. To support researchers designing and validating functional genomic studies, OpenFlyData includes user-facing search applications providing intuitive access to and comparison of gene expression data from FlyAtlas, the BDGP in situ database, and FlyTED, using data from FlyBase to expand and disambiguate gene names. OpenFlyData's services are also openly accessible, and are available for reuse by other bioinformaticians and application developers. Semi-automated methods and tools were developed to support labour- and knowledge-intensive tasks involved in deploying SPARQL services. These include methods for generating ontologies and relational-to-RDF mappings for relational databases, which we illustrate using the FlyBase Chado database schema; and methods for mapping gene identifiers between databases. The advantages of using Semantic Web standards for biomedical data integration are discussed, as are open issues. 
In particular, although the performance of open source SPARQL implementations is sufficient to query gene expression data directly from user-facing applications such as Web-based data fusions (a.k.a. mashups), we found open SPARQL endpoints to be vulnerable to denial-of-service-type problems, which must be mitigated to ensure reliability of services based on this standard. These results are relevant to data integration activities in translational bioinformatics. The gene expression search applications and SPARQL endpoints developed for OpenFlyData are deployed at http://openflydata.org. FlyUI, a library of JavaScript widgets providing re-usable user-interface components for Drosophila gene expression data, is available at http://flyui.googlecode.com. Software and ontologies to support transformation of data from FlyBase, FlyAtlas, BDGP and FlyTED to RDF are available at http://openflydata.googlecode.com. SPARQLite, an implementation of the SPARQL protocol, is available at http://sparqlite.googlecode.com. All software is provided under the GPL version 3 open source license.
... to Main Content Two Ways to Explore Toxic Chemicals in Your Community TOXMAP classic provides an Advanced ... group of TOXNET databases related to toxicology, hazardous chemicals, environmental health, and toxic releases. Connect with Us ...
ERIC Educational Resources Information Center
Gogia, Laura Park
2016-01-01
Virginia Commonwealth University (VCU) is implementing a large scale exploration of digital pedagogies, including connected learning and open education, in an effort to promote digital fluency and integrative thinking among students. The purpose of this study was to develop a classroom assessment toolkit for faculty who wish to document student…
Data Architecture in an Open Systems Environment.
ERIC Educational Resources Information Center
Bernbom, Gerald; Cromwell, Dennis
1993-01-01
The conceptual basis for structured data architecture, and its integration with open systems technology at Indiana University, are described. Key strategic goals guiding these efforts are discussed: commitment to improved data access; migration to relational database technology; deployment of a high-speed, multiprotocol network; and…
A reference system for animal biometrics: application to the northern leopard frog
Petrovska-Delacretaz, D.; Edwards, A.; Chiasson, J.; Chollet, G.; Pilliod, D.S.
2014-01-01
Reference systems and public databases are available for human biometrics, but to our knowledge nothing is available for animal biometrics. This is surprising because animals are not required to give their agreement to be in a database. This paper proposes a reference system and database for the northern leopard frog (Lithobates pipiens). Both are available for reproducible experiments. Results of both open set and closed set experiments are given.
2011-06-01
From above the cut-out, connect the braided plastic hose from the pressure relief valves near the generators to the plastic T-coupling. The... braided plastic hose that emerges from the rear of the QuadCon is connected to the water reservoir below the cut-out. Add approximately 60 liters of...above the generators. Open both valves on either side of the connection. Connect the steel-braided flexible hose from the evaporator suction line
An extended OpenSim knee model for analysis of strains of connective tissues.
Marieswaran, M; Sikidar, Arnab; Goel, Anu; Joshi, Deepak; Kalyanasundaram, Dinesh
2018-04-17
OpenSim musculoskeletal models provide an accurate simulation environment that eases limitations of in vivo and in vitro studies. In this work, a biomechanical knee model was formulated with femoral articular cartilages and menisci along with 25 connective tissue bundles representing ligaments and capsules. The strain patterns of the connective tissues in the presence of femoral articular cartilage and menisci in the OpenSim knee model were probed in a first-of-its-kind study. The effect of knee flexion (0° to 120°), knee rotation (-40° to 30°) and knee adduction (-15° to 15°) on the anterior cruciate, posterior cruciate, medial collateral, lateral collateral ligaments and other connective tissues was studied by passive simulation. Further, a new parameter for the assessment of strain, namely the differential inter-bundle strain of the connective tissues, was analyzed to provide new insights into injury kinematics. The ACL, PCL, LCL and PL were observed to follow a parabolic strain pattern during flexion, while the MCL showed a linear strain pattern. All connective tissues showed non-symmetric parabolic strain variation during rotation. During adduction, the strain variation was linear for the knee bundles except for the FL, PFL and TL. Strains higher than 0.1 were observed in most of the bundles during lateral rotation, followed by abduction, medial rotation and adduction. In the case of flexion, the highest strains were observed in the aACL and aPCL. Combinations of strains at a flexion of 0° with medial rotation of 30°, or a flexion of 80° with rotation of 30°, are evaluated as rupture-prone kinematics.
Van Berkel, Gary J; Kertesz, Vilmos
2017-02-15
An "Open Access"-like mass spectrometric platform to fully utilize the simplicity of the manual open port sampling interface for rapid characterization of unprocessed samples by liquid introduction atmospheric pressure ionization mass spectrometry has been lacking. The in-house developed integrated software with a simple, small and relatively low-cost mass spectrometry system introduced here fills this void. Software was developed to operate the mass spectrometer, to collect and process mass spectrometric data files, to build a database and to classify samples using such a database. These tasks were accomplished via the vendor-provided software libraries. Sample classification based on spectral comparison utilized the spectral contrast angle method. Using the developed software platform near real-time sample classification is exemplified using a series of commercially available blue ink rollerball pens and vegetable oils. In the case of the inks, full scan positive and negative ion ESI mass spectra were both used for database generation and sample classification. For the vegetable oils, full scan positive ion mode APCI mass spectra were recorded. The overall accuracy of the employed spectral contrast angle statistical model was 95.3% and 98% in case of the inks and oils, respectively, using leave-one-out cross-validation. This work illustrates that an open port sampling interface/mass spectrometer combination, with appropriate instrument control and data processing software, is a viable direct liquid extraction sampling and analysis system suitable for the non-expert user and near real-time sample classification via database matching. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.
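The spectral contrast angle scoring mentioned above can be sketched briefly. This is a minimal, assumed formulation (spectra as intensity vectors over the union of their m/z values, with the angle between vectors as the dissimilarity score); the peak lists are invented examples, not data from the cited study.

```python
import math

# Hedged sketch of spectral contrast angle matching: 0 rad means identical
# spectral shape, pi/2 rad means no shared peaks. All m/z values and
# intensities below are illustrative.

def contrast_angle(spec_a: dict, spec_b: dict) -> float:
    """Angle in radians between two spectra over the union of their m/z keys."""
    mzs = sorted(set(spec_a) | set(spec_b))
    a = [spec_a.get(m, 0.0) for m in mzs]
    b = [spec_b.get(m, 0.0) for m in mzs]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

ink1 = {101.1: 50.0, 205.2: 100.0, 310.3: 20.0}
ink2 = {101.1: 48.0, 205.2: 102.0, 310.3: 22.0}  # near-identical spectrum
ink3 = {150.0: 100.0, 400.5: 80.0}               # unrelated spectrum
assert contrast_angle(ink1, ink2) < contrast_angle(ink1, ink3)
```

A database match then reduces to picking the library spectrum with the smallest angle to the unknown, which is consistent with the classification workflow the abstract describes.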
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jairam, Vikram; Yu, James B., E-mail: james.b.yu@yale.edu
Purpose: To use the Centers for Medicare and Medicaid Services Open Payments database to characterize payments made to radiation oncologists and compare their payment profile with that of medical and surgical oncologists. Methods and Materials: The June 2015 release of the Open Payments database was accessed, containing all payments made to physicians in 2014. The general payments dataset was used for analysis. Data on payments made to medical, surgical, and radiation oncologists was obtained and compared. Within radiation oncology, data regarding payment category, sponsorship, and geographic distribution were identified. Basic statistics including mean, median, range, and sum were calculated by provider and by transaction. Results: Among the 3 oncologic specialties, radiation oncology had the smallest proportion (58%) of compensated physicians and the lowest mean ($1620) and median ($112) payment per provider. Surgical oncology had the highest proportion (84%) of compensated physicians, whereas medical oncology had the highest mean ($6371) and median ($448) payment per physician. Within radiation oncology, nonconsulting services accounted for the most money to physicians ($1,042,556), whereas the majority of the sponsors were medical device companies (52%). Radiation oncologists in the West accepted the most money ($2,041,603) of any US Census region. Conclusions: Radiation oncologists in 2014 received a large number of payments from industry, although less than their medical or surgical counterparts. As the Open Payments database continues to be improved, it remains to be seen whether this information will be used by patients to inform choice of providers or by lawmakers to enact policy regulating physician–industry relationships.
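The per-provider aggregation step described in the methods can be sketched with standard-library tools. The payment records below are invented examples, not Open Payments data, and the field names are assumptions for illustration.

```python
import statistics
from collections import defaultdict

# Hedged sketch: roll individual payment transactions up by provider, then
# summarize the per-provider totals, as the study's basic statistics describe.
payments = [
    ("dr_a", 120.0), ("dr_a", 80.0),
    ("dr_b", 15.0),
    ("dr_c", 500.0), ("dr_c", 40.0), ("dr_c", 60.0),
]

totals = defaultdict(float)
for provider, amount in payments:
    totals[provider] += amount  # sum of all transactions per provider

per_provider = sorted(totals.values())
print(f"mean per provider:   {statistics.mean(per_provider):.2f}")
print(f"median per provider: {statistics.median(per_provider):.2f}")
```

Note how a single high-volume provider pulls the mean well above the median, mirroring the skewed payment distributions these Open Payments analyses report.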
Brown, Ramsay A; Swanson, Larry W
2013-09-01
Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to digitally synthesize systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.
Smith, Evan J; Kuang, Xiangyu; Pandarinath, Rajeev
2017-07-01
Tibial shaft fractures comprise a large portion of operatively treated long bone fractures, and present with the highest rate of open injuries. Intramedullary fixation has become the standard of care for both open and closed injuries. The rates of short-term complications and hospital length of stay for open and closed fractures treated with intramedullary fixation are not fully known. Previous series on tibia fractures were performed at high-volume centers, and data were not generalizable; furthermore, they did not report on length of stay or the impact of preoperative variables on infections, complications and reoperation. We used a large surgical database to compare these outcomes while adjusting for preoperative risk factors. Data were extracted from the ACS-NSQIP database from 2005 to 2014. Cases were identified based on CPT codes for intramedullary fixation and categorized as closed vs open based on ICD-9 code. In addition to demographic and case data, primary analysis examined correlation between open and closed fracture status with infection, complications, reoperation and hospital length of stay. Secondary analysis examined the effect of preoperative variables, including gender, race, age, BMI, and diabetes, on outcomes. There were 272 cases identified. There were no significant demographic differences between open and closed tibia fracture cases. Open fracture status did not increase the rate of infection, 30-day complications, reoperation, or length of stay. The only preoperative factor that correlated with length of stay was age. There was no correlation between BMI, presence of insulin-dependent or non-insulin-dependent diabetes, and any outcome measure. When considering the complication rates for open and closed tibial shaft fractures treated with intramedullary fixation, there is no difference in 30-day complication rate, length of stay, or return to the operating room. 
Our reported postoperative infection rates were comparable to previous series, adding validity to our results. The heterogeneity of the hospitals included in ACS-NSQIP database allow our data to be generalizable. These methods may underrepresent the true occurrence of infection as operatively treated tibia infections may present late, requiring late revision. Despite limitations, the data reflect on the current burden of managing these once devastating injuries. Copyright © 2017 Elsevier Ltd. All rights reserved.
Robinson, William P
2017-12-01
Ruptured abdominal aortic aneurysm is one of the most difficult clinical problems in surgical practice, with extraordinarily high morbidity and mortality. During the past 23 years, the literature has become replete with reports regarding ruptured endovascular aneurysm repair. A variety of study designs and databases have been utilized to compare ruptured endovascular aneurysm repair and open surgical repair for ruptured abdominal aortic aneurysm and studies of various designs from different databases have yielded vastly different conclusions. It therefore remains controversial whether ruptured endovascular aneurysm repair improves outcomes after ruptured abdominal aortic aneurysm in comparison to open surgical repair. The purpose of this article is to review the best available evidence comparing ruptured endovascular aneurysm repair and open surgical repair of ruptured abdominal aortic aneurysm, including single institution and multi-institutional retrospective observational studies, large national population-based studies, large national registries of prospectively collected data, and randomized controlled clinical trials. This article will analyze the study designs and databases utilized with their attendant strengths and weaknesses to understand the sometimes vastly different conclusions the studies have reached. This article will attempt to integrate the data to distill some of the lessons that have been learned regarding ruptured endovascular aneurysm repair and identify ongoing needs in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
Performance evaluation of multi-channel wireless mesh networks with embedded systems.
Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit
2012-01-01
Many commercial wireless mesh network (WMN) products are available in the marketplace with their own proprietary standards, but interoperability among the different vendors is not possible. Open source communities have their own WMN implementations in accordance with the IEEE 802.11s draft standard: the Linux open80211s project and the FreeBSD WMN implementation. While some studies have focused on test beds of WMNs based on the open80211s project, none are based on FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows legacy systems to connect to the WMN independent of the type of platform and distributes the load between the two non-overlapping channels. One channel is used for the backhaul connection and the other is used to connect stations to the wireless mesh network. By using the power-efficient 802.11 technology, this device can also be used as a gateway for a wireless sensor network (WSN).
Using OpenSSH to secure mobile LAN network traffic
NASA Astrophysics Data System (ADS)
Luu, Brian B.; Gopaul, Richard D.
2002-08-01
Mobile Internet Protocol (IP) Local Area Network (LAN) is a technique, developed by the U.S. Army Research Laboratory, which allows a LAN to be IP mobile when attaching to a foreign IP-based network and using this network as a means to retain connectivity to its home network. In this paper, we describe a technique that uses Open Secure Shell (OpenSSH) software to ensure secure, encrypted transmission of a mobile LAN's network traffic. Whenever a mobile LAN, implemented with Mobile IP LAN, moves to a foreign network, its gateway (router) obtains an IP address from the new network. IP tunnels, using IP encapsulation, are then established from the gateway through the foreign network to a home agent on its home network. These tunnels provide a virtual two-way connection to the home network for the mobile LAN as if the LAN were connected directly to its home network. Hence, when IP mobile, a mobile LAN's tunneled network traffic must traverse one or more foreign networks that may not be trusted. This traffic could be subject to eavesdropping, interception, modification, or redirection by malicious nodes in these foreign networks. To protect network traffic passing through the tunnels, OpenSSH is used as a means of encryption because it prevents surveillance, modification, and redirection of mobile LAN traffic passing across foreign networks. Since the software is found in the public domain, is available for most current operating systems, and is commonly used to provide secure network communications, OpenSSH is the software of choice.
Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael
2015-01-01
Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596
47 CFR 54.642 - Competitive bidding requirement and exemptions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES (CONTINUED) UNIVERSAL SERVICE Universal Service Support for Health Care Providers Healthcare... process. (1) All entities participating in the Healthcare Connect Fund must conduct a fair and open... local requirements. (c) Cost-effective. For purposes of the Healthcare Connect Fund, “cost-effective” is...
47 CFR 54.642 - Competitive bidding requirement and exemptions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SERVICES (CONTINUED) UNIVERSAL SERVICE Universal Service Support for Health Care Providers Healthcare... process. (1) All entities participating in the Healthcare Connect Fund must conduct a fair and open... local requirements. (c) Cost-effective. For purposes of the Healthcare Connect Fund, “cost-effective” is...
Rhizomatic Education and Deleuzian Theory
ERIC Educational Resources Information Center
Harris, David
2016-01-01
Personal experience participating in a Community Massive Open Online Course (cMOOC) discussing rhizomatic education ("Rhizo15") helped to ground some of the theoretical issues connected with trying to "apply" Deleuzian work to education. Notions of connectivity and community as a learning resource are compared with the more…
Flow compensating pressure regulator
NASA Technical Reports Server (NTRS)
Baehr, E. F. (Inventor)
1978-01-01
An apparatus for regulating pressure of treatment fluid during ophthalmic procedures is described. Flow sensing and pressure regulating diaphragms are used to modulate a flow control valve. The pressure regulating diaphragm is connected to the flow control valve to urge the valve to an open position due to pressure being applied to the diaphragm by bias means such as a spring. The flow sensing diaphragm is mechanically connected to the flow control valve and urges it to an open position because of the differential pressure on the diaphragm generated by a flow of incoming treatment fluid through an orifice in the diaphragm. A bypass connection with a variable restriction is connected in parallel relationship to the orifice to provide for adjusting the sensitivity of the flow sensing diaphragm. A multiple-lever linkage system is utilized between the center of the second diaphragm and the flow control valve to multiply the force applied to the valve by the other diaphragm and reverse the direction of the force.
OpenFluDB, a database for human and animal influenza virus
Liechti, Robin; Gleizes, Anne; Kuznetsov, Dmitry; Bougueleret, Lydie; Le Mercier, Philippe; Bairoch, Amos; Xenarios, Ioannis
2010-01-01
Although research on influenza lasted for more than 100 years, it is still one of the most prominent diseases causing half a million human deaths every year. With the recent observation of new highly pathogenic H5N1 and H7N7 strains, and the appearance of the influenza pandemic caused by the H1N1 swine-like lineage, a collaborative effort to share observations on the evolution of this virus in both animals and humans has been established. The OpenFlu database (OpenFluDB) is a part of this collaborative effort. It contains genomic and protein sequences, as well as epidemiological data from more than 27 000 isolates. The isolate annotations include virus type, host, geographical location and experimentally tested antiviral resistance. Putative enhanced pathogenicity as well as human adaptation propensity are computed from protein sequences. Each virus isolate can be associated with the laboratories that collected, sequenced and submitted it. Several analysis tools including multiple sequence alignment, phylogenetic analysis and sequence similarity maps enable rapid and efficient mining. The contents of OpenFluDB are supplied by direct user submission, as well as by a daily automatic procedure importing data from public repositories. Additionally, a simple mechanism facilitates the export of OpenFluDB records to GenBank. This resource has been successfully used to rapidly and widely distribute the sequences collected during the recent human swine flu outbreak and also as an exchange platform during the vaccine selection procedure. Database URL: http://openflu.vital-it.ch. PMID:20624713
ACToR Chemical Structure processing using Open Source ChemInformatics Libraries (FutureToxII)
ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from ove...
Open exchange of scientific knowledge and European copyright: The case of biodiversity information
Egloff, Willi; Patterson, David J.; Agosti, Donat; Hagedorn, Gregor
2014-01-01
Abstract Background. The 7th Framework Programme for Research and Technological Development is helping the European Union to prepare for an integrative system for intelligent management of biodiversity knowledge. The infrastructure that is envisaged and that will be further developed within the Programme “Horizon 2020” aims to provide open and free access to taxonomic information to anyone with a requirement for biodiversity data, without the need for individual consent of other persons or institutions. Open and free access to information will foster the re-use and improve the quality of data, will accelerate research, and will promote new types of research. Progress towards the goal of free and open access to content is hampered by numerous technical, economic, sociological, legal, and other factors. The present article addresses barriers to the open exchange of biodiversity knowledge that arise from European laws, in particular European legislation on copyright and database protection rights. We present a legal point of view as to what will be needed to bring distributed information together and facilitate its re-use by data mining, integration into semantic knowledge systems, and similar techniques. We address exceptions and limitations of copyright or database protection within Europe, and we point to the importance of data use agreements. We illustrate how exceptions and limitations have been transformed into national legislations within some European states to create inconsistencies that impede access to biodiversity information. Conclusions. The legal situation within the EU is unsatisfactory because there are inconsistencies among states that hamper the deployment of an open biodiversity knowledge management system. Scientists within the EU who work with copyright protected works or with protected databases have to be aware of regulations that vary from country to country. 
This is a major stumbling block to international collaboration and is an impediment to the open exchange of biodiversity knowledge. Such differences should be removed by unifying exceptions and limitations for research purposes in a binding, Europe-wide regulation. PMID:25009418
Database on veterinary clinical research in homeopathy.
Clausen, Jürgen; Albrecht, Henning
2010-07-01
The aim of the present report is to provide an overview of the first database on clinical research in veterinary homeopathy, the 'Veterinary Clinical Research-Database in Homeopathy' (http://www.carstens-stiftung.de/clinresvet/index.php), which supports detailed searches. The database contains about 200 entries of randomised clinical trials, non-randomised clinical trials, observational studies, drug provings, case reports and case series. Twenty-two clinical fields are covered and eight different groups of species are included. The database is free of charge and open to all interested veterinarians and researchers. The database enables researchers and veterinarians, sceptics and supporters to get a quick overview of the status of veterinary clinical research in homeopathy, facilitates the preparation of systematic reviews, and may stimulate replications or even new studies. 2010 Elsevier Ltd. All rights reserved.
Defining Openness: Updating the Concept of "Open" for a Connected World
ERIC Educational Resources Information Center
Brent, Isabelle; Gibbs, Graham R.; Gruszczynska, Anna Katarzyna
2012-01-01
The field of Social Research Methods is shared not only by the social sciences, but by many other disciplines. There is therefore enormous scope for the creation and re-use of open educational resources (OERs) in this area. However, our work with social scientists on a number of recent projects suggests that barriers exist to OER creation and use…
2016-03-01
IT (information technology), JBOD (just a bunch of disks), JDBC (Java Database Connectivity), JPME (Joint Professional Military Education), JSO (Joint Service Officer), JVM (Java virtual machine), MPP (massively parallel processing), MPTE (Manpower, Personnel, Training, and Education), NAVMAC (Navy...) external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata
Gainotti, Sabina; Torreri, Paola; Wang, Chiuhui Mary; Reihs, Robert; Mueller, Heimo; Heslop, Emma; Roos, Marco; Badowska, Dorota Mazena; de Paulis, Federico; Kodra, Yllka; Carta, Claudio; Martìn, Estrella Lopez; Miller, Vanessa Rangel; Filocamo, Mirella; Mora, Marina; Thompson, Mark; Rubinstein, Yaffa; Posada de la Paz, Manuel; Monaco, Lucia; Lochmüller, Hanns; Taruscio, Domenica
2018-05-01
In rare disease (RD) research, there is a huge need to systematically collect biomaterials, phenotypic, and genomic data in a standardized way and to make them findable, accessible, interoperable and reusable (FAIR). RD-Connect is a six-year global infrastructure project initiated in November 2012 that links genomic data with patient registries, biobanks, and clinical bioinformatics tools to create a central research resource for RDs. Here, we present RD-Connect Registry & Biobank Finder, a tool that helps RD researchers to find RD biobanks and registries and provides information on the availability and accessibility of content in each database. The finder concentrates information that is currently sparse across different repositories (inventories, websites, scientific journals, technical reports, etc.), including aggregated data and metadata from participating databases. Aggregated data provided by the finder, if appropriately checked, can be used by researchers who are trying to estimate the prevalence of a RD, to organize a clinical trial on a RD, or to estimate the volume of patients seen by different clinical centers. The finder is also a portal to other RD-Connect tools, providing a link to the RD-Connect Sample Catalogue, a large inventory of RD biological samples available in participating biobanks for RD research. There are several kinds of users and potential uses for the RD-Connect Registry & Biobank Finder, including researchers collaborating with academia and the industry, dealing with the questions of basic, translational, and/or clinical research. As of November 2017, the finder is populated with aggregated data for 222 registries and 21 biobanks.
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the spreadsheet workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma-separated value (CSV) files. Next, a network connection with the computer system that hosts the telemetry dictionary database is established and the CSV files are ported to that system. This is followed by remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
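The staged flow described above (receive, error-check, convert to CSV, load) can be sketched roughly as follows. The column names and the error check are illustrative assumptions, not the patent's actual telemetry dictionary schema:

```python
import csv
import os
import tempfile

# Assumed sheet schema for illustration only; the real telemetry
# dictionary columns are not specified in the abstract.
REQUIRED_COLUMNS = ("mnemonic", "description", "units")

def check_sheet(rows):
    """Error-check one sheet: the header row must contain the required columns."""
    if not rows:
        return False
    header = [str(cell).strip().lower() for cell in rows[0]]
    return all(col in header for col in REQUIRED_COLUMNS)

def sheet_to_csv(rows, csv_path):
    """Convert one sheet (a list of row tuples) to a CSV file ready for loading."""
    with open(csv_path, "w", newline="") as fh:
        csv.writer(fh).writerows(rows)

# Minimal end-to-end run with a made-up two-row sheet.
rows = [("Mnemonic", "Description", "Units"),
        ("BATT_V", "Battery bus voltage", "V")]
csv_path = os.path.join(tempfile.gettempdir(), "telemetry_sheet.csv")
if check_sheet(rows):          # only error-free sheets advance to conversion
    sheet_to_csv(rows, csv_path)
```

In the patented workflow the resulting CSV files would then be ported over a network connection and loaded remotely; that transfer step is omitted here.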
The Confluence of GIS, Cloud and Open Source, Enabling Big Raster Data Applications
NASA Astrophysics Data System (ADS)
Plesea, L.; Emmart, C. B.; Boller, R. A.; Becker, P.; Baynes, K.
2016-12-01
The rapid evolution of available cloud services is profoundly changing the way applications are being developed and used. Massive object stores, service scalability, and continuous integration are among the most important cloud technology advances that directly influence science applications and GIS. At the same time, more and more scientists are using GIS platforms in their day to day research. Yet with new opportunities there are always some challenges. Given the large amount of data commonly required in science applications, usually large raster datasets, connectivity is one of the biggest problems. Connectivity has two aspects, one being the limited bandwidth and latency of the communication link due to the geographical location of the resources, the other one being the interoperability and intrinsic efficiency of the interface protocol used to connect. NASA and Esri are actively helping each other and collaborating on a few open source projects, aiming to provide some of the core technology components to directly address the GIS enabled data connectivity problems. Last year Esri contributed LERC, a very fast and efficient compression algorithm, to the GDAL/MRF format, which itself is a NASA/Esri collaboration project. The MRF raster format has some cloud aware features that make it possible to build high performance web services on cloud platforms, as some of the Esri projects demonstrate. Currently, another NASA open source project, the high performance OnEarth WMTS server, is being refactored and enhanced to better integrate with MRF, GDAL and Esri software. Taken together, GDAL, MRF and OnEarth form the core of an open source CloudGIS toolkit that is already showing results. Since it is well integrated with GDAL, which is the most common interoperability component of GIS applications, this approach should improve the connectivity and performance of many science and GIS applications in the cloud.
Earth science big data at users' fingertips: the EarthServer Science Gateway Mobile
NASA Astrophysics Data System (ADS)
Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Fargetta, Marco; Pappalardo, Marco; Rundo, Francesco
2014-05-01
The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Program, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" - based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with further space-time coverage data types. On server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. In this contribution we will report on the EarthServer Science Gateway Mobile, an app for both iOS and Android-based devices that allows users to seamlessly access some of the EarthServer applications using SAML-based federated authentication and fine-grained authorisation mechanisms.
InChI in the wild: an assessment of InChIKey searching in Google
2013-01-01
While chemical databases can be queried using the InChI string and InChIKey (IK), the latter was designed for open-web searching. It is becoming increasingly effective for this since more sources enhance crawling of their websites by the Googlebot and consequent IK indexing. Searchers who use Google as an adjunct to database access may be less familiar with the advantages of using the IK as explored in this review. As an example, the IK for atorvastatin retrieves ~200 low-redundancy links from a Google search in 0.3 seconds. These include most major databases, with a very low false-positive rate. Results encompass less familiar but potentially useful sources and can be extended to isomer capture by using just the skeleton layer of the IK. Google Advanced Search can be used to filter large result sets. Image searching with the IK is also effective and complementary to open-web queries. Results can be particularly useful for less-common structures as exemplified by a major metabolite of atorvastatin giving only three hits. Testing also demonstrated document-to-document and document-to-database joins via structure matching. The necessary generation of an IK from chemical names can be accomplished using open tools and resources for patents, papers, abstracts or other text sources. Active global sharing of local IK-linked information can be accomplished via surfacing in open laboratory notebooks, blogs, Twitter, figshare and other routes. While information-rich chemistry (e.g. approved drugs) can exhibit swamping and redundancy effects, the much smaller IK result sets for link-poor structures become a transformative first-pass option. The IK indexing has therefore turned Google into a de-facto open global chemical information hub by merging links to most significant sources, including over 50 million PubChem and ChemSpider records. The simplicity, specificity and speed of matching make it a useful option for biologists or others less familiar with chemical searching. 
However, compared to rigorously maintained major databases, users need to be circumspect about the consistency of Google results and provenance of retrieved links. In addition, community engagement may be necessary to ameliorate possible future degradation of utility. PMID:23399051
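The "skeleton layer" search mentioned above exploits the structure of a standard InChIKey: the first 14-character block hashes only the molecular connectivity layers, so matching on it alone captures stereoisomers and isotopologues sharing one skeleton. A minimal sketch (the key below is a made-up placeholder, not a real compound):

```python
def skeleton_layer(inchikey: str) -> str:
    """Return the first (14-character) block of a standard InChIKey.

    This block hashes the connectivity/skeleton layers only, so an
    open-web search on it alone retrieves stereoisomers and
    isotopologues that share a skeleton ("isomer capture").
    """
    first = inchikey.split("-")[0]
    if len(first) != 14 or not first.isalpha():
        raise ValueError("not a standard InChIKey: %r" % inchikey)
    return first

# Hypothetical key used purely for illustration:
key = "ABCDEFGHIJKLMN-OPQRSTUVWX-N"
web_query = '"%s"' % skeleton_layer(key)   # quoted for exact-match searching
```

Searching the quoted full key gives compound-specific hits; searching the quoted skeleton block widens the net to related isomers, as the review describes.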
Java Web Simulation (JWS); a web based database of kinetic models.
Snoep, J L; Olivier, B G
2002-01-01
Software to make a database of kinetic models accessible via the internet has been developed and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated the addition of new models is straightforward and people are invited to submit their models to be included in the database.
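The kind of time simulation JWS runs server-side can be illustrated with a minimal sketch: a single Michaelis-Menten reaction integrated by forward Euler. The rate law is the standard one; the parameter values are arbitrary placeholders, not a model from the JWS repository:

```python
def simulate_mm(vmax, km, s0, dt=0.01, t_end=10.0):
    """Forward-Euler time course for a single Michaelis-Menten step S -> P.

    Returns the final (substrate, product) concentrations. A real kinetic
    model couples many such rate laws into an ODE system solved with a
    proper stiff integrator.
    """
    s, p, t = s0, 0.0, 0.0
    while t < t_end:
        rate = vmax * s / (km + s)       # v = Vmax * S / (Km + S)
        consumed = min(rate * dt, s)     # never consume more than remains
        s -= consumed
        p += consumed
        t += dt
    return s, p

# Changing an "enzyme parameter" (here Vmax), as a JWS user would do
# through the web interface before rerunning the time simulation:
s_final, p_final = simulate_mm(vmax=1.0, km=0.5, s0=2.0)
```

The point of the database interface is exactly this loop: tweak a kinetic parameter, re-integrate, and inspect the resulting time course or steady state, all in the browser.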
Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges
ERIC Educational Resources Information Center
Penuel, William R.; Means, Barbara
2011-01-01
Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…
High-throughput non-targeted analyses (NTA) rely on chemical reference databases for tentative identification of observed chemical features. Many of these databases and online resources incorporate chemical structure data not in a form that is readily observed by mass spectromet...
2012-11-27
with powerful analysis tools and an informatics approach leveraging best-of-breed NoSQL databases, in order to store, search and retrieve relevant...dictionaries, and JavaScript also has good support. The MongoDB project[15] was chosen as a scalable NoSQL data store for the cheminformatics components
NASA Astrophysics Data System (ADS)
Boeck, Lorenz R.; Katzy, Peter; Hasslberger, Josef; Kink, Andreas; Sattelmayer, Thomas
2016-09-01
An open-access online platform containing data from experiments on deflagration-to-detonation transition conducted at the Institute of Thermodynamics, Technical University of Munich, has been developed and is accessible at http://www.td.mw.tum.de/ddt. The database provides researchers working on explosion dynamics with data for theoretical analyses and for the validation of numerical simulations.
Ukrainian Database and Atlas of Light Curves of Artificial Space Objects
NASA Astrophysics Data System (ADS)
Koshkin, N.; Savanevich, V.; Pohorelov, A.; Shakun, L.; Zhukov, V.; Korobeynikova, E.; Strakhova, S.; Moskalenko, S.; Kashuba, V.; Krasnoshchokov, A.
This paper describes the Ukrainian database of long-term photometric observations of resident space objects (RSO). To support outer space monitoring and space situational awareness (SSA), an open internet resource has been developed for this database. The paper shows examples of using the Atlas of light curves of RSOs for analyzing the state of rotation around the center of mass of several active and non-functioning satellites in orbit.
Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study
2015-01-16
evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and
JBioWH: an open-source Java framework for bioinformatics data integration
Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor
2013-01-01
The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595
Paganoni, Sabrina; Nicholson, Katharine; Chan, James; Shui, Amy; Schoenfeld, David; Sherman, Alexander; Berry, James; Cudkowicz, Merit; Atassi, Nazem
2018-03-01
Urate has been identified as a predictor of amyotrophic lateral sclerosis (ALS) survival in some but not all studies. Here we leverage the recent expansion of the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database to study the association between urate levels and ALS survival. Pooled data of 1,736 ALS participants from the PRO-ACT database were analyzed. Cox proportional hazards regression models were used to evaluate associations between urate levels at trial entry and survival. After adjustment for potential confounders (i.e., creatinine and body mass index), there was an 11% reduction in risk of reaching a survival endpoint during the study with each 1-mg/dL increase in uric acid levels (adjusted hazard ratio 0.89, 95% confidence interval 0.82-0.97, P < 0.01). Our pooled analysis provides further support for urate as a prognostic factor for survival in ALS and confirms the utility of the PRO-ACT database as a powerful resource for ALS epidemiological research. Muscle Nerve 57: 430-434, 2018. © 2017 Wiley Periodicals, Inc.
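The reported effect can be reconstructed arithmetically: a Cox model coefficient β maps to a hazard ratio exp(β), and a 95% confidence interval to exp(β ± 1.96·SE). The β and SE below are back-calculated assumptions chosen only to show the arithmetic; they are not the actual PRO-ACT estimates:

```python
import math

# Illustrative values only: beta is back-calculated so that exp(beta)
# equals the reported HR of 0.89 per 1 mg/dL urate; the SE is assumed.
beta = math.log(0.89)
se = 0.043

hr = math.exp(beta)                       # hazard ratio per 1 mg/dL increase
ci_low = math.exp(beta - 1.96 * se)       # lower 95% confidence bound
ci_high = math.exp(beta + 1.96 * se)      # upper 95% confidence bound
risk_reduction_pct = (1 - hr) * 100       # ~11% lower risk per 1 mg/dL
```

With these assumed inputs the interval works out close to the reported 0.82 to 0.97, which is the sense in which an adjusted HR of 0.89 corresponds to an 11% risk reduction per 1 mg/dL of urate.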
The Magnetic Connectivity of the Sun to the Heliosphere
NASA Technical Reports Server (NTRS)
Antiochos, S. K.
2010-01-01
A prime research focus of the upcoming Solar Probe Plus and Solar Orbiter missions will be to determine how the heliospheric magnetic field and plasma connect to the Sun's corona and photosphere. For much of the heliosphere this connection appears to be well understood. The quasi-steady fast wind emanates from so-called coronal holes, which appear dark in X-rays and are predominantly unipolar at the photosphere. However, the connection to the Sun of the slow, non-steady wind is far from understood and remains a major mystery. We review the existing theories for the sources of the nonsteady wind and demonstrate that they have difficulty accounting for both the observed composition of the wind and its large angular extent. A new theory is described in which this wind originates from the continuous opening and closing of narrow open field corridors in the corona, which gives rise to a web of separatrices (the S-Web) in the heliosphere. Note that in this theory the corona - heliosphere connection is intrinsically dynamic, at least, for this type of wind. We present numerical simulations of the model and describe observational tests. We discuss the implications of our results for the competing slow wind theories and for understanding the corona - heliosphere connection, in general.