Code of Federal Regulations, 2013 CFR
2013-10-01
... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...-compatible format. All databases must be supported with adequate documentation on data attributes, SQL...
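The ODBC mechanism these regulations reference can be illustrated with a short sketch. Assembling the standard key=value connection string is plain string work; the driver name, server, and credentials below are hypothetical, and an actual connection would additionally require an installed ODBC driver plus a package such as pyodbc.

```python
# Sketch: assembling a standard ODBC connection string (hypothetical DSN
# details; a real connection needs an installed ODBC driver and pyodbc).
def odbc_connection_string(driver, server, database, uid=None, pwd=None):
    """Build a key=value; ODBC connection string."""
    parts = [f"DRIVER={{{driver}}}", f"SERVER={server}", f"DATABASE={database}"]
    if uid:
        parts.append(f"UID={uid}")
    if pwd:
        parts.append(f"PWD={pwd}")
    return ";".join(parts)

conn_str = odbc_connection_string("ODBC Driver 17 for SQL Server",
                                  "evidence-server", "case_files", uid="auditor")
print(conn_str)
# With pyodbc installed, this string would be passed to pyodbc.connect(conn_str),
# letting one package read data created by another -- the point of ODBC.
```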
A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.
ERIC Educational Resources Information Center
Breeding, Marshall
2000-01-01
Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
Comprehensive Routing Security Development and Deployment for the Internet
2015-02-01
feature enhancement and bug fixes. • MySQL: MySQL is a widely used and popular open source database package. It was chosen for database support in the...RPSTIR depends on several other open source packages. • MySQL: MySQL is used for the local RPKI database cache. • OpenSSL: OpenSSL is used for...cryptographic libraries for X.509 certificates. • ODBC MySQL Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for
Abstracting data warehousing issues in scientific research.
Tews, Cody; Bracio, Boris R
2002-01-01
This paper presents the design and implementation of the Idaho Biomedical Data Management System (IBDMS). This system preprocesses biomedical data from the IMPROVE (Improving Control of Patient Status in Critical Care) library via an Open Database Connectivity (ODBC) connection. The ODBC connection allows local and remote simulations to access filtered, joined, and sorted data using the Structured Query Language (SQL). The tool is capable of providing an overview of available data in addition to user-defined data subsets for verification of models of the human respiratory system.
2010-09-01
Front-matter and table-of-contents excerpt; recoverable acronyms: LAN (Local Area Network), ODBC (Open Database Connectivity), SCIL (Social-Cultural Content in Language), UMD.
Integrated Substrate and Thin Film Design Methods
1999-02-01
Proper Representation Once the required chemical databases had been converted to the Excel format, VBA macros were written to convert chemical...ternary systems databases were imported from MS Excel to MS Access to implement SQL queries. Further, this database was connected via an ODBC model, to the... VBA macro, corresponding to each of the elements A, B, and C, respectively. The B loop began with the next alphabetical choice of element symbols
Acoustic Metadata Management and Transparent Access to Networked Oceanographic Data Sets
2013-09-30
connectivity (ODBC) compliant data source for which drivers are available (e.g. MySQL, Oracle database, Postgres) can now be imported. Implementation...the possibility of speeding data transmission through compression (implemented) or the potential to use alternative data formats such as JavaScript
2009-07-01
data were recognized as being largely geospatial and thus a GIS was considered the most reasonable way to proceed. The PostgreSQL suite of software also...for the ESRI (2009) geodatabase environment but is applicable for this PostgreSQL-based system. We then introduce and discuss spatial reference...PostgreSQL database using a PostgreSQL ODBC connection. This procedure identified 100 tables with 737 columns. This is after the removal of two
Web client and ODBC access to legacy database information: a low cost approach.
Sanders, N. W.; Mann, N. H.; Spengler, D. M.
1997-01-01
A new method has been developed for the Department of Orthopaedics of Vanderbilt University Medical Center to access departmental clinical data. Previously this data was stored only in the medical center's mainframe DB2 database; it is now additionally stored in a departmental SQL database. Access to this data is available via any ODBC-compliant front-end or a web client. With a small budget and no full-time staff, we were able to give our department on-line access to many years' worth of patient data that was previously inaccessible. PMID:9357735
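The ODBC-compliant front-end access described above follows the standard cursor-based query pattern. In this hedged sketch the table and column names are invented, and Python's built-in sqlite3 module stands in for an ODBC driver; with pyodbc installed, pyodbc.connect would replace sqlite3.connect and the cursor calls would stay the same.

```python
# Sketch of the ODBC-style access pattern (hypothetical schema; sqlite3
# stands in for an ODBC driver -- the DB-API cursor usage is identical).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE visits (patient_id INTEGER, visit_date TEXT, dx TEXT)")
cur.executemany("INSERT INTO visits VALUES (?, ?, ?)",
                [(1, "1996-03-01", "scoliosis"), (2, "1996-04-12", "fracture")])
# Any ODBC-compliant front-end issues parameterized SELECTs like this one.
cur.execute("SELECT dx FROM visits WHERE patient_id = ?", (1,))
print(cur.fetchone()[0])  # -> scoliosis
```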
Biermann, Martin
2014-04-01
Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the public domain CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems such as cancer of the thyroid and the prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
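The anonymize-on-import step with a separately stored key list can be sketched as follows. This is assumed logic, not the authors' code: the record fields, pseudonym format, and function names are all invented for illustration.

```python
# Hedged sketch (not the authors' implementation): each subject name is
# replaced by a stable pseudonym; the name -> pseudonym key list is kept
# apart from the research data, mirroring the restricted key-list table.
import itertools

_counter = itertools.count(1)

def anonymize(records, key_list):
    """Strip names from records, assigning each subject a stable pseudonym."""
    out = []
    for rec in records:
        name = rec["name"]
        if name not in key_list:
            key_list[name] = f"SUBJ-{next(_counter):04d}"  # restricted side
        anon = dict(rec)
        del anon["name"]
        anon["subject_id"] = key_list[name]
        out.append(anon)
    return out

keys = {}  # would live in the access-restricted part of the database
data = anonymize([{"name": "Doe, Jane", "tsh": 2.1},
                  {"name": "Doe, Jane", "tsh": 1.8}], keys)
print(data[0]["subject_id"], data[1]["subject_id"])  # same pseudonym both times
```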
Keshk, Sherif M A S; Ramadan, Ahmed M; Bondock, Samir
2015-08-20
The synthesis of two novel Schiff's bases, cellulose-2,3-bis-[(4-methylene-amino)-benzene-sulfonamide] (5) and cellulose-2,3-bis-[(4-methylene-amino)-N-(thiazol-2-yl)-benzenesulfonamide] (6), via condensation reactions of periodate-oxidized developed bacterial cellulose ODBC (2) with the sulfa drugs sulfanilamide (3) and sulfathiazole (4) is reported. The physicochemical characterization of the condensation products was performed using FTIR, (1)H NMR, and (13)C NMR spectral analyses, X-ray diffraction and DTA. The ODBC exhibited the highest degree of oxidation based on the aldehyde group number percentage (82.9%), which confirms the highest reactivity of developed bacterial cellulose [DBC (1)]. The X-ray diffractograms indicated an increase in the interplanar distance of the cellulose Schiff base (6) compared to ODBC (2) due to sulfathiazole (4) inclusion between ODBC (2) sheets corresponding to the 1 1 0 plane. In addition, the aldehyde content of Schiff base (6) (20.8%) was much lower than that of Schiff base (5) (41.5%). These results confirmed the high affinity of sulfathiazole (4) for the ODBC (2) chain, and showed that the substantial changes in the original properties of ODBC were due to these chemical modifications rather than to sulfanilamide (3). Copyright © 2015 Elsevier Ltd. All rights reserved.
Systematic plan of building Web geographic information system based on ActiveX control
NASA Astrophysics Data System (ADS)
Zhang, Xia; Li, Deren; Zhu, Xinyan; Chen, Nengcheng
2003-03-01
A systematic plan for building a Web Geographic Information System (WebGIS) using ActiveX technology is proposed in this paper. In the proposed plan, ActiveX control technology is adopted for building the client-side application, and two different schemas are introduced to implement communication between the controls in users' browsers and the middle application server. One is based on the Distributed Component Object Model (DCOM), the other on sockets. In the former schema, the middle service application is developed as a DCOM object that communicates with the ActiveX control through Object Remote Procedure Call (ORPC) and accesses data in the GIS Data Server through Open Database Connectivity (ODBC). In the latter, the middle service application is developed in the Java language. It communicates with the ActiveX control through a socket based on TCP/IP and accesses data in the GIS Data Server through Java Database Connectivity (JDBC). The first schema is usually implemented in C/C++ and is difficult to develop and deploy. The second is relatively easy to develop, but its data-transfer performance depends on Web bandwidth. A sample application was developed using the latter schema. Its performance proved better, to some degree, than that of some other WebGIS applications.
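The socket-based schema can be reduced to a toy sketch: a middle-tier server accepts a query string over TCP and returns a result to the client. Everything here is illustrative; the canned reply stands in for the JDBC round-trip to the GIS Data Server that the paper's Java middleware would perform.

```python
# Toy sketch of the socket-based middle tier (illustrative only): the
# client control sends a query over TCP; the server, standing in for the
# Java middleware that would call JDBC, sends back a reply.
import socket
import threading

def middle_tier(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        query = conn.recv(1024).decode()
        # A real middle tier would execute the query via JDBC here.
        conn.sendall(f"rows for: {query}".encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=middle_tier, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"SELECT name FROM layers")
reply = cli.recv(1024).decode()
cli.close()
print(reply)  # -> rows for: SELECT name FROM layers
```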
Enhanced DIII-D Data Management Through a Relational Database
NASA Astrophysics Data System (ADS)
Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.
2000-10-01
A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Documentation on the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
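The cross-shot data mining the abstract mentions amounts to ordinary SQL over a shots table. As a hedged sketch, the schema and quantity names below are invented, and sqlite3 stands in for the ODBC-accessible relational database.

```python
# Illustrative cross-shot query (invented schema; sqlite3 stands in for
# the ODBC-accessible DIII-D relational database).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shots (shot INTEGER, ip_max REAL, date TEXT)")
db.executemany("INSERT INTO shots VALUES (?, ?, ?)",
               [(101001, 1.2, "2000-06-01"),
                (101002, 1.9, "2000-06-01"),
                (101010, 2.1, "2000-06-02")])
# Data mining across multiple shots: all discharges above a threshold.
rows = db.execute(
    "SELECT shot FROM shots WHERE ip_max > 1.5 ORDER BY shot").fetchall()
print([r[0] for r in rows])  # -> [101002, 101010]
```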
A scalable database model for multiparametric time series: a volcano observatory case study
NASA Astrophysics Data System (ADS)
Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea
2014-05-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. The standardization provides the ability to perform operations, such as querying and visualization, over many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a per-user data access policy.
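The "common time scale" standardization can be sketched in a few lines. This is an assumed, simplified version of the idea (nearest-sample alignment onto a shared grid); the series values and grid are invented.

```python
# Hedged sketch of aligning a time series onto a common time scale by
# nearest-sample lookup (simplified stand-in for TSDSystem's
# standardization step; data are invented).
import bisect

def nearest(times, values, t):
    """Value of the sample nearest to time t (times sorted ascending)."""
    i = bisect.bisect_left(times, t)
    if i == 0:
        return values[0]
    if i == len(times):
        return values[-1]
    return values[i] if times[i] - t < t - times[i - 1] else values[i - 1]

grid = [0, 10, 20, 30]                                  # common scale (s)
tremor_t, tremor_v = [0, 9, 21, 29], [1.0, 1.2, 2.0, 1.5]  # raw sampling
aligned = [nearest(tremor_t, tremor_v, t) for t in grid]
print(aligned)  # -> [1.0, 1.2, 2.0, 1.5]
```

Once every series is resampled onto the same grid, cross-series querying and plotting reduce to index-aligned operations.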
A multidisciplinary database for geophysical time series management
NASA Astrophysics Data System (ADS)
Montalto, P.; Aliotta, M.; Cassisi, C.; Prestifilippo, M.; Cannata, A.
2013-12-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. With the term time series we refer to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. The standardization provides the ability to perform operations, such as querying and visualization, over many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a security check of the working status of each running software component through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a per-user data access policy.
What's in a Name? Recent Key Projects of the Committee on Organization and Delivery of Burn Care.
Hickerson, William L; Ryan, Colleen M; Conlon, Kathe M; Harrington, David T; Foster, Kevin; Schwartz, Suzanne; Iyer, Narayan; Jeschke, Marc; Haller, Herbert L; Faucher, Lee D; Arnoldo, Brett D; Jeng, James C
2015-01-01
The Committee for the Organization and Delivery of Burn Care (ODBC) was charged by President Palmieri and the American Burn Association (ABA) Board of Directors with presenting a plenary session at the 45th Meeting of the ABA in Palm Springs, CA, in 2013. The objective of the plenary session was to inform the membership about the wide range of the activities performed by the ODBC committee. The hope was that this session would encourage active involvement within the ABA as a means to improve the delivery of future burn care. Selected current activities were summarized by key leaders of each project and highlighted in the plenary session. The history of the committee, current projects in disaster management, regionalization, best practice guidelines, federal partnerships, product development, new technologies, electronic medical records, and manpower issues in the burn workforce were summarized. The ODBC committee is a keystone committee of the ABA. It is tasked by the ABA leadership with addressing and leading progress in many areas that constitute current challenges in the delivery of burn care.
Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python
NASA Astrophysics Data System (ADS)
Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor
2017-04-01
As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for the adoption of in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends, and spotting anomalies. Although a number of open-source spatial analysis libraries like geopandas and shapely are available today, most of them are restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform, Bluemix. Working in-database reduces network overhead, as the complete dataset need not be replicated onto the user's local system; only a subset of it is fetched into memory at any one time. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4.
The basic architecture of the package consists of three main components: 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API (pypyodbc or jaydebeapi) to establish the database connection via ODBC or JDBC respectively; 2) an instance representing the spatial data stored in the database as a dataframe in Python, called the IdaGeoDataFrame, with a specific geometry attribute which recognises a planar geometry column in dashDB; and 3) Python wrappers for spatial functions like within, distance, area, buffer, and more, which dashDB currently supports, to make the querying process from Python much simpler for the users. The spatial functions translate well-known geopandas-like syntax into SQL queries, utilising the database connection to perform spatial operations in-database, and can operate on single geometries as well as on two different geometries from different IdaGeoDataFrames. The in-database queries strictly follow the standards of the OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations can thereby be accessed dynamically via interactive Jupyter notebooks from any system which supports Python, without any additional dependencies, and can also be combined with other open source libraries such as matplotlib and folium within Jupyter notebooks for visualization purposes. We built a use case to analyse crime hotspots in New York City to validate our implementation and visualized the results as a choropleth map for each borough.
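The "translate a geopandas-like call into SQL" idea can be shown with a toy wrapper. The class, table, and column names below are modeled loosely on the abstract and are not the real ibmdbpy-spatial API; the point is only that the method call becomes an SQL string executed in-database.

```python
# Toy sketch (not the real ibmdbpy-spatial API): a geopandas-like method
# call is translated into an ST_ SQL query that would run in-database.
class ToyGeoFrame:
    def __init__(self, table, geometry):
        self.table, self.geometry = table, geometry

    def buffer(self, distance):
        # Only the SQL travels to the warehouse; no geometries are
        # pulled into local memory.
        return (f"SELECT ST_Buffer({self.geometry}, {distance}) "
                f"FROM {self.table}")

crimes = ToyGeoFrame("NYC_CRIMES", "LOCATION")  # hypothetical table/column
print(crimes.buffer(100.0))
# -> SELECT ST_Buffer(LOCATION, 100.0) FROM NYC_CRIMES
```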
Development of image and information management system for Korean standard brain
NASA Astrophysics Data System (ADS)
Chung, Soon Cheol; Choi, Do Young; Tack, Gye Rae; Sohn, Jin Hun
2004-04-01
The purpose of this study is to establish a reference for image acquisition toward a standard brain for a diverse Korean population, and to develop a database management system that stores and manages the acquired brain images and personal information of subjects. The 3D MP-RAGE (Magnetization Prepared Rapid Gradient Echo) technique, which has an excellent Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR) as well as a reduced image acquisition time, was selected for anatomical image acquisition, and parameter values were obtained for optimal image acquisition. Using these standards, image data of 121 young adults (early twenties) were obtained and stored in the system. The system was designed to obtain, save, and manage not only anatomical image data but also subjects' basic demographic factors, medical history, handedness inventory, state-trait anxiety inventory, A-type personality inventory, self-assessment depression inventory, mini-mental state examination, intelligence test, and results of a personality test via a survey questionnaire. Additionally, the system was designed to support saving, inserting, deleting, searching, and printing image data and personal information of subjects, and to provide access to them with automatic connection setup via ODBC. This newly developed system may contribute substantially to the completion of a standard brain for a diverse Korean population, since it can save and manage their image data and personal information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheu, R; Ghafar, R; Powers, A
Purpose: Demonstrate the effectiveness of in-house software in ensuring EMR workflow efficiency and safety. Methods: A web-based dashboard system (WBDS) was developed to monitor clinical workflow in real time using web technology (WAMP) through ODBC (Open Database Connectivity). Within Mosaiq (Elekta Inc), operational workflow is driven and indicated by Quality Check Lists (QCLs), which are triggered by the automation software IQ Scripts (Elekta Inc); QCLs rely on user completion to propagate. The WBDS retrieves data directly from the Mosaiq SQL database and tracks clinical events in real time. For example, the necessity of a physics initial chart check can be determined by screening all patients on treatment who have received their first fraction and who have not yet had their first chart check. Monitoring similar "real" events with our in-house software creates a safety net, as its propagation does not rely on individual user input. Results: The WBDS monitors the following: patient care workflow (initial consult to end of treatment), daily treatment consistency (scheduling, technique, charges), physics chart checks (initial, EOT, weekly), new starts, missing treatments (>3 warning/>5 fractions, action required), and machine overrides. The WBDS can be launched from any web browser, which allows the end user complete transparency and timely information. Since the creation of the dashboards, workflow interruptions due to accidental deletion or completion of QCLs have been eliminated. Additionally, all physics chart checks were completed in a timely manner. Prompt notifications of treatment record inconsistency and machine overrides have decreased the amount of time between occurrence and execution of corrective action. Conclusion: Our clinical workflow relies primarily on QCLs and IQ Scripts; however, this functionality is not the panacea of safety and efficiency. The WBDS creates a more thorough system of checks to provide a safer and nearly error-free working environment.
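The screening query in the example above ("first fraction delivered, no initial chart check") is a plain anti-join. As a hedged sketch, the schema below is invented and sqlite3 stands in for the Mosaiq SQL database reached via ODBC.

```python
# Toy version of the dashboard's screening logic (invented schema;
# sqlite3 stands in for the Mosaiq SQL database reached via ODBC):
# flag patients treated at least once with no initial chart check.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE treatments (patient TEXT, fraction INTEGER)")
db.execute("CREATE TABLE chart_checks (patient TEXT, kind TEXT)")
db.executemany("INSERT INTO treatments VALUES (?, ?)",
               [("A", 1), ("A", 2), ("B", 1)])
db.execute("INSERT INTO chart_checks VALUES ('A', 'initial')")

pending = db.execute("""
    SELECT DISTINCT t.patient FROM treatments t
    WHERE NOT EXISTS (SELECT 1 FROM chart_checks c
                      WHERE c.patient = t.patient AND c.kind = 'initial')
""").fetchall()
print([p[0] for p in pending])  # -> ['B']
```

Because the query reads treatment records directly, it flags patient B even if no QCL was ever created or was accidentally deleted, which is the safety-net property the abstract describes.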
Resource Public Key Infrastructure Extension
2012-01-01
tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OpenSSL...rsync, OpenSSL, Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server...NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and
Quantifying Uncertainty in Expert Judgment: Initial Results
2013-03-01
lines of source code were added in. C++ = 32%; JavaScript = 29%; XML = 15%; C = 7%; CSS = 7%; Java = 5%; Other = 5%; LOC = 927,266...how much total effort in person-years has been spent on this project? (CMU/SEI-2013-TR-001) ...MySQL, the most popular Open Source SQL...as MySQL, Oracle, PostgreSQL, MS SQL Server, ODBC, or Interbase. Features include email reminders, iCal/vCal import/export, remote subscriptions
CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients
Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H
1999-01-01
Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only few working EPR systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields in which patients are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on "off-the-shelf" software and Internet technology in the field of organ transplantation. Methods The EPR system consists of three main elements: data-storage facilities, a Web server and a user interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e.g. Word documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR system was tested with 50 repeated measurements of the duration of two key functions: 1) display of all lab results for a given day and patient and 2) automatic composition of a letter containing diagnoses, medication, notes and lab results.
For the test, a 233 MHz Pentium II processor with a 10 Mbit/s Ethernet connection (ping time below 10 ms) over 2 hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instructing new users so that they could perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds; the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR system required less than two person-years. Conclusion Implementation of an electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness and affordability using a combination of "off-the-shelf" software products can be feasible if the current state-of-the-art Internet technology is applied.
DICOM image integration into an electronic medical record using thin viewing clients
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.
1998-07-01
Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data, called the Mini Medical Record (MMR), has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider to our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications.
Conclusion -- Radiological DICOM images can be made available medical center wide to physicians quickly using low-cost and ubiquitous, thin client browsing technology and wavelet compression.
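The dynamic-link mechanism described above (report joined to images via the RIS accession number) reduces to templating a URL. The hostname, CGI path, and query parameter below are hypothetical; only the idea of keying the link on the accession number comes from the abstract.

```python
# Sketch of the dynamic HTML link idea: the RIS accession number ties a
# report in MIND to its images on the DICOM webserver. The URL pattern
# here is hypothetical.
def image_link(accession):
    return f'<a href="http://dicomweb/cgi-bin/study?acc={accession}">Images</a>'

print(image_link("RAD1998-004217"))  # accession number is invented
```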
A framework for cross-observatory volcanological database management
NASA Astrophysics Data System (ADS)
Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio
2017-04-01
In the last years, it has been clearly shown that the multiparametric approach is the winning strategy to investigate the complex dynamics of volcanic systems. This involves the use of different sensor networks, each one dedicated to the acquisition of particular data useful for research and monitoring. The increasing interest devoted to the study of volcanological phenomena has led to the creation of different research organizations or observatories, sometimes covering the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which allows the acquisition of data streams from several geophysical and geochemical permanent sensor networks (also represented by different data sources such as ASCII, ODBC, URL etc.), located on the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to the different datasets are managed using a GIS module for sharing and visualization purposes. The standardization provides the ability to perform operations, such as querying and visualization, over many measures, synchronizing them on a common space and time scale. In order to share data between INGV observatories, and also with the Civil Protection, whose activity concerns the same volcanic districts, we designed a "Master View" system that, starting from a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, across instances located in different observatories, through the use of web services technology (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML).
The "Master View" is also responsible for managing the data policy through a "who owns what" system, which associates viewing and download rights for particular spatial or temporal intervals with specific users or groups.
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Mueller, Henning; Marchand-Maillet, Stephane; Pun, Thierry; Squire, David M.; Pecenovic, Zoran; Giess, Christoph; de Vries, Arjen P.
2000-10-01
While in the area of relational databases interoperability is ensured by common communication protocols (e.g. ODBC/JDBC using SQL), content-based image retrieval systems (CBIRSs) and other multimedia retrieval systems lack both a common query language and a common communication protocol. Beyond its obvious short-term convenience, interoperability of systems is crucial for the exchange and analysis of user data. In this paper, we present and describe an extensible XML-based query markup language, called MRML (Multimedia Retrieval Markup Language). MRML is primarily designed to ensure interoperability between different content-based multimedia retrieval systems. Further, MRML allows researchers to preserve their freedom in extending their systems as needed. MRML encapsulates multimedia queries in a way that enables multimedia (MM) query languages, MM content descriptions, MM query engines, and MM user interfaces to grow independently from each other, reaching a maximum of interoperability while ensuring a maximum of freedom for the developer. To benefit from this, only a few simple design principles have to be respected when extending MRML for one's private needs. The design of extensions within the MRML framework is described in detail in the paper. MRML has been implemented and tested for the CBIRS Viper, using the user interface Snake Charmer. Both are part of the GNU project and can be downloaded at our site.
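The abstract above describes XML-encapsulated multimedia queries. The following minimal sketch composes an MRML-style query document in Java; the element and attribute names here are illustrative guesses, not taken from the MRML specification.

```java
public class MrmlSketch {
    // Compose a minimal MRML-style query-by-example document.
    // Element/attribute names are hypothetical, for illustration only.
    static String queryByExample(String sessionId, String imageUrl) {
        return "<mrml session-id=\"" + sessionId + "\">\n"
             + "  <query-step type=\"query-by-example\">\n"
             + "    <user-relevance-element image-location=\"" + imageUrl
             + "\" relevance=\"1\"/>\n"
             + "  </query-step>\n"
             + "</mrml>";
    }

    public static void main(String[] args) {
        System.out.println(queryByExample("s1", "http://example.org/img42.jpg"));
    }
}
```

Because the query is plain XML, any retrieval engine that parses the same schema can answer it, which is the interoperability point the paper makes.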
DOT National Transportation Integrated Search
2014-12-01
The Bureau of Transportation Statistics (BTS) leads in the collection, analysis, and dissemination of transportation data. The Intermodal Passenger Connectivity Database (IPCD) is an ongoing data collection that measures the degree of connectivity ...
2017-09-01
NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary... Approved for public release; distribution is unlimited.
Using OpenOffice as a Portable Interface to JAVA-Based Applications
NASA Astrophysics Data System (ADS)
Comeau, T.; Garrett, B.; Richon, J.; Romelfanger, F.
2004-07-01
STScI previously used Microsoft Word and Microsoft Access, a Sybase ODBC driver, and the Adobe Acrobat PDF writer, along with a substantial amount of Visual Basic, to generate a variety of documents for the internal Space Telescope Grants Administration System (STGMS). While investigating an upgrade to Microsoft Office XP, we began considering alternatives, ultimately selecting an open source product, OpenOffice.org. This reduces the total number of products required to operate the internal STGMS system, simplifies the build system, and opens the possibility of moving to a non-Windows platform. We describe the experience of moving from Microsoft Office to OpenOffice.org, and our other internal uses of OpenOffice.org in our development environment.
Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. 
Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
DOT National Transportation Integrated Search
2016-10-01
Each database record shows the modes that serve the facility and those that are nearby but not connecting, and includes facility location information. The data can be analyzed on a city, state, zip code, metropolitan area, or modal basis. Geographic coor...
Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model
2014-04-01
time savings in the Java coding development process. Appendices A and B describe setup procedures for installing the MySQL database... The following development environment is required: the open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC)-compliant DBMS; the MySQL JDBC driver library that comes as a plug-in with the NetBeans distribution; and the latest Java Development Kit with the latest...
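The excerpt above lists the JDBC/MySQL prerequisites for the connectivity model. A minimal sketch of how such a connection is assembled is shown below; the host, port, and database name are placeholders, not values from the report.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcUrlSketch {
    // Assemble a standard MySQL JDBC URL of the form
    // jdbc:mysql://host:port/database (all values here are placeholders).
    static String mysqlUrl(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        String url = mysqlUrl("localhost", 3306, "ncam_db"); // hypothetical schema name
        System.out.println(url);
        // An actual connection needs a running MySQL server and the
        // MySQL Connector/J driver on the classpath, e.g.:
        // try (Connection c = DriverManager.getConnection(url, user, password)) { ... }
    }
}
```

Any JDBC-compliant DBMS can be swapped in by changing only the URL prefix and driver, which is the portability argument the report makes for Hibernate/JDBC.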
Distributed Episodic Exploratory Planning (DEEP)
2008-12-01
API). For DEEP, Hibernate offered the following advantages: it abstracts SQL by utilizing HQL, so any database with a Java Database Connectivity (JDBC) driver can be used... (Acronyms: HQL, Hibernate SQL; ICCRTS, International Command and Control Research and Technology Symposium; JDB, Java Distributed Blackboard; JDBC, Java Database Connectivity.) ...selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of...
Flexible Decision Support in Device-Saturated Environments
2003-10-01
also output tuples to a remote MySQL or Postgres database. 3.3 GUI: The GUI allows the user to pose queries using SQL and to display query... DatabaseConnection.java handles connections to an external database (such as MySQL or Postgres). Debug.java contains the code for printing out Debug messages... It is also possible to output the results of queries to a MySQL or Postgres database for archival, and the GUI can query those results...
Yang, Haixiu; Shang, Desi; Xu, Yanjun; Zhang, Chunlong; Feng, Li; Sun, Zeguo; Shi, Xinrui; Zhang, Yunpeng; Han, Junwei; Su, Fei; Li, Chunquan; Li, Xia
2017-07-27
Well-characterized connections among diseases, long non-coding RNAs (lncRNAs) and drugs are important for elucidating the key roles of lncRNAs in biological mechanisms across various biological states. In this study, we constructed a database called LNCmap (LncRNA Connectivity Map), available at http://www.bio-bigdata.com/LNCmap/ , to establish the correlations among diseases, physiological processes, and the action of small-molecule therapeutics by attempting to describe all biological states in terms of lncRNA signatures. By reannotating the microarray data from the Connectivity Map database, LNCmap obtained 237 lncRNA signatures of 5916 instances corresponding to 1262 small-molecule drugs. We provide a user-friendly interface for convenient browsing, retrieval and download of the database, including detailed information on drugs and the lncRNAs they affect. Additionally, we developed two enrichment analysis methods for users to identify candidate drugs for a particular disease by inputting the corresponding lncRNA expression profiles or an associated lncRNA list and then comparing them to the lncRNA signatures in our database. Overall, LNCmap could significantly improve our understanding of the biological roles of lncRNAs and provides a unique resource for revealing the connections among drugs, lncRNAs and diseases.
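The abstract above mentions comparing a user-supplied lncRNA list against stored signatures. A naive version of such a comparison is a simple overlap score, sketched below; this is not the actual LNCmap method, and the identifiers are invented for illustration.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LncOverlapSketch {
    // Naive enrichment-style score: the fraction of a query lncRNA list
    // that appears in a drug's signature set. The real LNCmap analyses
    // are more sophisticated; this only illustrates the comparison step.
    static double overlapScore(Set<String> signature, List<String> query) {
        long hits = query.stream().filter(signature::contains).count();
        return (double) hits / query.size();
    }

    public static void main(String[] args) {
        Set<String> drugSignature = new HashSet<>(
                Arrays.asList("LNC001", "LNC017", "LNC042")); // made-up IDs
        List<String> diseaseLncs = Arrays.asList("LNC001", "LNC042", "LNC999");
        System.out.println(overlapScore(drugSignature, diseaseLncs)); // 2 of 3 match
    }
}
```

Ranking drugs by a score like this over all 237 signatures would yield a candidate list, which mirrors the workflow the abstract describes at a high level.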
Weathering Database Technology
ERIC Educational Resources Information Center
Snyder, Robert
2005-01-01
Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…
[Plug-in Based Centralized Control System in Operating Rooms].
Wang, Yunlong
2017-05-30
Centralized equipment control in an operating room (OR) is crucial to an efficient workflow in the OR. To achieve centralized control, an integrative OR needs to focus on designing a control panel that can appropriately incorporate equipment from different manufacturers with various connecting ports and controls. Here we propose to achieve equipment integration using plug-in modules. Each OR will be equipped with a dynamic plug-in control panel containing physically removable connecting ports. Matching outlets will be installed onto the control panels of each piece of equipment used at any given time. This dynamic control panel will be backed by a database containing plug-in modules that can connect any two types of connecting ports common among medical equipment manufacturers. The correct connecting ports will be called using reflection dynamics. This database will be updated regularly to include new connecting ports on the market, making it easy to maintain, update and expand, and keeping it relevant as new equipment is developed. Together, the physical panel and the database will achieve centralized equipment control in the OR that can be easily adapted to any equipment in the OR.
A proposal of fuzzy connective with learning function and its application to fuzzy retrieval system
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Naito, Eiichi; Ozawa, Jun; Wakami, Noboru
1993-01-01
A new fuzzy connective and a network structure constructed from fuzzy connectives are proposed to overcome a drawback of conventional fuzzy retrieval systems. The network represents a retrieval query, and the fuzzy connectives in the network have a learning function that adjusts their parameters using data from a database and user feedback. Fuzzy retrieval systems employing this network are also constructed. Users can retrieve results even with a query whose attributes do not exist in the database schema, and the learning function lets the system return satisfactory results for a variety of ways of thinking.
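A common way to build a fuzzy connective with a tunable parameter is to blend the AND (min) and OR (max) operators; the sketch below illustrates that general idea, not the specific connective proposed in the paper.

```java
public class FuzzyConnectiveSketch {
    // Parameterized fuzzy connective: p = 1 gives pure AND (min),
    // p = 0 gives pure OR (max); intermediate p interpolates between them.
    // In a learning retrieval system, p is what user feedback would adjust.
    static double connective(double a, double b, double p) {
        return p * Math.min(a, b) + (1 - p) * Math.max(a, b);
    }

    public static void main(String[] args) {
        System.out.println(connective(0.3, 0.8, 1.0)); // pure AND -> 0.3
        System.out.println(connective(0.3, 0.8, 0.0)); // pure OR  -> 0.8
        System.out.println(connective(0.3, 0.8, 0.5)); // a compromise in between
    }
}
```

Wiring several such nodes into a network, one per query clause, gives a query whose "strictness" can be fitted to each user's judgments, which is the mechanism the abstract describes.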
Spatial and environmental connectivity analysis in a cholera vaccine trial.
Emch, Michael; Ali, Mohammad; Root, Elisabeth D; Yunus, Mohammad
2009-02-01
This paper develops theory and methods for vaccine trials that utilize spatial and environmental information. Satellite imagery is used to identify whether households are connected to one another via water bodies in a study area in rural Bangladesh. Then relationships between neighborhood-level cholera vaccine coverage and placebo incidence and neighborhood-level spatial variables are measured. The study hypothesis is that unvaccinated people who are environmentally connected to people who have been vaccinated will be at lower risk compared to unvaccinated people who are environmentally connected to people who have not been vaccinated. We use four datasets including: a cholera vaccine trial database, a longitudinal demographic database of the rural population from which the vaccine trial participants were selected, a household-level geographic information system (GIS) database of the same study area, and high resolution Quickbird satellite imagery. An environmental connectivity metric was constructed by integrating the satellite imagery with the vaccine and demographic databases linked with GIS. The results show that there is a relationship between neighborhood rates of cholera vaccination and placebo incidence. Thus, people are indirectly protected when more people in their environmentally connected neighborhood are vaccinated. This result is similar to our previous work that used a simpler Euclidean distance neighborhood to measure neighborhood vaccine coverage [Ali, M., Emch, M., von Seidlein, L., Yunus, M., Sack, D. A., Holmgren, J., et al. (2005). Herd immunity conferred by killed oral cholera vaccines in Bangladesh. Lancet, 366(9479), 44-49]. Our new method of measuring environmental connectivity is more precise since it takes into account the transmission mode of cholera and therefore this study validates our assertion that the oral cholera vaccine provides indirect protection in addition to direct protection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.
2004-05-12
An integrated controlling system and a unified database for high-throughput protein crystallography experiments have been developed. The main stages of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored in a MySQL relational database (except raw X-ray data, which are stored on a central data server). The database contains four mutually linked hierarchical trees describing protein crystals, data collection on protein crystals, and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object-oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using a secure SSL connection with X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.
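The abstract above describes command/acknowledgement messaging between servers and clients over TCP sockets. The self-contained sketch below runs a tiny client/server round trip in one process; the command name is made up and the real system's protocol and security layers are not reproduced.

```java
import java.io.*;
import java.net.*;

public class SocketRoundTripSketch {
    // Minimal TCP round trip: a server thread accepts one connection,
    // reads a command line, and replies "OK <command>". Loosely modeled
    // on command-acknowledgement messaging; no SSL, no real protocol.
    static String roundTrip(String command) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral local port
            Thread handler = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("OK " + in.readLine()); // acknowledge the command
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            handler.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(command);
                String reply = in.readLine();
                handler.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("START_ACQUISITION")); // hypothetical command
    }
}
```

A production system would layer authentication and TLS on top of this exchange, as the abstract's two login schemes suggest.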
Wang, Lei; Alpert, Kathryn I.; Calhoun, Vince D.; Cobia, Derin J.; Keator, David B.; King, Margaret D.; Kogan, Alexandr; Landis, Drew; Tallis, Marcelo; Turner, Matthew D.; Potkin, Steven G.; Turner, Jessica A.; Ambite, Jose Luis
2015-01-01
SchizConnect (www.schizconnect.org) is built to address the issues of multiple data repositories in schizophrenia neuroimaging studies. It includes a level of mediation (translating across data sources), so that the user can place one query, e.g. for diffusion images from male individuals with schizophrenia, and find out from across participating data sources how many datasets there are, as well as downloading the imaging and related data. The current version handles the Data Usage Agreements across different studies, as well as interpreting database-specific terminologies into a common framework. New data repositories can also be mediated to bring immediate access to existing datasets. Compared with centralized, upload data sharing models, SchizConnect is a unique, virtual database with a focus on schizophrenia and related disorders that can mediate live data as information is being updated at each data source. It is our hope that SchizConnect can facilitate testing new hypotheses through aggregated datasets, promoting discovery related to the mechanisms underlying schizophrenic dysfunction. PMID:26142271
PathwayAccess: CellDesigner plugins for pathway databases.
Van Hemert, John L; Dickerson, Julie A
2010-09-15
CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.
The Web-Database Connection Tools for Sharing Information on the Campus Intranet.
ERIC Educational Resources Information Center
Thibeault, Nancy E.
This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
DOT National Transportation Integrated Search
2012-03-01
This project initiated the development of a computerized database of ITS facilities, including conduits, junction boxes, cameras, connections, etc. The current system consists of a database of conduit sections of various lengths. Over the length ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kabat, C; Cline, K; Li, Y
Purpose: With increasing numbers of cancer patients being diagnosed and the complexity of radiotherapy treatments rising, it is paramount that patient plan development continues to flow smoothly within the clinic. In order to maintain a high standard of care and clinical efficiency, the establishment of a tracking system for patient plan development allows healthcare providers to view real-time plan progression and drive clinical workflow. In addition, it provides statistical datasets which can further identify inefficiencies within the clinic. Methods: An application was developed utilizing Microsoft's ODBC SQL database engine to track patient plan status throughout the treatment planning process while also managing key factors pertaining to the patient's treatment. Pertinent information is accessible to staff in many locations, including tracking monitors within dosimetry, the clinic network for both computers and handheld devices, and through email notifications. Plans are initiated with a CT and continually tracked through planning stages until final approval by staff. The patient's status is dynamically updated by the physicians, dosimetrists, and medical physicists based on the stage of the patient's plan. Results: Our application has been running over a six-month period with all patients being processed through the system. Modifications have been made to allow new features to be implemented along with additional tracking parameters. Based on in-house feedback, the application has helped streamline patient plans through the treatment planning process, and data have been accumulating to further improve procedures within the clinic. Conclusion: Over time the clinic will continue to track data with this application. As data accumulate, the clinic will be able to highlight inefficiencies within the workflow and adapt accordingly. We will add new features to help support the treatment planning process in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harwood, R.G.; Billington, C.J.; Buitrago, J.
1996-12-01
A Technical Core Group (TCG) was set up in March 1994 to review the design practice provisions for grouted pile-to-sleeve connections, mechanical connections and repairs as part of the international harmonization process for the new ISO Standard, ISO 13819-2, Petroleum and Natural Gas Industries--Offshore Structures, Part 2: Fixed Steel Structures. This paper provides an overview of the development of the proposed new design provisions for grouted connections, including the gathering and screening of the data, the evolution of the design formulae, and the evaluation of the resistance factor. Detailed comparisons of the new formulae with current design practice (API, HSE and DnV) are also included. In the development of the new provisions the TCG has been given access to the largest database ever assembled on this topic. This database includes all the major testing programs performed over the last 20 years, and recent UK and Norwegian research projects not previously reported. The limitations in the database are discussed and the areas where future research would be of benefit are highlighted.
A Concept for Continuous Monitoring that Reduces Redundancy in Information Assurance Processes
2011-09-01
System.out.println("Driver loaded"); String url = "jdbc:postgresql://localhost/IAcontrols"; String user = "postgres"; String pwd = "postgres"; Connection DB_mobile_conn = DriverManager.getConnection(url, user, pwd); System.out.println("Database Connect ok"); ...
Online with the world - International telecommunications connections (and how to make them)
NASA Technical Reports Server (NTRS)
Jack, Robert F.
1986-01-01
The intricacies involved in connecting to online services in Europe are discussed. A sample connection is presented. It is noted that European services usually carry single files on large databases; thus, good searchers can save some money by avoiding backfiles.
Forrest, Laura; Mitchell, Gillian; Thrupp, Letitia; Petelin, Lara; Richardson, Kate; Mascarenhas, Lyon; Young, Mary-Anne
2018-01-01
Clinical genetics units hold large amounts of information which could be utilised to benefit patients and their families. In Australia, a national research database, the Inherited Cancer Connect (ICCon) database, is being established that comprises clinical genetic data held for all carriers of mutations in cancer predisposition genes. Consumer input was sought to establish the acceptability of the inclusion of clinical genetic data into a research database. A qualitative approach using a modified nominal group technique was used to collect data through consumer forums conducted in three Australian states. Individuals who had previously received care from Familial Cancer Centres were invited to participate. Twenty-four consumers participated in three forums. Participants expressed positive attitudes about the establishment of the ICCon database, which were informed by the perceived benefits of the database including improved health outcomes for individuals with inherited cancer syndromes. Most participants were comfortable to waive consent for their clinical information to be included in the research database in a de-identified format. As major stakeholders, consumers have an integral role in contributing to the development and conduct of the ICCon database. As an initial step in the development of the ICCon database, the forums demonstrated consumers' acceptance of important aspects of the database including waiver of consent.
cMapper: gene-centric connectivity mapper for EBI-RDF platform.
Shoaib, Muhammad; Ansari, Adnan Ahmad; Ahn, Sung-Min
2017-01-15
In this era of biological big data, data integration has become a common task and a challenge for biologists. The Resource Description Framework (RDF) was developed to enable interoperability of heterogeneous datasets. The EBI-RDF platform enables efficient data integration of six independent biological databases using RDF technologies and shared ontologies. However, to take advantage of this platform, biologists need to be familiar with RDF technologies and the SPARQL query language. To overcome this practical limitation of the EBI-RDF platform, we developed cMapper, a web-based tool that enables biologists to search the EBI-RDF databases in a gene-centric manner without a thorough knowledge of RDF and SPARQL. cMapper allows biologists to search data entities in the EBI-RDF platform that are connected to genes or small molecules of interest in multiple biological contexts. The input to cMapper consists of a set of genes or small molecules, and the output is the set of data entities in the six independent EBI-RDF databases connected with the given genes or small molecules in the user's query. cMapper provides output to users in the form of a graph in which nodes represent data entities and edges represent connections between data entities and the input set of genes or small molecules. Furthermore, users can apply filters based on database, taxonomy, organ and pathways in order to focus on a core connectivity graph of interest. Data entities from multiple databases are differentiated based on background colors. cMapper also enables users to investigate shared connections between genes or small molecules of interest. Users can view the output graph in a web browser or download it in either GraphML or JSON format. cMapper is available as a web application with an integrated MySQL database. The web application was developed using Java and deployed on a Tomcat server. We developed the user interface using HTML5, JQuery and the Cytoscape Graph API.
cMapper can be accessed at http://cmapper.ewostech.net. Readers can download the development manual from http://cmapper.ewostech.net/docs/cMapperDocumentation.pdf. Source code is available at https://github.com/muhammadshoaib/cmapper. Contact: smahn@gachon.ac.kr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
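The node-and-edge output described above can be sketched in a few lines of Python. This is an illustrative mock-up, not cMapper's actual code; the function name, field names and example identifiers are all invented, and JSON is just one of the two export formats the abstract mentions.

```python
import json

# Minimal sketch (assumed structure, not cMapper's API): represent a
# gene-centric connectivity result as a node/edge graph and serialize
# it to JSON for download or browser display.
def build_graph(gene, entities):
    """entities: list of (entity_id, database) tuples connected to `gene`."""
    nodes = [{"id": gene, "type": "gene"}]
    edges = []
    for entity_id, database in entities:
        # The source database could drive the node's background color,
        # as cMapper does to differentiate databases.
        nodes.append({"id": entity_id, "type": "entity", "database": database})
        edges.append({"source": gene, "target": entity_id})
    return {"nodes": nodes, "edges": edges}

graph = build_graph("TP53", [("CHEMBL301", "ChEMBL"), ("R-HSA-69488", "Reactome")])
print(json.dumps(graph, indent=2))
```

A real client would feed such a structure to a graph renderer such as the Cytoscape Graph API named in the abstract.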
Atomic and Molecular Databases, VAMDC (Virtual Atomic and Molecular Data Centre)
NASA Astrophysics Data System (ADS)
Dubernet, Marie-Lise; Zwölf, Carlo Maria; Moreau, Nicolas; Awa Ba, Yaya; VAMDC Consortium
2015-08-01
The "Virtual Atomic and Molecular Data Centre Consortium" (VAMDC Consortium, http://www.vamdc.eu) is a consortium bound by a Memorandum of Understanding aiming to ensure the sustainability of the VAMDC e-infrastructure. The current VAMDC e-infrastructure interconnects about 30 atomic and molecular databases, with the number of connected databases increasing every year: some are well-known databases such as CDMS, JPL, HITRAN and VALD, while others have been created since the start of VAMDC. About 90% of our databases are used for astrophysical applications. The data can be queried, retrieved and visualized in a single format from a general portal (http://portal.vamdc.eu), and VAMDC is also developing standalone tools to retrieve and handle the data. VAMDC provides software and support for including databases within the VAMDC e-infrastructure. One current feature of VAMDC is the constrained environment for the description of data, which ensures higher quality in data distribution; a future feature is the link of VAMDC with evaluation/validation groups. The talk will present the VAMDC Consortium and the VAMDC e-infrastructure with its underlying technology, its services, its science use cases and its extension towards communities other than the academic research community.
The International Space Station Comparative Maintenance Analysis (CMAM)
2004-09-01
Database Connectivity: The CMAM ORU database consists of three tables: an ORU master parts list, an ISS Flight table, and an ISS Subsystem table. The ORU master parts list and the ISS Flight table can be updated or modified from the CMAM user interface
Intelligent communication assistant for databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobson, G.; Shaked, V.; Rowley, S.
1983-01-01
An intelligent communication assistant for databases, called FRED (front end for databases) is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second generation of natural language front-ends for databases and intends to solve two critical interface problems existing between end-users and databases: connectivity and communication problems. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.
ERIC Educational Resources Information Center
Williams, Martha E.
1982-01-01
Provides update to 13-year analysis of finances of major database producer noting actions taken to improve finances (decrease expenses, increase efficiency, develop new products, market strategies and services, change pricing scheme, omit print products, increase prices) and consequences of actions (revenue increase, connect hour increase). Five…
Tao of Gateway: Providing Internet Access to Licensed Databases.
ERIC Educational Resources Information Center
McClellan, Gregory A.; Garrison, William V.
1997-01-01
Illustrates an approach for providing networked access to licensed databases over the Internet by positioning the library between patron and vendor. Describes how the gateway systems and database connection servers work and discusses how treatment of security has evolved with the introduction of the World Wide Web. Outlines plans to reimplement…
Massive Scale Cyber Traffic Analysis: A Driver for Graph Database Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Choudhury, S.; Haglin, David J.
2013-06-19
We describe the significance and prominence of network traffic analysis (TA) as a graph- and network-theoretical domain for advancing research in graph database systems. TA involves observing and analyzing the connections between clients, servers, hosts, and actors within IP networks, both at particular times and as extended over time. Towards that end, NetFlow (or more generically, IPFLOW) data are available from routers and servers which summarize coherent groups of IP packets flowing through the network. IPFLOW databases are routinely interrogated statistically and visualized for suspicious patterns. But the ability to cast IPFLOW data as a massive graph and query it interactively, in order to e.g. identify connectivity patterns, is less well advanced, due to a number of factors including scaling and their hybrid nature combining graph connectivity and quantitative attributes. In this paper, we outline requirements and opportunities for graph-structured IPFLOW analytics based on our experience with real IPFLOW databases. Specifically, we describe real use cases from the security domain, cast them as graph patterns, show how to express them in two graph-oriented query languages, SPARQL and Datalog, and use these examples to motivate a new class of "hybrid" graph-relational systems.
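The idea of casting flow records as a graph and querying connectivity patterns can be illustrated with a tiny sketch. The flow data and the fan-in threshold below are invented for illustration; the paper expresses such patterns in SPARQL and Datalog rather than imperative code.

```python
from collections import defaultdict

# Illustrative sketch (not from the paper): cast NetFlow-style
# (source, destination) records as a directed graph and query a simple
# connectivity pattern -- hosts with high fan-in from distinct sources.
flows = [
    ("10.0.0.1", "10.0.0.9"), ("10.0.0.2", "10.0.0.9"),
    ("10.0.0.3", "10.0.0.9"), ("10.0.0.1", "10.0.0.2"),
]

# Build the in-edge adjacency: destination -> set of distinct sources.
in_edges = defaultdict(set)
for src, dst in flows:
    in_edges[dst].add(src)

# Pattern query: hosts contacted by at least 3 distinct sources,
# a toy stand-in for a suspicious-connectivity pattern.
suspicious = {host for host, srcs in in_edges.items() if len(srcs) >= 3}
print(suspicious)  # {'10.0.0.9'}
```

The same pattern reads naturally as a Datalog rule or a SPARQL basic graph pattern with a `GROUP BY`/`HAVING` clause, which is the paper's point about graph-oriented query languages.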
Bluetooth wireless database for scoliosis clinics.
Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K
2003-05-01
A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.
Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido
2017-01-01
Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.
76 FR 42677 - Notice of Intent To Seek Approval To Collect Information
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... and maintains an on-line recipe database, the Recipe Finder, as a popular feature to the SNAP-Ed Connection Web site. The purpose of the Recipe Finder database is to provide SNAP-Ed providers with low-cost... inclusion in the database. SNAP-Ed staff and providers benefit from collecting and posting feedback on...
Varela, Sara; González-Hernández, Javier; Casabella, Eduardo; Barrientos, Rafael
2014-01-01
Citizen science projects store an enormous amount of information about species distribution, diversity and characteristics. Researchers are now beginning to make use of this rich collection of data. However, access to these databases is not always straightforward. Apart from the largest and international projects, citizen science repositories often lack specific Application Programming Interfaces (APIs) to connect them to the scientific environments. Thus, it is necessary to develop simple routines to allow researchers to take advantage of the information collected by smaller citizen science projects, for instance, programming specific packages to connect them to popular scientific environments (like R). Here, we present rAvis, an R-package to connect R-users with Proyecto AVIS (http://proyectoavis.com), a Spanish citizen science project with more than 82,000 bird observation records. We develop several functions to explore the database, to plot the geographic distribution of the species occurrences, and to generate personal queries to the database about species occurrences (number of individuals, distribution, etc.) and birdwatcher observations (number of species recorded by each collaborator, UTMs visited, etc.). This new R-package will allow scientists to access this database and to exploit the information generated by Spanish birdwatchers over the last 40 years.
Neurotree: a collaborative, graphical database of the academic genealogy of neuroscience.
David, Stephen V; Hayden, Benjamin Y
2012-01-01
Neurotree is an online database that documents the lineage of academic mentorship in neuroscience. Modeled on the tree format typically used to describe biological genealogies, the Neurotree web site provides a concise summary of the intellectual history of neuroscience and relationships between individuals in the current neuroscience community. The contents of the database are entirely crowd-sourced: any internet user can add information about researchers and the connections between them. As of July 2012, Neurotree has collected information from 10,000 users about 35,000 researchers and 50,000 mentor relationships, and continues to grow. The present report serves to highlight the utility of Neurotree as a resource for academic research and to summarize some basic analysis of its data. The tree structure of the database permits a variety of graphical analyses. We find that the connectivity and graphical distance between researchers entered into Neurotree early has stabilized and thus appears to be mostly complete. The connectivity of more recent entries continues to mature. A ranking of researcher fecundity based on their mentorship reveals a sustained period of influential researchers from 1850-1950, with the most influential individuals active at the later end of that period. Finally, a clustering analysis reveals that some subfields of neuroscience are reflected in tightly interconnected mentor-trainee groups.
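The fecundity ranking described above, counting academic descendants in the mentor-trainee tree, can be sketched as follows. The researcher names and edges are invented, not Neurotree data, and this is only a schematic of the kind of analysis the paper reports.

```python
from collections import defaultdict

# Hypothetical sketch of a mentorship-tree analysis: rank researchers by
# "fecundity", here taken as the number of academic descendants.
mentorships = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("E", "F")]

children = defaultdict(list)
for mentor, trainee in mentorships:
    children[mentor].append(trainee)

def descendants(person):
    """Count all trainees reachable from `person` in the mentorship graph."""
    count = 0
    stack = list(children[person])
    while stack:
        node = stack.pop()
        count += 1
        stack.extend(children[node])
    return count

ranking = sorted(children, key=descendants, reverse=True)
print([(p, descendants(p)) for p in ranking])
```

Real genealogy data is a directed acyclic graph rather than a strict tree (a trainee can have several mentors), so a production version would deduplicate visited nodes.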
The background, criteria, and usage of the intermodal passenger connectivity database
DOT National Transportation Integrated Search
2009-04-01
Intermodal connections, the links that allow passengers to switch from one mode to another to complete a trip, have been an important element of federal transportation policy since passage of the Intermodal Surface Transportation Efficiency Act of 19...
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models’ structure, incorporates semantic annotations and simulation descriptions and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
Reliability database development for use with an object-oriented fault tree evaluation program
NASA Technical Reports Server (NTRS)
Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann
1989-01-01
A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
Mesh Networking in the Tactical Environment Using White Space Technolog
2015-12-01
Connect network with multiple clients (Table 4.5: Results of White Space simulation)...functionality to devices seeking to allocate unutilized spectrum space. The devices are able to poll the database, via a connection to a web based...and 28 schools, all of whom were provided Internet connectivity by Adaptrum white space devices [16]. The use of white space devices made this
NASA Astrophysics Data System (ADS)
Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.
2015-12-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. 
For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
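The semantic-web storage described above boils down to triples that can be queried by pattern, which is what SPARQL does. The miniature matcher below is a stand-in for a real SPARQL engine, and the dataset and person identifiers are invented for illustration.

```python
# Minimal sketch of triple-pattern matching over semantic data of the
# kind VIVO stores. A real deployment would issue SPARQL against the
# triple store; this emulates the idea with plain tuples.
triples = [
    ("dataset:42", "hasAuthor", "person:smith"),
    ("dataset:42", "hasType", "GPS/GNSS"),
    ("dataset:77", "hasAuthor", "person:jones"),
    ("dataset:77", "hasType", "GPS/GNSS"),
]

def match(pattern):
    """Pattern items are constants or None (a wildcard, like a SPARQL variable)."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Analogue of: SELECT ?s WHERE { ?s hasType "GPS/GNSS" }
datasets = [s for s, _, _ in match((None, "hasType", "GPS/GNSS"))]
print(datasets)  # ['dataset:42', 'dataset:77']
```

Linking two VIVO instances, as EarthCollab proposes, amounts to resolving an identifier in one store against triples held in the other.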
GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database
NASA Astrophysics Data System (ADS)
Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.
2003-09-01
The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed a CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive and perform statistical analysis. The images (18×24 cm², digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular massive lesions (both opacities and spiculated lesions) analysis and microcalcification cluster analysis. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves.
The results of GPCALMA system as "second reader" will also be presented.
Curriculum Connection. Take Technology Outdoors.
ERIC Educational Resources Information Center
Dean, Bruce Robert
1992-01-01
Technology can support hands-on science as elementary students use computers to formulate field guides to nature surrounding their school. Students examine other field guides; open databases for recording information; collect, draw, and identify plants, insects, and animals; enter data into the database; then generate a computerized field guide.…
Methods to Secure Databases Against Vulnerabilities
2015-12-01
for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references...using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL, MongoDB, and Datastax (for Cassandra)...tiers in Python and Java.
Problem: 1. Injection    MySQL        MongoDB      Cassandra
  a. Tautologies         Vulnerable   Vulnerable   Not Vulnerable
  b. Illegal query
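The "tautology" injection tested in the report, and the standard parameterized-query defense, can be demonstrated end to end with SQLite (chosen here only because it is self-contained; the report tests MySQL, MongoDB and Cassandra). The table and values are invented.

```python
import sqlite3

# Sketch of a tautology injection and its defense, using an in-memory DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the OR '1'='1' tautology
# match every row, leaking data for a nonexistent user.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the whole string as one literal value,
# so the quote and OR clause are just characters in a name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()

print(leaked)  # [('s3cret',)] -- exfiltrated
print(safe)    # [] -- no such user
```

MongoDB and Cassandra drivers offer the same parameter-binding discipline; as the table fragment above notes, Cassandra's CQL grammar additionally rejects the tautology form outright.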
High-Order Methods for Computational Physics
1999-03-01
computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85...processor 0 (Fig. 4.19: Connectivity and communications are established by building a voxel database (VDB) of positions). A VDB maps each position to a...studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Goedecker, Stefan; Goedecker Group Team
Based on Lennard-Jones, silicon, sodium-chloride and gold clusters, it was found that the uphill barrier energies of transition states between directly connected minima tend to increase with increasing structural difference between the two minima. Based on this insight, it also turned out that post-processing minima hopping data at negligible computational cost allows one to obtain qualitative topological information on potential energy surfaces that can be stored in so-called qualitative connectivity databases. These qualitative connectivity databases are used for generating fingerprint disconnectivity graphs that give a first qualitative idea of the thermodynamic and kinetic properties of a system of interest. This research was supported by the NCCR MARVEL, funded by the Swiss National Science Foundation. Computer time was provided by the Swiss National Supercomputing Centre (CSCS) under Project ID No. s499.
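A qualitative connectivity database of the kind described reduces to a map from unordered pairs of minima to the barrier connecting them. The sketch below is a guess at a minimal data structure; the minimum labels and barrier energies are invented, and the real databases store fingerprint information beyond this.

```python
# Hedged sketch of a "qualitative connectivity database": record which
# pairs of minima on a potential energy surface are directly connected,
# keyed by an unordered pair, with the uphill barrier energy as value.
connectivity_db = {}

def add_connection(min_a, min_b, barrier):
    # frozenset makes the pair order-independent: (a, b) == (b, a).
    connectivity_db[frozenset((min_a, min_b))] = barrier

def connected(min_a, min_b):
    return frozenset((min_a, min_b)) in connectivity_db

add_connection("m1", "m2", 0.8)  # energies in arbitrary units
add_connection("m2", "m3", 0.3)

print(connected("m1", "m2"), connected("m1", "m3"))  # True False
```

A disconnectivity graph is then built by asking, for each energy threshold, which minima remain connected through barriers below that threshold.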
Application of connectivity mapping in predictive toxicology based on gene-expression similarity.
Smalley, Joshua L; Gant, Timothy W; Zhang, Shu-Dong
2010-02-09
Connectivity mapping is the process of establishing connections between different biological states using gene-expression profiles or signatures. There are a number of applications but in toxicology the most pertinent is for understanding mechanisms of toxicity. In its essence the process involves comparing a query gene signature generated as a result of exposure of a biological system to a chemical to those in a database that have been previously derived. In the ideal situation the query gene-expression signature is characteristic of the event and will be matched to similar events in the database. Key criteria are therefore the means of choosing the signature to be matched and the means by which the match is made. In this article we explore these concepts with examples applicable to toxicology. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
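The matching step described above can be sketched with a toy signed-rank score. This is not the authors' exact algorithm, and the gene names, drug names and ranks below are invented; it only illustrates how a query signature is scored against each profile in a reference database.

```python
# Illustrative connectivity-mapping sketch: score a query gene signature
# against reference expression profiles by summing signed ranks.
def connection_score(signature, profile):
    """signature: {gene: +1 (up) or -1 (down)};
    profile: {gene: signed expression rank in the reference experiment}."""
    return sum(direction * profile.get(gene, 0)
               for gene, direction in signature.items())

query = {"TP53": +1, "MYC": -1}
reference_db = {
    "drug_A": {"TP53": 5, "MYC": -4},   # concordant with the query
    "drug_B": {"TP53": -5, "MYC": 4},   # anti-correlated
}

ranked = sorted(reference_db,
                key=lambda d: connection_score(query, reference_db[d]),
                reverse=True)
print(ranked)  # positive connection first: drug_A (+9) before drug_B (-9)
```

A high positive score suggests the reference treatment reproduces the query state; a strongly negative score suggests it opposes it, which is the property exploited for toxicological mechanism matching.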
ERIC Educational Resources Information Center
Powell, Sarah R.; Stecker, Pamela M.
2014-01-01
This article describes data-based individualization (DBI) as a continuous process connecting assessment and intervention in mathematics for students with disabilities. DBI provides teachers with an evidence-based method for individualizing interventions for students who do not demonstrate adequate response. Assessment data gathered through the use…
ERIC Educational Resources Information Center
Rzepa, Henry S.
2016-01-01
Three new examples are presented illustrating three-dimensional chemical information searches of the Cambridge structure database (CSD) from which basic core concepts in organic and inorganic chemistry emerge. These include connecting the regiochemistry of aromatic electrophilic substitution with the geometrical properties of hydrogen bonding…
DOT National Transportation Integrated Search
2009-01-01
The Database demonstrates the unity and commonality of T-M but presents each one in its separate state. Yet in that process the full panoply of T-M is unfolded, including their shared and connected state. There are thousands of Transportation-Markings...
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Zhang, Shu-Dong; Gant, Timothy W
2009-07-31
Connectivity mapping is a process to recognize novel pharmacological and toxicological properties in small molecules by comparing their gene expression signatures with others in a database. A simple and robust method for connectivity mapping with increased specificity and sensitivity was recently developed, and its utility demonstrated using experimentally derived gene signatures. This paper introduces sscMap (statistically significant connections' map), a Java application designed to undertake connectivity mapping tasks using the recently published method. The software is bundled with a default collection of reference gene-expression profiles based on the publicly available dataset from the Broad Institute Connectivity Map 02, which includes data from over 7000 Affymetrix microarrays, for over 1000 small-molecule compounds, and 6100 treatment instances in 5 human cell lines. In addition, the application allows users to add their custom collections of reference profiles and is applicable to a wide range of other 'omics technologies. The utility of sscMap is twofold. First, it serves to make statistically significant connections between a user-supplied gene signature and the 6100 core reference profiles based on the Broad Institute expanded dataset. Second, it allows users to apply the same improved method to custom-built reference profiles which can be added to the database for future referencing. The software can be freely downloaded from http://purl.oclc.org/NET/sscMap.
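What makes a connection "statistically significant" can be illustrated with a permutation test over a simplified score. This sketch assumes a toy signed-rank scoring scheme and invented gene labels; sscMap's actual statistics differ, and this only conveys the spirit of attaching an empirical p-value to each connection.

```python
import random

# Sketch, under a simplified scoring scheme: estimate the significance of
# a query-profile connection by randomly reassigning the signature's
# directions to genes drawn from the profile.
def score(signature, profile):
    return sum(d * profile.get(g, 0) for g, d in signature.items())

def empirical_p(signature, profile, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = abs(score(signature, profile))
    genes = list(profile)
    hits = 0
    for _ in range(n_perm):
        perm = dict(zip(rng.sample(genes, len(signature)),
                        signature.values()))
        if abs(score(perm, profile)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p == 0

# Toy profile: 21 genes with signed ranks -10..10.
profile = {f"g{i}": v for i, v in enumerate(range(-10, 11))}
sig = {"g20": +1, "g0": -1}  # maximally concordant with the profile
print(empirical_p(sig, profile))
```

Because only the extreme gene pair can reach the observed score here, the permutation p-value comes out small, i.e. the connection would survive a significance cutoff.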
Neural Network Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi
2003-01-01
Full-scale flight-test pilot floor vibration is modeled using neural networks and full-scale wind tunnel test data for low-speed level flight conditions. Neural network connections between the wind tunnel test data and the three flight test pilot vibration components (vertical, lateral, and longitudinal) are studied. Two full-scale UH-60A Black Hawk databases are used. The first database is the NASA/Army UH-60A Airloads Program flight test database. The second database is the UH-60A rotor-only wind tunnel database that was acquired in the NASA Ames 80- by 120-Foot Wind Tunnel with the Large Rotor Test Apparatus (LRTA). Using neural networks, the flight-test pilot vibration is modeled using the wind tunnel rotating system hub accelerations, and separately, using the hub loads. The results show that the wind tunnel rotating system hub accelerations and the operating parameters can represent the flight test pilot vibration. The six components of the wind tunnel N/rev balance-system hub loads and the operating parameters can also represent the flight test pilot vibration. The present neural network connections can significantly increase the value of wind tunnel testing.
HEDD: Human Enhancer Disease Database
Wang, Zhen; Zhang, Quanwei; Zhang, Wen; Lin, Jhih-Rong; Cai, Ying; Mitra, Joydeep
2018-01-01
Abstract Enhancers, as specialized genomic cis-regulatory elements, activate transcription of their target genes and play an important role in pathogenesis of many human complex diseases. Despite recent systematic identification of them in the human genome, currently there is an urgent need for comprehensive annotation databases of human enhancers with a focus on their disease connections. In response, we built the Human Enhancer Disease Database (HEDD) to facilitate studies of enhancers and their potential roles in human complex diseases. HEDD currently provides comprehensive genomic information for ∼2.8 million human enhancers identified by ENCODE, FANTOM5 and RoadMap with disease association scores based on enhancer–gene and gene–disease connections. It also provides Web-based analytical tools to visualize enhancer networks and score enhancers given a set of selected genes in a specific gene network. HEDD is freely accessible at http://zdzlab.einstein.yu.edu/1/hedd.php. PMID:29077884
Large-scale extraction of brain connectivity from the neuroscientific literature
Richardet, Renaud; Chappelier, Jean-Cédric; Telefont, Martin; Hill, Sean
2015-01-01
Motivation: In neuroscience, as in many other scientific domains, the primary form of knowledge dissemination is through published articles. One challenge for modern neuroinformatics is finding methods to make the knowledge from the tremendous backlog of publications accessible for search, analysis and the integration of such data into computational models. A key example of this is metascale brain connectivity, where results are not reported in a normalized repository. Instead, these experimental results are published in natural language, scattered among individual scientific publications. This lack of normalization and centralization hinders the large-scale integration of brain connectivity results. In this article, we present text-mining models to extract and aggregate brain connectivity results from 13.2 million PubMed abstracts and 630 216 full-text publications related to neuroscience. The brain regions are identified with three different named entity recognizers (NERs) and then normalized against two atlases: the Allen Brain Atlas (ABA) and the atlas from the Brain Architecture Management System (BAMS). We then use three different extractors to assess inter-region connectivity. Results: NERs and connectivity extractors are evaluated against a manually annotated corpus. The complete in litero extraction models are also evaluated against in vivo connectivity data from ABA with an estimated precision of 78%. The resulting database contains over 4 million brain region mentions and over 100 000 (ABA) and 122 000 (BAMS) potential brain region connections. This database drastically accelerates connectivity literature review, by providing a centralized repository of connectivity data to neuroscientists. Availability and implementation: The resulting models are publicly available at github.com/BlueBrain/bluima. Contact: renaud.richardet@epfl.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25609795
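The normalization step described above can be sketched with a toy example; the region names, synonym table and atlas identifiers below are invented stand-ins, whereas the real pipeline used three dedicated NERs and the full ABA and BAMS nomenclatures:

```python
# Toy atlas lookup; illustrative only (the actual work normalized mentions
# against the Allen Brain Atlas and BAMS vocabularies).
atlas = {
    "primary somatosensory area": "SSp",
    "hippocampal region": "HIP",
}
synonyms = {"s1": "primary somatosensory area", "hippocampus": "hippocampal region"}

def normalize(mention):
    """Map a free-text brain region mention to an atlas identifier, or None."""
    key = mention.strip().lower()
    key = synonyms.get(key, key)       # resolve known synonyms first
    return atlas.get(key)

# The simplest conceivable connectivity extractor: two normalizable regions
# mentioned in one sentence yield a candidate inter-region connection.
def candidate_connection(sentence_regions):
    ids = [r for r in (normalize(m) for m in sentence_regions) if r]
    return tuple(ids[:2]) if len(ids) >= 2 else None
```

Co-mention extraction like this over-generates, which is why the abstract reports evaluating several extractors against a manually annotated corpus.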
Data mining the EXFOR database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, David A.; Hirdt, John; Herman, Michal
2013-12-13
The EXFOR database contains the largest collection of experimental nuclear reaction data available, as well as this data's bibliographic information and experimental details. We created an undirected graph from the EXFOR datasets, with graph nodes representing single observables and graph links representing the connections of various types between these observables. This graph is an abstract representation of the connections in EXFOR, similar to graphs of social networks, authorship networks, etc. Analysing this abstract graph, we are able to address very specific questions such as: 1) what observables are being used as reference measurements by the experimental community? 2) are these observables given the attention needed by various standards organisations? 3) are there classes of observables that are not connected to these reference measurements? In addressing these questions, we propose several (mostly cross section) observables that should be evaluated and made into reaction reference standards.
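The graph analysis sketched in this abstract can be illustrated with a toy dataset; the observable names and linking rule below are invented stand-ins, not EXFOR's actual entry structure:

```python
from collections import defaultdict

# Toy stand-in for EXFOR: each dataset lists the observables it connects.
# Names are illustrative, not real EXFOR identifiers.
datasets = [
    ("U235(n,f)", "Au197(n,g)"),   # cross section measured against a standard
    ("Fe56(n,el)", "Au197(n,g)"),
    ("Pu239(n,f)", "U235(n,f)"),
    ("C12(n,tot)",),               # an observable with no reference link
]

# Undirected graph: nodes are observables, links join co-measured observables.
adj = defaultdict(set)
for obs in datasets:
    for a in obs:
        adj[a]                      # touch the key so isolated nodes appear
        for b in obs:
            if a != b:
                adj[a].add(b)

# Question 1: high-degree nodes are the de facto reference measurements.
degree = {n: len(nbrs) for n, nbrs in adj.items()}

# Question 3: connected components expose observables unreachable from any standard.
def components(adj):
    seen, comps = set(), []
    for start in list(adj):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node not in comp:
                comp.add(node)
                stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

In this toy graph the gold-standard capture reaction has the highest degree, and the total cross section forms its own disconnected component, mirroring the two questions the authors pose.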
ERMes: Open Source Simplicity for Your E-Resource Management
ERIC Educational Resources Information Center
Doering, William; Chilton, Galadriel
2009-01-01
ERMes, the latest version of an electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate, as the database utilizes functionality not available in previous versions of Microsoft Access. The…
ERIC Educational Resources Information Center
Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth
2010-01-01
This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…
CartograTree: connecting tree genomes, phenotypes and environment.
Vasquez-Gross, Hans A; Yu, John J; Figueroa, Ben; Gessler, Damian D G; Neale, David B; Wegrzyn, Jill L
2013-05-01
Today, researchers spend a tremendous amount of time gathering, formatting, filtering and visualizing data collected from disparate sources. Under the umbrella of forest tree biology, we seek to provide a platform and leverage modern technologies to connect biotic and abiotic data. Our goal is to provide an integrated web-based workspace that connects environmental, genomic and phenotypic data via geo-referenced coordinates. Here, we connect the genomic query web-based workspace, DiversiTree and a novel geographical interface called CartograTree to data housed on the TreeGenes database. To accomplish this goal, we implemented Simple Semantic Web Architecture and Protocol to enable the primary genomics database, TreeGenes, to communicate with semantic web services regardless of platform or back-end technologies. The novelty of CartograTree lies in the interactive workspace that allows for geographical visualization and engagement of high performance computing (HPC) resources. The application provides a unique tool set to facilitate research on the ecology, physiology and evolution of forest tree species. CartograTree can be accessed at: http://dendrome.ucdavis.edu/cartogratree. © 2013 Blackwell Publishing Ltd.
Bailey, Sarah F; Scheible, Melissa K; Williams, Christopher; Silva, Deborah S B S; Hoggan, Marina; Eichman, Christopher; Faith, Seth A
2017-11-01
Next-generation Sequencing (NGS) is a rapidly evolving technology with demonstrated benefits for forensic genetic applications, and the strategies to analyze and manage the massive NGS datasets are currently in development. Here, the computing, data storage, connectivity, and security resources of the Cloud were evaluated as a model for forensic laboratory systems that produce NGS data. A complete front-to-end Cloud system was developed to upload, process, and interpret raw NGS data using a web browser dashboard. The system was extensible, demonstrating analysis capabilities of autosomal and Y-STRs from a variety of NGS instrumentation (Illumina MiniSeq and MiSeq, and Oxford Nanopore MinION). NGS data for STRs were concordant with standard reference materials previously characterized with capillary electrophoresis and Sanger sequencing. The computing power of the Cloud was implemented with on-demand auto-scaling to allow multiple file analysis in tandem. The system was designed to store resulting data in a relational database, amenable to downstream sample interpretations and databasing applications following the most recent guidelines in nomenclature for sequenced alleles. Lastly, a multi-layered Cloud security architecture was tested and showed that industry standards for securing data and computing resources were readily applied to the NGS system without disadvantageous effects for bioinformatic analysis, connectivity or data storage/retrieval. The results of this study demonstrate the feasibility of using Cloud-based systems for secured NGS data analysis, storage, databasing, and multi-user distributed connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Enhancing Geoscience Research Discovery Through the Semantic Web
NASA Astrophysics Data System (ADS)
Rowan, Linda R.; Gross, M. Benjamin; Mayernik, Matthew; Khan, Huda; Boler, Frances; Maull, Keith; Stott, Don; Williams, Steve; Corson-Rikert, Jon; Johns, Erica M.; Daniels, Michael; Krafft, Dean B.; Meertens, Charles
2016-04-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, a U.S. National Science Foundation EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to enhance connectivity across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Much of the VIVO ontology was built for the life sciences, so we have added some components of existing geoscience-based ontologies and a few terms from a local ontology that we created. The UNAVCO VIVO instance, connect.unavco.org, utilizes persistent identifiers whenever possible; for example using ORCIDs for people, publication DOIs, data DOIs and unique NSF grant numbers. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page shows, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. 
In addition to VIVO's default display, the new database can be queried using SPARQL, a query language for semantic data. EarthCollab is extending the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. About half of UNAVCO's membership is international and we hope to connect our data to institutions in other countries with a similar approach. Additional extensions, including enhanced geospatial capabilities, will be developed based on task-centered usability testing.
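The triple-pattern matching that a SPARQL query performs over the VIVO database can be illustrated with a toy in-memory matcher; the triples and prefixes below are invented, and a real deployment would of course issue actual SPARQL against the VIVO endpoint:

```python
# Hypothetical triples mimicking a VIVO-style store (contents illustrative).
triples = [
    ("dataset:42", "rdf:type", "vivo:Dataset"),
    ("dataset:42", "vivo:hasAuthor", "person:7"),
    ("person:7", "rdfs:label", "A. Researcher"),
]

def match(pattern, triples):
    """Match one (s, p, o) pattern; entries starting with '?' are variables."""
    out = []
    for t in triples:
        binding = {}
        for want, got in zip(pattern, t):
            if want.startswith("?"):
                binding[want] = got    # bind the variable to this term
            elif want != got:
                break                  # constant mismatch: reject triple
        else:
            out.append(binding)
    return out

# Analogue of: SELECT ?d WHERE { ?d rdf:type vivo:Dataset }
hits = match(("?d", "rdf:type", "vivo:Dataset"), triples)
```

A full SPARQL engine joins bindings across several such patterns; this sketch only shows the single-pattern step that underlies "the new database can be queried using SPARQL".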
Chang, Meiping; Smith, Sarah; Thorpe, Andrew; Barratt, Michael J; Karim, Farzana
2010-09-16
We have previously used the rat 4 day Complete Freund's Adjuvant (CFA) model to screen compounds with potential to reduce osteoarthritic pain. The aim of this study was to identify genes altered in this model of osteoarthritic pain and use this information to infer analgesic potential of compounds based on their own gene expression profiles using the Connectivity Map approach. Using microarrays, we identified differentially expressed genes in L4 and L5 dorsal root ganglia (DRG) from rats that had received intraplantar CFA for 4 days compared to matched, untreated control animals. Analysis of these data indicated that the two groups were distinguishable by differences in genes important in immune responses, nerve growth and regeneration. This list of differentially expressed genes defined a "CFA signature". We used the Connectivity Map approach to identify pharmacologic agents in the Broad Institute Build02 database that had gene expression signatures that were inversely related ('negatively connected') with our CFA signature. To test the predictive nature of the Connectivity Map methodology, we tested phenoxybenzamine (an alpha adrenergic receptor antagonist) - one of the most negatively connected compounds identified in this database - for analgesic activity in the CFA model. Our results indicate that at 10 mg/kg, phenoxybenzamine demonstrated analgesia comparable to that of Naproxen in this model. Evaluation of phenoxybenzamine-induced analgesia in the current study lends support to the utility of the Connectivity Map approach for identifying compounds with analgesic properties in the CFA model.
Assessing habitat connectivity for ground-dwelling animals in an urban environment.
Braaker, S; Moretti, M; Boesch, R; Ghazoul, J; Obrist, M K; Bontadina, F
To ensure viable species populations in fragmented landscapes, individuals must be able to move between suitable habitat patches. Despite the increased interest in biodiversity assessment in urban environments, the ecological relevance of habitat connectivity in highly fragmented landscapes remains largely unknown. The first step to understanding the role of habitat connectivity in urban ecology is the challenging task of assessing connectivity in the complex patchwork of contrasting habitats that is found in cities. We developed a data-based framework, minimizing the use of subjective assumptions, to assess habitat connectivity that consists of the following sequential steps: (1) identification of habitat preference based on empirical habitat-use data; (2) derivation of habitat resistance surfaces evaluating various transformation functions; (3) modeling of different connectivity maps with electrical circuit theory (Circuitscape), a method considering all possible pathways across the landscape simultaneously; and (4) identification of the best connectivity map with information-theoretic model selection. We applied this analytical framework to assess habitat connectivity for the European hedgehog Erinaceus europaeus, a model species for ground-dwelling animals, in the city of Zurich, Switzerland, using GPS track points from 40 individuals. The best model revealed spatially explicit connectivity “pinch points,” as well as multiple habitat connections. Cross-validation indicated the general validity of the selected connectivity model. The results show that both habitat connectivity and habitat quality affect the movement of urban hedgehogs (relative importance of the two variables was 19.2% and 80.8%, respectively), and are thus both relevant for predicting urban animal movements. Our study demonstrates that even in the complex habitat patchwork of cities, habitat connectivity plays a major role for ground-dwelling animal movement. 
Data-based habitat connectivity maps can thus serve as an important tool for city planners to identify habitat corridors and plan appropriate management and conservation measures for urban animals. The analytical framework we describe to model such connectivity maps is generally applicable to different types of habitat-use data and can be adapted to the movement scale of the focal species. It also allows evaluation of the impact of future landscape changes or management scenarios on habitat connectivity in urban landscapes.
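Step (2) of the framework, deriving resistance surfaces from empirical habitat preference, might be sketched as follows; the raster values, the negative-exponential transformation and the candidate strengths are assumptions for illustration, and step (3) would feed the chosen surface to Circuitscape:

```python
import math

# Toy habitat-preference raster (0 = avoided, 1 = preferred); values illustrative.
preference = [
    [0.9, 0.8, 0.1],
    [0.7, 0.2, 0.1],
    [0.6, 0.5, 0.4],
]

def to_resistance(pref, c):
    """Negative-exponential transformation: high preference -> low resistance.
    Both the functional form and the candidate c values are assumptions,
    standing in for the transformation functions the framework evaluates."""
    return [[math.exp(-c * p) for p in row] for row in pref]

# Step (2): derive candidate resistance surfaces at several transformation
# strengths; step (4) would pick among them via information-theoretic
# model selection against observed movement data.
candidates = {c: to_resistance(preference, c) for c in (1.0, 2.0, 4.0)}
```

Evaluating a family of transformations rather than fixing one by expert opinion is what makes the framework "data-based" in the authors' sense.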
Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous
Cohen, Emily B; Hostetler, Jeffrey A; Royle, J Andrew; Marra, Peter P
2014-01-01
Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. 
We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate that this large dataset is a valuable source of information about the migratory connectivity of the birds of North America. PMID:24967083
Intelligent printing system with AMPAC: boot program for printing machine with AMPAC
NASA Astrophysics Data System (ADS)
Yuasa, Tomonori; Mishina, Hiromichi
2000-12-01
The AMPAC database proposes a simple, unified format for describing single parameters across the whole field of design, production and management. A database described in this format can be used commonly in any field connected by the network production system, since the description accepts any parameter from any field and its definition is field-independent.
2001-10-25
...within one of the programmes sponsored by the European Commission. The system mainly consists of a shared care database in which each community facility, or group of facilities, is supported by a local area network (LAN). Each of these LANs is connected over... functions. The software is layered, so that the client application is not affected by how the servers are implemented or which database system they use.
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
CoCoMac 2.0 and the future of tract-tracing databases
Bakker, Rembrandt; Wachtler, Thomas; Diesmann, Markus
2012-01-01
The CoCoMac database contains the results of several hundred published axonal tract-tracing studies in the macaque monkey brain. The combined results are used for constructing the macaque macro-connectome. Here we discuss the redevelopment of CoCoMac and compare it to six connectome-related projects: two online resources that provide full access to raw tracing data in rodents, a connectome viewer for advanced 3D graphics, a partial but highly detailed rat connectome, a brain data management system that generates custom connectivity matrices, and a software package that covers the complete pipeline from connectivity data to large-scale brain simulations. The second edition of CoCoMac features many enhancements over the original. For example, a search wizard is provided for full access to all tables and their nested dependencies. Connectivity matrices can be computed on demand in a user-selected nomenclature. A new data entry system is available as a preview, and is to become a generic solution for community-driven data entry in manually collated databases. We conclude with the question whether neuronal tracing will remain the gold standard to uncover the wiring of brains, thereby highlighting developments in human connectome construction, tracer substances, polarized light imaging, and serial block-face scanning electron microscopy. PMID:23293600
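Computing a connectivity matrix on demand from collated tracing results can be sketched as follows; the area names, strength values and max-aggregation rule are illustrative assumptions, not CoCoMac's actual schema or nomenclature handling:

```python
# Hypothetical tracing results: (source_area, target_area, strength).
# Repeated tuples stand for independent studies of the same projection.
results = [
    ("V1", "V2", 2),
    ("V1", "V2", 3),
    ("V2", "MT", 1),
    ("V1", "MT", 0),   # a reported absence of labeling
]

# Fix an ordering of areas; in CoCoMac this would follow the
# user-selected nomenclature rather than simple sorting.
areas = sorted({a for r in results for a in r[:2]})
index = {a: i for i, a in enumerate(areas)}

# On-demand matrix: aggregate repeated studies of one projection
# (here by taking the maximum reported strength, an assumed rule).
n = len(areas)
matrix = [[0] * n for _ in range(n)]
for src, tgt, strength in results:
    i, j = index[src], index[tgt]
    matrix[i][j] = max(matrix[i][j], strength)
```

The interesting part in the real system is the nomenclature mapping that precedes this aggregation, since different studies delineate and name areas differently.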
Bezgin, Gleb; Reid, Andrew T; Schubert, Dirk; Kötter, Rolf
2009-01-01
Brain atlases are widely used in experimental neuroscience as tools for locating and targeting specific brain structures. Delineated structures in a given atlas, however, are often difficult to interpret and to interface with database systems that supply additional information using hierarchically organized vocabularies (ontologies). Here we discuss the concept of volume-to-ontology mapping in the context of macroscopical brain structures. We present Java tools with which we have implemented this concept for retrieval of mapping and connectivity data on the macaque brain from the CoCoMac database in connection with an electronic version of "The Rhesus Monkey Brain in Stereotaxic Coordinates" authored by George Paxinos and colleagues. The software, including our manually drawn monkey brain template, can be downloaded freely under the GNU General Public License. It adds value to the printed atlas and has a wider (neuro-)informatics application since it can read appropriately annotated data from delineated sections of other species and organs, and turn them into 3D registered stacks. The tools provide additional features, including visualization and analysis of connectivity data, volume and centre-of-mass estimates, and graphical manipulation of entire structures, which are potentially useful for a range of research and teaching applications.
A World Wide Web (WWW) server database engine for an organelle database, MitoDat.
Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S
1996-03-01
We describe a simple database search engine "dbEngine" which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel, etc.) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
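Capability (i), combining search terms with ANDs or ORs, can be sketched as follows; the records and field names are toy stand-ins, not actual MitoDat entries:

```python
# Toy records standing in for rows prepared from a spreadsheet export.
records = [
    {"gene": "ATP6", "location": "mitochondrion", "disease": "LHON"},
    {"gene": "MT-ND1", "location": "mitochondrion", "disease": "MELAS"},
    {"gene": "TP53", "location": "nucleus", "disease": "cancer"},
]

def search(records, terms, mode="AND"):
    """Return records whose field values contain all (AND) or any (OR) terms,
    matching case-insensitively across every field."""
    def hits(rec, term):
        return any(term.lower() in str(v).lower() for v in rec.values())
    combine = all if mode == "AND" else any
    return [r for r in records if combine(hits(r, t) for t in terms)]
```

A CGI wrapper would read the terms and the AND/OR choice from the query string and render the matching records as hypertext, which is essentially what dbEngine automates.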
NASA Astrophysics Data System (ADS)
Waki, Masaki; Uruno, Shigenori; Ohashi, Hiroyuki; Manabe, Tetsuya; Azuma, Yuji
We propose an optical fiber connection navigation system that uses visible light communication for an integrated distribution module in a central office. The system realizes an accurate database, requires less-skilled work to operate, and eliminates human error. For the connection and removal of optical fiber cords, the system achieves a working-time reduction of up to 88.0% compared with conventional work, without human error, and is economical as regards installation and operation.
Walsh, John; Roberts, Ruth; Morris, Richard
2015-01-01
Patients with diabetes have to take numerous factors and data into account in their daily therapeutic decisions. Connecting the devices they use by feeding the generated data into a database/app is supposed to help patients optimize their glycemic control. As this is not yet established in practice, the various roadblocks have to be discussed to open the road. That large telecommunication companies are now entering this market might be a big help in pushing this forward. Smartphones offer an ideal platform for connectivity solutions. PMID:25614015
[National Database of Genotypes--ethical and legal issues].
Franková, Vera; Tesínová, Jolana; Brdicka, Radim
2011-01-01
The aim of the project National Database of Genotypes is to outline the structure and rules for operating a database that collects information about the genotypes of individual persons. The database should be used entirely for health care. Its purpose is to enable physicians to gain quick and easy access to information about persons requiring specialized care due to their genetic constitution. As further genetic tests are introduced into clinical practice, the database of genotypes can also yield substantial financial savings by avoiding duplication of expensive genetic testing. Ethical questions connected with creating and running such a database concern mainly privacy protection, confidentiality of sensitive personal data, protection of the database from misuse, consent to participation, and public interests. Because genetic data require correct interpretation by a qualified professional (a clinical geneticist), a particular categorization of genetic data within the database is discussed. The proposed database has to be governed in concordance with Czech legislation while resolving these ethical problems.
SQLGEN: a framework for rapid client-server database application development.
Nadkarni, P M; Cheung, K H
1995-12-01
SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
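Generating dynamic SQL from a client-side data dictionary, with log-in access rights disabling inappropriate actions, might look roughly like this; the table metadata, rights structure and function names are invented for illustration, not SQLGEN's actual interfaces:

```python
# Toy data dictionary: server metadata cached on the client (illustrative).
dictionary = {
    "patients": {"columns": ["id", "name", "dob"], "key": "id"},
}
# Per-user access rights captured at log-in time.
rights = {"alice": {"patients": {"select", "update"}}}

def gen_select(table, user):
    """Generate a SELECT from metadata, refusing users without select rights."""
    if "select" not in rights.get(user, {}).get(table, set()):
        raise PermissionError(f"{user} may not select from {table}")
    cols = ", ".join(dictionary[table]["columns"])
    return f"SELECT {cols} FROM {table}"

def gen_update(table, user, changes):
    """Generate a keyed UPDATE with '?' placeholders for parameter binding."""
    if "update" not in rights.get(user, {}).get(table, set()):
        raise PermissionError(f"{user} may not update {table}")
    sets = ", ".join(f"{c} = ?" for c in changes)
    return f"UPDATE {table} SET {sets} WHERE {dictionary[table]['key']} = ?"
```

Checking rights before emitting SQL is what lets such a client "partially self-configure" by disabling actions the server would reject anyway.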
Meiler, Arno; Klinger, Claudia; Kaufmann, Michael
2012-09-08
The COG database is the most popular collection of orthologous proteins from many different completely sequenced microbial genomes. Per definition, a cluster of orthologous groups (COG) within this database exclusively contains proteins that most likely achieve the same cellular function. Recently, the COG database was extended by assigning to every protein both the corresponding amino acid and its encoding nucleotide sequence resulting in the NUCOCOG database. This extended version of the COG database is a valuable resource connecting sequence features with the functionality of the respective proteins. Here we present ANCAC, a web tool and MySQL database for the analysis of amino acid, nucleotide, and codon frequencies in COGs on the basis of freely definable phylogenetic patterns. We demonstrate the usefulness of ANCAC by analyzing amino acid frequencies, codon usage, and GC-content in a species- or function-specific context. With respect to amino acids we, at least in part, confirm the cognate bias hypothesis by using ANCAC's NUCOCOG dataset as the largest one available for that purpose thus far. Using the NUCOCOG datasets, ANCAC connects taxonomic, amino acid, and nucleotide sequence information with the functional classification via COGs and provides a GUI for flexible mining for sequence-bias. Thereby, to our knowledge, it is the only tool for the analysis of sequence composition in the light of physiological roles and phylogenetic context without requirement of substantial programming-skills.
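The codon-usage and GC-content calculations that underlie ANCAC's frequency analyses can be sketched on a toy sequence; a real input would come from the NUCOCOG nucleotide data rather than this hand-made example:

```python
from collections import Counter

# Toy coding sequence (Met-Ala-Ala-Lys-Stop); illustrative, not NUCOCOG data.
seq = "ATGGCTGCTAAATAA"

# Split into codons, dropping any trailing partial codon.
codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

# Codon usage as relative frequencies.
usage = Counter(codons)
total = sum(usage.values())
freq = {c: n / total for c, n in usage.items()}

# GC content over the whole sequence.
gc = sum(seq.count(b) for b in "GC") / len(seq)
```

ANCAC aggregates exactly these kinds of counts per COG and per phylogenetic pattern, which is what allows sequence bias to be examined in a functional context.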
jSPyDB, an open source database-independent tool for data management
NASA Astrophysics Data System (ADS)
Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo
2011-12-01
Nowadays, the number of commercial tools for accessing databases, built on Java or .Net, is increasing. However, many of these applications have several drawbacks: they are usually not open source, they provide interfaces only to a specific kind of database, they are platform-dependent, and they are CPU- and memory-intensive. jSPyDB is a free web-based tool written in Python and JavaScript. It relies on jQuery and Python libraries, and is intended to provide a simple handler to different database technologies inside a local web browser. Such a tool, exploiting fast access libraries such as SQLAlchemy, is easy to install and configure. The design of this tool envisages three layers. The front-end client side in the local web browser communicates with a back-end server. Only the server is able to connect to the different databases for the purposes of performing data definition and manipulation. The server makes the data available to the client, so that the user can display and handle them safely. Moreover, thanks to jQuery libraries, this tool supports export of data in different formats, such as XML and JSON. Finally, by using a set of pre-defined functions, users can create customized views for better data visualization. In this way, we optimize the performance of database servers by avoiding short connections and concurrent sessions. In addition, security is enforced since users are not given the possibility to directly execute any SQL statement.
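The three-layer design, in which only the back-end may touch the database and the browser client receives serialized data rather than issuing SQL itself, can be sketched minimally. This is not jSPyDB's actual code: jSPyDB goes through SQLAlchemy, but the sketch below uses the standard-library sqlite3 module to stay dependency-free, and all names and the sample table are illustrative.

```python
import json
import sqlite3

def fetch_as_json(conn, query):
    """Run a read-only query server-side and return the rows as JSON,
    the kind of payload a browser front end can render directly.
    (The real tool uses SQLAlchemy; sqlite3 keeps this sketch
    self-contained.)"""
    conn.row_factory = sqlite3.Row          # rows become name-addressable
    rows = [dict(r) for r in conn.execute(query)]
    return json.dumps(rows)

# Illustrative back-end database with one table.
conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE runs (id INTEGER, status TEXT);"
    "INSERT INTO runs VALUES (1, 'ok'), (2, 'failed');"
)
payload = fetch_as_json(conn, "SELECT id, status FROM runs ORDER BY id")
```

The client never sees credentials or raw SQL access; it only consumes the JSON the server chooses to expose.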
EuCliD (European Clinical Database): a database comparing different realities.
Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E
2001-01-01
Quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using international standard coding tables as far as possible. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server. All the Domino servers are connected via a wide area network to the headquarters server in Bad Homburg (Germany). Inside each country server only anonymous data related to that particular country are available. The only place where all the anonymous data are available is the headquarters server. Data collection is strongly supported in each country by "key persons" with solid relationships with their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. As of the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients, ...), thus demonstrating the quality of the service.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-29
... requirements for the agency (DHS) to respect individuals' rights to control their information in possession of... Database System of Records is a repository of information held by DHS in connection with its several and.... The DHS/ALL-030 Use of Terrorist Screening Database System of Records contains information that is...
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery over Internet or intranet connections are changing education. Microanatomy education depends basically on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable the presentation of these images in digital form through image databases. However, access to this new technology is restricted to those living in cities and towns with an information technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and also presents an inexpensive way to supply it to a greater number of students through Internet/intranet connections. Using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, aims to help reduce the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
Hirata, Yasutaka; Hirahara, Norimichi; Murakami, Arata; Motomura, Noboru; Miyata, Hiroaki; Takamoto, Shinichi
2018-01-01
We analyzed the mortality and morbidity of congenital heart surgery in Japan using the Japan Cardiovascular Surgery Database (JCVSD). Data regarding congenital heart surgery performed between January 2013 and December 2014 were obtained from JCVSD. The 20 most frequent procedures were selected and the mortality rates and major morbidities were analyzed. The mortality rates of atrial septal defect repair and ventricular septal defect repair were less than 1%, and the mortality rates of tetralogy of Fallot repair, complete atrioventricular septal defect repair, bidirectional Glenn, and total cavopulmonary connection were less than 2%. The mortality rates of the Norwood procedure and total anomalous pulmonary venous connection repair were more than 10%. The rates of unplanned reoperation, pacemaker implantation, chylothorax, deep sternal infection, phrenic nerve injury, and neurological deficit were shown for each procedure. Using JCVSD, the national data for congenital heart surgery, including postoperative complications, were analyzed. Further improvements of the database and feedback for clinical practice are required.
Development of ISO connection-oriented/connectionless gateways
NASA Technical Reports Server (NTRS)
Landweber, Lawrence H.
1991-01-01
The project had two goals: establishment of a gateway between French and U.S. academic networks, and studies of issues related to the development of ISO connection-oriented/connectionless (CO/CL) gateways. The first component involved installation of a 56K bps line between Princeton Univ. and INRIA in France. The end-points of this line were connected by Vitalink link-level bridges. The Princeton end was then connected to the NSFNET via the John Von Neumann Supercomputer Center. The French end was connected to Transpac, the French X.25 public data network, and to the French IP research internet. U.S. users may communicate with users of the French internet by e-mail and may access computational and data resources in France by use of remote login and file transfer. The connection to Transpac enables U.S. users to access the SIMBAD astronomical database outside of Paris. Access to this database from the U.S. can be via TCP/IP or DECNET (via a DECNET to TCP/IP gateway) protocols utilizing a TCP/IP to X.25 gateway developed and operated by INRIA. The second component of the project involved experiments aimed at understanding the issues involved in ISO CO/CL gateways. An experimental gateway was developed at Wisconsin and a preliminary report was prepared. Because of the need to devote most resources to the first component of the project, work in this area did not go beyond development of a prototype gateway.
Türei, Dénes; Papp, Diána; Fazekas, Dávid; Földvári-Nagy, László; Módos, Dezső; Lenti, Katalin; Csermely, Péter; Korcsmáros, Tamás
2013-01-01
NRF2 is the master transcriptional regulator of oxidative and xenobiotic stress responses. NRF2 has important roles in carcinogenesis, inflammation, and neurodegenerative diseases. We developed an online resource, NRF2-ome, to provide an integrated and systems-level database for NRF2. The database contains manually curated and predicted interactions of NRF2 as well as data from external interaction databases. We integrated the NRF2 interactome with NRF2 target genes, NRF2-regulating transcription factors, and miRNAs. We connected NRF2-ome to signaling pathways to allow mapping of upstream NRF2 regulatory components that could directly or indirectly influence NRF2 activity, totaling 35,967 protein-protein and signaling interactions. The user-friendly website allows researchers without a computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, and PSI-MI formats, and as a Cytoscape CYS file. We illustrated the applicability of the website by suggesting a posttranscriptional negative feedback of NRF2 by the MAFG protein and raised the possibility of a connection between NRF2 and the JAK/STAT pathway through STAT1 and STAT3. NRF2-ome can also be used as an evaluation tool to help researchers and drug developers understand the hidden regulatory mechanisms in the complex network of NRF2.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.
[The database server for the medical bibliography database at Charles University].
Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M
1998-01-01
In the medical community, bibliographic databases are widely accepted as a most important source of information both for theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by Silver Platter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible by Internet 24 hours/7 days, hosts now 14 years' MEDLINE and 10 years' EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and a web-based access that requires no specialized software (except the WWW browser) on the client side. The server is now offered to academic community to host further databases, possibly subscribed by consortia whose individual members would not subscribe them by themselves.
A Case Study in Software Adaptation
2002-01-01
A Case Study in Software Adaptation Giuseppe Valetto Telecom Italia Lab Via Reiss Romoli 274 10148, Turin, Italy +39 011 2288788...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of...of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to
MedlinePlus FAQ: How Often MedlinePlus is Updated
... System Pharmacists is updated monthly. Natural Medicines Comprehensive Database Consumer Version is updated quarterly. Medical Encyclopedia: Updated monthly. ... Guidelines Viewers & Players MedlinePlus Connect for ...
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. 
In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available.The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
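Biomine's edge-weighted graph and proximity-based link prediction can be illustrated with a small sketch. This is not Biomine's implementation: the entities, reliability values, and the particular measure (best-path product of edge reliabilities, computed via Dijkstra on negated log-weights) are illustrative assumptions; the paper evaluates several proximity measures.

```python
import heapq
import math

def best_path_proximity(edges, source, target):
    """Proximity of two nodes as the maximum product of edge
    reliabilities over any connecting path. `edges` maps an
    undirected pair (u, v) to a reliability in (0, 1]."""
    graph = {}
    for (u, v), w in edges.items():
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    # Dijkstra on -log(weight): minimal additive cost == maximal product.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return math.exp(-d)
        if d > dist.get(node, math.inf):
            continue                      # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d - math.log(w)
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return 0.0                            # no connecting path

# Hypothetical mini-graph: a two-hop path beats the weak direct edge.
edges = {("geneA", "prot1"): 0.9,
         ("prot1", "disease"): 0.8,
         ("geneA", "disease"): 0.5}
score = best_path_proximity(edges, "geneA", "disease")   # 0.9 * 0.8
```

Ranking candidate node pairs by such a score is the essence of the link prediction task described above.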
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Medin, P; Yordy, J
2014-06-01
Purpose: To present a strategy to integrate the imaging database of a VERO unit with a treatment management system (TMS) to improve clinical workflow and consolidate image data, facilitating clinical quality control and documentation. Methods: A VERO unit is equipped with both kV and MV imaging capabilities for IGRT treatments. It has its own imaging database behind a firewall. It has been a challenge to transfer images on this unit to a TMS in a radiation therapy clinic so that registered images can be reviewed remotely with an approval or rejection record. In this study, a software system, iPump-VERO, was developed to connect the VERO unit and the TMS in our clinic. The patient database folder on the VERO unit was mapped to a read-only folder on a file server outside the VERO firewall. The application runs on a regular computer with read access to the patient database folder. It finds the latest registered images and fuses them in one of six predefined patterns before sending them via a DICOM connection to the TMS. The residual image registration errors are overlaid on the fused image to facilitate image review. Results: The fused images of either registered kV planar images or CBCT images are fully DICOM compatible. A sentinel module is built to sense newly registered images with negligible computing resources drawn from the VERO ExacTrac imaging computer. It takes a few seconds to fuse registered images and send them to the TMS. The whole process is automated without any human intervention. Conclusion: Transferring images over a DICOM connection is the easiest way to consolidate images from various sources in a TMS. Technically, the attending physician does not have to go to the VERO treatment console to review image registration prior to delivery. It is a useful tool for a busy clinic with a VERO unit.
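The sentinel step of such a workflow, polling the read-only mirror of the patient folder and picking up files registered since the last scan, might look roughly like the sketch below. This is an assumption-laden illustration, not iPump-VERO code; the image fusing and the DICOM push to the TMS are omitted, and the file names are invented.

```python
import os
import tempfile

def newest_since(folder, watermark):
    """Return files in `folder` modified after `watermark` (an mtime),
    plus the new high-water mark. A sentinel loop would call this
    periodically, then hand fresh files to the fuse-and-send stage."""
    fresh, latest = [], watermark
    for entry in os.scandir(folder):
        if entry.is_file():
            mtime = entry.stat().st_mtime
            if mtime > watermark:
                fresh.append(entry.path)
                latest = max(latest, mtime)
    return sorted(fresh), latest

# Demo against a throwaway directory standing in for the mapped folder.
demo = tempfile.mkdtemp()
old_path = os.path.join(demo, "old.dcm")
open(old_path, "w").close()
os.utime(old_path, (100.0, 100.0))      # pretend it was seen long ago
new_path = os.path.join(demo, "new.dcm")
open(new_path, "w").close()             # freshly registered image
fresh, mark = newest_since(demo, watermark=1000.0)
```

Polling a read-only mirror, as described in the abstract, keeps the watcher from ever writing into the treatment unit's own database.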
IAS telecommunication infrastructure and value added network services provided by IASNET
NASA Astrophysics Data System (ADS)
Smirnov, Oleg L.; Marchenko, Sergei
The topology of a packet switching network for the Soviet National Centre for Automated Data Exchange with Foreign Computer Networks and Databanks (NCADE), based on a design by the Institute for Automated Systems (IAS), is discussed. NCADE has partners all over the world: it is linked to East European countries via telephone lines, while satellites are used for communication with remote partners such as Cuba, Mongolia, and Vietnam. Moreover, there are connections to the Austrian, British, Canadian, Finnish, French, U.S., and other western networks through which users can access databases on each network. At the same time, NCADE provides western customers with access to more than 70 Soviet databases. The software and hardware of IASNET follow data exchange recommendations agreed with the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). Technical parameters of IASNET are compatible with the majority of foreign networks, such as DATAPAK, TRANSPAC, TELENET, and others. By means of IASNET, the NCADE provides connection of Soviet and foreign users to information and computer centers around the world on the basis of the CCITT X.25 and X.75 recommendations. The information resources of IASNET and its value-added network services, such as computer teleconferences, e-mail, information retrieval systems, and intelligent support for access to databanks and databases, are discussed. The topology of the ACADEMNET, connected to IASNET over an X.25 gateway, is also discussed.
NASA Astrophysics Data System (ADS)
Loveless, R.; Erhard, P.; Ficenec, J.; Gather, K.; Heath, G.; Iacovacci, M.; Kehres, J.; Mobayyen, M.; Notz, D.; Orr, R.; Orr, R.; Sephton, A.; Stroili, R.; Tokushuku, K.; Vogel, W.; Whitmore, J.; Wiggers, L.
1989-12-01
The ZEUS collaboration is building a system to monitor, control and document the hardware of the ZEUS detector. This system is based on a network of VAX computers and microprocessors connected via ethernet. The database for the hardware values will be ADAMO tables; the ethernet connection will be DECNET, TCP/IP, or RPC. Most of the documentation will also be kept in ADAMO tables for easy access by users.
... to Main Content Two Ways to Explore Toxic Chemicals in Your Community TOXMAP classic provides an Advanced ... group of TOXNET databases related to toxicology, hazardous chemicals, environmental health, and toxic releases. Connect with Us ...
Brown, Ramsay A; Swanson, Larry W
2013-09-01
Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to describe systematically the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which this data is curated into neuroinformatics databases to synthesize digitally systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to encode and communicate systematically reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, experimental or observational technique implemented, or nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Bashev, A.
2012-04-01
Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of them are their creators. There are several reasons for this: incompatibility, the specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for developers and complication for users. Complex architecture leads to high costs that block public access; complication prevents users from understanding when and how to use a database. Only databases associated with GoogleMaps avoid these drawbacks, but they could hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, now accessible at maps.sch192.ru. In this database a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date when the result was obtained, the type of station (lake, soil, etc.), and the contributor that submitted the result. Each contributor has their own profile, which allows the reliability of the data to be estimated. Results can be represented on a GoogleMaps satellite image as points at their positions, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own. Results can also be extracted to a *.csv file. For both types of representation one can filter the data by date, object type, parameter type, area, and contributor. Data are uploaded in *.csv format: name of the station; latitude (dd.dddddd); longitude (ddd.dddddd); station type; parameter type; parameter value; date (yyyy-mm-dd). The contributor is recognised on login. This is the minimal set of features required to connect a parameter value with a position and see the results.
All complicated data treatment can be conducted in other programs after extracting the filtered data into a *.csv file. This makes the database understandable to non-experts. The database employs an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with GoogleMaps, and jQueryUI to create the user interface. The database is multilingual: association tables connect translations with elements of the database. In total, development required about 150 hours. The database still has several problems. The main one is the reliability of the data: properly, it needs an expert system to estimate reliability, but elaborating such a system would take more resources than the database itself. The second is the problem of stream selection: how to select stations that are connected with each other (for example, belonging to one water stream) and indicate their sequence. Currently the interface is in English and Russian. Some problems we have already solved. For example, the "same station" problem (sometimes the distance between stations is smaller than the positional error): when a new station is added, the application automatically finds existing stations near that place. We also addressed the problem of object and parameter types (how to regard "EC" and "electrical conductivity" as the same parameter); this has been solved using association tables. If you would like to see the interface in your language, just contact us and we will send you the list of terms and phrases to translate. The main advantage of the database is that it is totally open: everybody can view and extract the data and use them for non-commercial purposes free of charge. Registered users can contribute to the database without payment.
We hope that it will be widely used, first of all for educational purposes, but professional scientists may find it useful as well.
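The upload format described above (station name; latitude; longitude; station type; parameter type; value; date, separated by semicolons) is simple enough to parse with the standard csv module. The sketch below is ours, not the site's PHP importer; the field names and the sample record are illustrative.

```python
import csv
import io

FIELDS = ["station", "lat", "lon", "station_type",
          "parameter", "value", "date"]

def parse_upload(text):
    """Parse the semicolon-separated upload format into dicts,
    converting coordinates and the parameter value to floats."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=";"):
        if not rec or not rec[0].strip():
            continue                      # skip blank lines
        row = dict(zip(FIELDS, (f.strip() for f in rec)))
        row["lat"] = float(row["lat"])    # dd.dddddd
        row["lon"] = float(row["lon"])    # ddd.dddddd
        row["value"] = float(row["value"])
        rows.append(row)
    return rows

sample = "Lake-01;55.123456;037.654321;lake;EC;240;2011-07-15\n"
records = parse_upload(sample)
```

Keeping the format this flat is what lets filtered data round-trip through spreadsheets and other programs, as the abstract emphasizes.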
JNDMS Task Authorization 2 Report
2013-10-01
uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take...Technology ITI Information Technology Infrastructure J2EE Java 2 Enterprise Edition JAR Java Archive. This is an archive file format defined by Java ...standards. JDBC Java Database Connectivity JDW JNDMS Data Warehouse JNDMS Joint Network and Defence Management System JNDMS Joint Network Defence and
2016-03-01
science IT information technology JBOD just a bunch of disks JDBC java database connectivity xviii JPME Joint Professional Military Education JSO...Joint Service Officer JVM java virtual machine MPP massively parallel processing MPTE Manpower, Personnel, Training, and Education NAVMAC Navy...27 external database, whether it is MySQL , Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata
Gainotti, Sabina; Torreri, Paola; Wang, Chiuhui Mary; Reihs, Robert; Mueller, Heimo; Heslop, Emma; Roos, Marco; Badowska, Dorota Mazena; de Paulis, Federico; Kodra, Yllka; Carta, Claudio; Martìn, Estrella Lopez; Miller, Vanessa Rangel; Filocamo, Mirella; Mora, Marina; Thompson, Mark; Rubinstein, Yaffa; Posada de la Paz, Manuel; Monaco, Lucia; Lochmüller, Hanns; Taruscio, Domenica
2018-05-01
In rare disease (RD) research, there is a huge need to systematically collect biomaterials, phenotypic data, and genomic data in a standardized way and to make them findable, accessible, interoperable, and reusable (FAIR). RD-Connect is a six-year global infrastructure project, initiated in November 2012, that links genomic data with patient registries, biobanks, and clinical bioinformatics tools to create a central research resource for RDs. Here, we present the RD-Connect Registry & Biobank Finder, a tool that helps RD researchers find RD biobanks and registries and provides information on the availability and accessibility of content in each database. The finder concentrates information that is currently sparse across different repositories (inventories, websites, scientific journals, technical reports, etc.), including aggregated data and metadata from participating databases. Aggregated data provided by the finder, if appropriately checked, can be used by researchers who are trying to estimate the prevalence of a RD, to organize a clinical trial on a RD, or to estimate the volume of patients seen by different clinical centers. The finder is also a portal to other RD-Connect tools, providing a link to the RD-Connect Sample Catalogue, a large inventory of RD biological samples available in participating biobanks for RD research. There are several kinds of users and potential uses for the RD-Connect Registry & Biobank Finder, including researchers collaborating with academia and the industry, dealing with questions of basic, translational, and/or clinical research. As of November 2017, the finder is populated with aggregated data for 222 registries and 21 biobanks.
Definition and maintenance of a telemetry database dictionary
NASA Technical Reports Server (NTRS)
Knopf, William P. (Inventor)
2007-01-01
A telemetry dictionary database includes a component for receiving spreadsheet workbooks of telemetry data over a web-based interface from other computer devices. Another component routes the spreadsheet workbooks to a specified directory on the host processing device. A process then checks the received spreadsheet workbooks for errors, and if no errors are detected the workbooks are routed to another directory to await initiation of a remote database loading process. The loading process first converts the spreadsheet workbooks to comma-separated value (CSV) files. Next, a network connection is established with the computer system that hosts the telemetry dictionary database, and the CSV files are ported to that system. This is followed by remote initiation of a database loading program. Upon completion of loading, a flatfile generation program is manually initiated to generate a flatfile to be used in a mission operations environment by the core ground system.
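The workbook-to-CSV conversion step in that pipeline can be sketched as follows. This is not the patented system's code: reading an actual spreadsheet workbook would require a spreadsheet library, so the sketch assumes the worksheet rows have already been parsed, and the column names are invented for illustration.

```python
import csv
import io

def sheet_to_csv(rows):
    """Serialize one worksheet (a list of row tuples) to CSV text,
    the intermediate format ported to the database host before the
    loading program is invoked remotely."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

# Hypothetical telemetry-dictionary worksheet.
rows = [("mnemonic", "subsystem", "units"),
        ("BATT_V", "EPS", "volts")]
csv_text = sheet_to_csv(rows)
```

Converting to CSV before transfer decouples the loader from any particular spreadsheet format, which is presumably why the described process stages files this way.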
Similar compounds searching system by using the gene expression microarray database.
Toyoshiba, Hiroyoshi; Sawada, Hiroshi; Naeshiro, Ichiro; Horinouchi, Akira
2009-04-10
Large numbers of microarrays have been examined, and several public and commercial databases have been developed. However, it is not easy to compare in-house microarray data with those in a database because of insufficient reproducibility due to differences in experimental conditions. As one approach to using these databases, we developed the similar compounds searching system (SCSS) on a toxicogenomics database. The datasets of 55 compounds administered to rats in the Toxicogenomics Project (TGP) database in Japan were used in this study. Using the fold-change ranking method developed by Lamb et al. [Lamb, J., Crawford, E.D., Peck, D., Modell, J.W., Blat, I.C., Wrobel, M.J., Lerner, J., Brunet, J.P., Subramanian, A., Ross, K.N., Reich, M., Hieronymus, H., Wei, G., Armstrong, S.A., Haggarty, S.J., Clemons, P.A., Wei, R., Carr, S.A., Lander, E.S., Golub, T.R., 2006. The connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. Science 313, 1929-1935] and a criterion called the hit ratio, the system lets us compare in-house microarray data with those in the database. In-house generated data for clofibrate, phenobarbital, and a proprietary compound were tested to evaluate the performance of the SCSS method. Phenobarbital and clofibrate, which were included in the TGP database, scored highest by the SCSS method. Other high-scoring compounds had effects similar to either phenobarbital (a cytochrome P450 inducer) or clofibrate (a peroxisome proliferator). Some of the high-scoring compounds identified using the proprietary compound-administered rats are known to cause similar toxicological changes in different species. Our results suggest that the SCSS method could be used in drug discovery and development. Moreover, this method may be a powerful tool for understanding the mechanisms by which biological systems respond to various chemical compounds and may also predict adverse effects of new compounds.
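A toy version of signature matching in this spirit can be sketched as below. This is our simplified reading, not the SCSS or Connectivity Map implementation: each profile is reduced to its top-k genes by absolute fold change, and the "hit ratio" is taken as the overlap between the query's and a reference compound's top-k sets. Gene symbols and fold-change values are illustrative.

```python
def top_k_signature(fold_changes, k):
    """Top k genes ranked by absolute fold change: a crude
    stand-in for the fold-change ranking step."""
    ranked = sorted(fold_changes,
                    key=lambda g: abs(fold_changes[g]), reverse=True)
    return set(ranked[:k])

def hit_ratio(query_fc, reference_fc, k):
    """Fraction of the query's top-k genes shared with the
    reference's top-k (our reading of the criterion, not the
    TGP implementation)."""
    return len(top_k_signature(query_fc, k) &
               top_k_signature(reference_fc, k)) / k

# Illustrative profiles: fold changes per gene.
clofibrate = {"Cyp4a1": 5.1, "Acox1": 3.2, "Ehhadh": 2.8, "Alb": 0.1}
in_house   = {"Cyp4a1": 4.7, "Acox1": 0.4, "Ehhadh": 3.5, "Alb": 0.1}
score = hit_ratio(in_house, clofibrate, k=2)   # shares only Cyp4a1
```

Scoring an in-house profile against every compound in the database and ranking by this overlap is the search pattern the abstract describes.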
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Ng, Curtise K C; White, Peter; McKay, Janice C
2009-04-01
Increasingly, the use of web database portfolio systems is noted in medical and health education and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy, and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, based on portfolio pedagogy. Following a pre-determined portfolio framework, a system model is proposed with the components of web, database, and mail servers; server-side scripts; and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and the Q/R service class of the Digital Imaging and Communications in Medicine (DICOM) standard. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions on 'collect', 'select' and 'reflect' for evidence of development/experience, a limit on the number of files (evidence) to be uploaded, an 'Evidence Insertion' function to link individual uploaded artifacts with reflective writing, the capability to accommodate diverse contents, and convenient interfaces for reviewing portfolios and communication. Evidence to date suggests the system supports users in building their portfolios with sound hypertext reflection under a facilitator's guidance, and enables reviewers to monitor students' progress, providing feedback and comments online in a programme-wide setting.
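The Q/R broker's core task, translating an HTTP request into a DICOM C-FIND query, can be sketched as follows; the parameter names and attribute mapping are hypothetical, not the deployed system's schema.

```python
# Hypothetical mapping from HTTP query parameters to DICOM C-FIND
# attribute keywords (study-level Query/Retrieve).
HTTP_TO_DICOM = {
    "patient_id": "PatientID",
    "study_date": "StudyDate",
    "modality": "ModalitiesInStudy",
}

def build_cfind_keys(http_params):
    """Translate an HTTP request's query parameters into a study-level
    C-FIND identifier (keyword -> value), ignoring unknown parameters."""
    keys = {"QueryRetrieveLevel": "STUDY"}
    for name, value in http_params.items():
        if name in HTTP_TO_DICOM:
            keys[HTTP_TO_DICOM[name]] = value
    return keys

query = build_cfind_keys({"patient_id": "P001", "modality": "CR", "page": "2"})
```

A real broker would then issue the C-FIND over the DICOM network layer and render the responses back as an HTTP reply.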
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards unimodal databases for multimedia content description, organization, and retrieval, covering a single type of content such as text, voice, or images; bimodal databases, in contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. Generating a bimodal audio-video database implies creating a connection between the multimedia content through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used to create the bimodal database of violent content; the semantic relationship is established through the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing improves semantic performance if and only if those applications process both types of content. The database contains 580 annotated audiovisual segments, totalling 28 minutes, divided into 41 classes. Bimodal databases are a tool for building applications for the semantic web.
Moving BASISplus and TECHLIBplus from VAX/VMS to UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dominiak, R.
1993-12-31
BASISplus is used at the Laboratory by the Technical Information Services (TIS) Department, which is part of the Information and Publishing Division at Argonne. TIS operates the Argonne Information Management System (AIM). The AIM System consists of the ANL Libraries On-Line Database (a TECHLIBplus database), the Current Journals Database (IDI's Current Contents search), the ANL Publications Tracking Database (a TECHLIBplus database), the Powder Diffraction File Database, and several CD-ROM databases available through a Novell network. The AIM System is available from the desktop of ANL staff through modem and network connections, as well as from the 10 science libraries at Argonne. TIS has been a BASISplus and TECHLIBplus site from the start, and never migrated from BASIS K. The decision was made to migrate from the VAX/VMS platform to a UNIX platform. Migrating a product from one platform to another involves many decisions and considerations. These justifications, decisions, and considerations are explored in this report.
Code of Federal Regulations, 2010 CFR
2010-10-01
... veterans and family members. To be eligible for inclusion in the VetBiz.gov VIP database, the following... disability rating or the veteran died as a direct result of a service-connected disability. Suspending...
Data Base Management: Proceedings of a Conference, November 1-2, 1984 Held at Monterey, California.
1985-07-31
Dolby: Put the Information in the Database, Not the Program (San Jose State University, San Jose, California). 4:15 Douglas Lenat: Relevance of Machine...network model permits multiple owners for one subsidiary entity. The DAPLEX network model includes the subset connection as well. The SOCRATE system...
Monitoring and tracing of critical software systems: State of the work and project definition
2008-12-01
analysis, troubleshooting and debugging. Some of these subsystems already come with ad hoc tracers for events like wireless connections or SCSI disk... SQLite). Additional synthetic events (e.g. states) are added to the database. The database thus consists of contexts (process, CPU, state), event...capability on a [operating] system-by-system basis. Additionally, the mechanics of querying the data in an ad hoc manner outside the boundaries of the
Reactive Aggregate Model Protecting Against Real-Time Threats
2014-09-01
on the underlying functionality of three core components. • MS SQL Server 2008 backend database. • Microsoft IIS running on Windows Server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
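A minimal, single-process sketch of an allocator drawing on a shared table of processor ownership (all names and the claim logic are illustrative; the patented architecture distributes many such allocators over a common database):

```python
class ComputeProcessorAllocator:
    """Toy allocator over a shared {processor_id: owner} table.
    In the described architecture several allocator instances consult
    one common database; here a plain dict stands in for it."""

    def __init__(self, shared_table):
        self.table = shared_table  # processor_id -> owning app (None = free)

    def allocate(self, app, count):
        free = [pid for pid, owner in self.table.items() if owner is None]
        if len(free) < count:
            return []  # not enough free processors; caller may retry later
        chosen = free[:count]
        for pid in chosen:
            self.table[pid] = app
        return chosen

    def release(self, app):
        for pid, owner in self.table.items():
            if owner == app:
                self.table[pid] = None

table = {pid: None for pid in range(4)}
alloc = ComputeProcessorAllocator(table)
got = alloc.allocate("app1", 2)
```

A distributed version would additionally need atomic claim operations on the shared database to avoid two allocators granting the same processor.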
Explore a Career in Health Sciences Information
... tools that range from traditional print journals to electronic databases and the latest mobile devices, health sciences ... an expert search of the literature. connecting licensed electronic resources and decision tools into a patient's electronic ...
Dynamic graph system for a semantic database
Mizell, David
2016-04-12
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
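The triple-to-adjacency-matrix view described above can be sketched as follows; a dictionary-of-keys layout stands in for the compressed sparse form, and the node identifiers are illustrative.

```python
def triples_to_adjacency(triples):
    """Build a sparse adjacency 'matrix' from (node, node, value) entries:
    first fields index rows, second fields index columns, and the third
    field is the element labelling the link between the two nodes."""
    nodes = sorted({s for s, _, _ in triples} | {o for _, o, _ in triples})
    index = {node: i for i, node in enumerate(nodes)}
    matrix = {}  # (row, col) -> value; absent keys are the zero entries
    for first, second, value in triples:
        matrix[(index[first], index[second])] = value
    return nodes, matrix

nodes, matrix = triples_to_adjacency([
    ("alice", "bob", "knows"),
    ("bob", "carol", "employs"),
])
```

A production system would use a compressed sparse row layout rather than a dictionary, but the row/column/element correspondence is the same.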
Dynamic graph system for a semantic database
Mizell, David
2015-01-27
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of the Fundamental Geographic Information Database (FGIDB) more comprehensive, objective, and accurate, this paper studies and establishes a quality model of the FGIDB, formed from the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents, and methods of quality evaluation for the FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. This paper designs the quality elements, evaluation items, and properties of the Fundamental Geographic Information Database step by step based on the quality model framework. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for quality evaluation of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and construction of the standard system for quality evaluation technology.
Ichikawa, Kazunobu; Konta, Tsuneo; Sato, Hiroshi; Ueda, Yoshihiko; Yokoyama, Hitoshi
2017-12-01
In connective tissue diseases, a wide variety of glomerular, tubulointerstitial, and vascular lesions of the kidney are observed. Nonetheless, recent information is limited regarding renal lesions in connective tissue diseases, except in systemic lupus erythematosus (SLE). In this study, we used a nationwide database of biopsy-confirmed renal diseases in Japan (J-RBR) (UMIN000000618). In total, 20,523 registered patients underwent biopsy between 2007 and 2013; from 110 patients with connective tissue diseases except SLE, we extracted data regarding the clinico-pathological characteristics of the renal biopsy. Our analysis included patients with rheumatoid arthritis (RA) (n = 52), Sjögren's syndrome (SjS) (n = 35), scleroderma (n = 10), mixed connective tissue disease (MCTD; n = 5), anti-phospholipid syndrome (APS; n = 3), polymyositis/dermatomyositis (PM/DM; n = 1), Behçet's disease (n = 1) and others (n = 3). The clinico-pathological features differed greatly depending on the underlying disease. The major clinical diagnosis was nephrotic syndrome in RA; chronic nephritic syndrome with mild proteinuria and reduced renal function in SjS; rapidly progressive nephritic syndrome in scleroderma. The major pathological diagnosis was membranous nephropathy (MN) and amyloidosis in RA; tubulointerstitial nephritis in SjS; proliferative obliterative vasculopathy in scleroderma; MN in MCTD. In RA, most patients with nephrosis were treated using bucillamine, and showed membranous nephropathy. Using the J-RBR database, our study revealed that biopsy-confirmed cases of connective tissue diseases such as RA, SjS, scleroderma, and MCTD show various clinical and pathological characteristics, depending on the underlying diseases and the medication used.
Information Collection using Handheld Devices in Unreliable Networking Environments
2014-06-01
different types of mobile devices that connect wirelessly to a database server. The actual backend database is not important to the mobile clients...Google’s infrastructure and local servers with MySQL and PostgreSQL on the backend (ODK 2014b). (2) Google Fusion Tables are used to do basic link...how we conduct business. Our requirements to share information do not change simply because there is little or no existing infrastructure in our
Arnold, Roland; Goldenberg, Florian; Mewes, Hans-Werner; Rattei, Thomas
2014-01-01
The Similarity Matrix of Proteins (SIMAP, http://mips.gsf.de/simap/) database has been designed to massively accelerate computationally expensive protein sequence analysis tasks in bioinformatics. It provides pre-calculated sequence similarities interconnecting the entire known protein sequence universe, complemented by pre-calculated protein features and domains, similarity clusters and functional annotations. SIMAP covers all major public protein databases as well as many consistently re-annotated metagenomes from different repositories. As of September 2013, SIMAP contains >163 million proteins corresponding to ∼70 million non-redundant sequences. SIMAP uses the sensitive FASTA search heuristics, the Smith–Waterman alignment algorithm, the InterPro database of protein domain models and the BLAST2GO functional annotation algorithm. SIMAP assists biologists by facilitating the interactive exploration of the protein sequence universe. Web-Service and DAS interfaces allow connecting SIMAP with any other bioinformatic tool and resource. All-against-all protein sequence similarity matrices of project-specific protein collections are generated on request. Recent improvements allow SIMAP to cover the rapidly growing sequenced protein sequence universe. New Web-Service interfaces enhance the connectivity of SIMAP. Novel tools for interactive extraction of protein similarity networks have been added. Open access to SIMAP is provided through the web portal; the portal also contains instructions and links for software access and flat file downloads. PMID:24165881
Application of real-time database to LAMOST control system
NASA Astrophysics Data System (ADS)
Xu, Lingzhe; Xu, Xinqi
2004-09-01
The QNX-based real-time database is one of the main features of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) control system. It serves as a storage platform for the data flow, recording and updating in a timely manner the status of moving components in the telescope structure as well as environmental parameters around it. The database integrates harmoniously into the administration of the Telescope Control System (TCS). The paper presents methodology and technical tips used in designing the EMPRESS database GUI software package, such as dynamic creation of control widgets, dynamic query, and shared memory. A seamless connection between EMPRESS and QNX's graphical development tool, the Photon Application Builder (PhAB), has been realized, achieving a Windows-like look and feel under a Unix-like operating system. In particular, the real-time performance of the database is analyzed and shown to satisfy the needs of the control system.
Analyzing GAIAN Database (GaianDB) on a Tactical Network
2015-11-30
we connected 3 Raspberry Pis running GaianDB and our augmented version of splatform to a network of 3 CSRs. The Raspberry Pi is a low power, low...based on Debian from a connected secure digital high capacity (SDHC) card or a universal serial bus (USB) device. The Raspberry Pi comes equipped with...requirements, capabilities, and cost make the Raspberry Pi a useful device for sensor experimentation. From there, we performed 3 types of benchmarks
The use of a personal digital assistant for wireless entry of data into a database via the Internet.
Fowler, D L; Hogle, N J; Martini, F; Roh, M S
2002-01-01
Researchers typically record data on a worksheet and at some later time enter it into the database. Wireless data entry and retrieval using a personal digital assistant (PDA) at the site of patient contact can simplify this process and improve efficiency. A surgeon and a nurse coordinator provided the content for the database. The computer programmer created the database, placed the pages of the database on the PDA screen, and researched and installed security measures. Designing the database took 6 months. Meeting Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements for patient confidentiality, satisfying institutional Information Services requirements, and ensuring connectivity required an additional 8 months before the functional system was complete. It is now possible to achieve wireless entry and retrieval of data using a PDA. Potential advantages include collection and entry of data at the same time, easy entry of data from multiple sites, and retrieval of data at the patient's bedside.
NASA Astrophysics Data System (ADS)
Scharberg, Maureen A.; Cox, Oran E.; Barelli, Carl A.
1997-07-01
"The Molecule of the Day" consumer chemical database has been created to allow introductory chemistry students to explore molecular structures of chemicals in household products, and to provide opportunities in molecular modeling for undergraduate chemistry students. Before class begins, an overhead transparency is displayed which shows a three-dimensional molecular structure of a household chemical, and lists relevant features and uses of this chemical. Within answers to questionnaires, students have commented that this molecular graphics database has helped them to visually connect the microscopic structure of a molecule with its physical and chemical properties, as well as its uses in consumer products. It is anticipated that this database will be incorporated into a navigational software package such as Netscape.
Chapter 51: How to Build a Simple Cone Search Service Using a Local Database
NASA Astrophysics Data System (ADS)
Kent, B. R.; Greene, G. R.
The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be set up and configured locally using MySQL. Data will be read into a table, and Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/Dec coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
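A minimal local sketch of the underlying database query, using Python's built-in sqlite3 in place of MySQL/JDBC and a registered angular-separation function (the table and column names are illustrative):

```python
import math
import sqlite3

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sources (name TEXT, ra REAL, dec REAL)")
conn.executemany("INSERT INTO sources VALUES (?, ?, ?)",
                 [("A", 10.0, 20.0), ("B", 10.2, 20.1), ("C", 50.0, -5.0)])
conn.create_function("ang_sep", 4, angular_sep)

# Cone search: all sources within `radius` degrees of (ra, dec).
ra, dec, radius = 10.0, 20.0, 0.5
rows = conn.execute(
    "SELECT name FROM sources WHERE ang_sep(ra, dec, ?, ?) <= ?",
    (ra, dec, radius)).fetchall()
```

A servlet wrapping such a query would then serialize the matching rows as a VOTable for the response.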
High performance semantic factoring of giga-scale semantic graph databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Adolf, Bob; Haglin, David
2010-10-01
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.
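One of the semantic factors listed, connected components, can be computed with a standard union-find over the subject-object pairs of the triples; this small sketch is purely illustrative and is unrelated to the Cray XMT implementation.

```python
def connected_components(edges):
    """Count connected components among the nodes linked by `edges`,
    using union-find with path compression."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components
    roots = {find(node) for node in parent}
    return len(roots)

# Hypothetical subject-object pairs extracted from triples.
n = connected_components([("s1", "o1"), ("o1", "o2"), ("s2", "o3")])
```

At billion-triple scale the same idea is parallelized, but the component structure it recovers is identical.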
JPEG2000 and dissemination of cultural heritage over the Internet.
Politou, Eugenia A; Pavlidis, George P; Chamzas, Christodoulos
2004-03-01
By applying the latest technologies in image compression for managing the storage of massive image data within cultural heritage databases, and by exploiting the universality of the Internet, we are now able not only to effectively digitize, record and preserve, but also to promote the dissemination of cultural heritage. In this work we present an application of the latest image compression standard, JPEG2000, to managing and browsing image databases, focusing on the image transmission aspect rather than database management and indexing. We combine the technologies of JPEG2000 image compression with client-server socket connections and a client browser plug-in, so as to provide an all-in-one package for remote browsing of JPEG2000-compressed image databases, suitable for the effective dissemination of cultural heritage.
The Canadian Connection: Business Online.
ERIC Educational Resources Information Center
Merry, Susan; And Others
1989-01-01
Provides an overview of the Canadian business environment and online sources of business information. The databases described cover the following areas: directories, financial information, stock quotes, investment reports, industrial and economic information, magazines, newspapers, wire services, biographical information, and government…
Highway bridges in the United States--an overview
DOT National Transportation Integrated Search
2007-09-01
Bridges are an integral part of the U.S. highway network, providing links across natural barriers, passage over railroads and highways, and freeway connections. The Federal Highway Administration (FHWA) maintains a database of our nation's highway ...
76 FR 58767 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-22
... Finder in Fiscal Year 2005. The purpose of the Recipe Finder database is to provide our target audience... Connection to this information would inhibit the ability of the target audience to participate in a valuable...
Kuhn, Jens H.; Andersen, Kristian G.; Baize, Sylvain; Bào, Yīmíng; Bavari, Sina; Berthet, Nicolas; Blinkova, Olga; Brister, J. Rodney; Clawson, Anna N.; Fair, Joseph; Gabriel, Martin; Garry, Robert F.; Gire, Stephen K.; Goba, Augustine; Gonzalez, Jean-Paul; Günther, Stephan; Happi, Christian T.; Jahrling, Peter B.; Kapetshi, Jimmy; Kobinger, Gary; Kugelman, Jeffrey R.; Leroy, Eric M.; Maganga, Gael Darren; Mbala, Placide K.; Moses, Lina M.; Muyembe-Tamfum, Jean-Jacques; N’Faly, Magassouba; Nichol, Stuart T.; Omilabu, Sunday A.; Palacios, Gustavo; Park, Daniel J.; Paweska, Janusz T.; Radoshitzky, Sheli R.; Rossi, Cynthia A.; Sabeti, Pardis C.; Schieffelin, John S.; Schoepp, Randal J.; Sealfon, Rachel; Swanepoel, Robert; Towner, Jonathan S.; Wada, Jiro; Wauquier, Nadia; Yozwiak, Nathan L.; Formenty, Pierre
2014-01-01
In 2014, Ebola virus (EBOV) was identified as the etiological agent of a large and still expanding outbreak of Ebola virus disease (EVD) in West Africa and a much more confined EVD outbreak in Middle Africa. Epidemiological and evolutionary analyses confirmed that all cases of both outbreaks are connected to a single introduction each of EBOV into human populations and that both outbreaks are not directly connected. Coding-complete genomic sequence analyses of isolates revealed that the two outbreaks were caused by two novel EBOV variants, and initial clinical observations suggest that neither of them should be considered strains. Here we present consensus decisions on naming for both variants (West Africa: “Makona”, Middle Africa: “Lomela”) and provide database-compatible full, shortened, and abbreviated names that are in line with recently established filovirus sub-species nomenclatures. PMID:25421896
The Matchmaker Exchange: a platform for rare disease gene discovery.
Philippakis, Anthony A; Azzariti, Danielle R; Beltran, Sergi; Brookes, Anthony J; Brownstein, Catherine A; Brudno, Michael; Brunner, Han G; Buske, Orion J; Carey, Knox; Doll, Cassie; Dumitriu, Sergiu; Dyke, Stephanie O M; den Dunnen, Johan T; Firth, Helen V; Gibbs, Richard A; Girdea, Marta; Gonzalez, Michael; Haendel, Melissa A; Hamosh, Ada; Holm, Ingrid A; Huang, Lijia; Hurles, Matthew E; Hutton, Ben; Krier, Joel B; Misyura, Andriy; Mungall, Christopher J; Paschall, Justin; Paten, Benedict; Robinson, Peter N; Schiettecatte, François; Sobreira, Nara L; Swaminathan, Ganesh J; Taschner, Peter E; Terry, Sharon F; Washington, Nicole L; Züchner, Stephan; Boycott, Kym M; Rehm, Heidi L
2015-10-01
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for "the needle in a haystack" to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can "match" these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. Three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow. © 2015 WILEY PERIODICALS, INC.
The Matchmaker Exchange: A Platform for Rare Disease Gene Discovery
Philippakis, Anthony A.; Azzariti, Danielle R.; Beltran, Sergi; Brookes, Anthony J.; Brownstein, Catherine A.; Brudno, Michael; Brunner, Han G.; Buske, Orion J.; Carey, Knox; Doll, Cassie; Dumitriu, Sergiu; Dyke, Stephanie O.M.; den Dunnen, Johan T.; Firth, Helen V.; Gibbs, Richard A.; Girdea, Marta; Gonzalez, Michael; Haendel, Melissa A.; Hamosh, Ada; Holm, Ingrid A.; Huang, Lijia; Hurles, Matthew E.; Hutton, Ben; Krier, Joel B.; Misyura, Andriy; Mungall, Christopher J.; Paschall, Justin; Paten, Benedict; Robinson, Peter N.; Schiettecatte, François; Sobreira, Nara L.; Swaminathan, Ganesh J.; Taschner, Peter E.; Terry, Sharon F.; Washington, Nicole L.; Züchner, Stephan; Boycott, Kym M.; Rehm, Heidi L.
2015-01-01
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for “the needle in a haystack” to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can “match” these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. Three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow. PMID:26295439
The Matchmaker Exchange: A Platform for Rare Disease Gene Discovery
Philippakis, Anthony A.; Azzariti, Danielle R.; Beltran, Sergi; ...
2015-09-17
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for "the needle in a haystack" to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can "match" these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. In conclusion, three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow.
The Matchmaker Exchange: A Platform for Rare Disease Gene Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philippakis, Anthony A.; Azzariti, Danielle R.; Beltran, Sergi
There are few better examples of the need for data sharing than in the rare disease community, where patients, physicians, and researchers must search for "the needle in a haystack" to uncover rare, novel causes of disease within the genome. Impeding the pace of discovery has been the existence of many small siloed datasets within individual research or clinical laboratory databases and/or disease-specific organizations, hoping for serendipitous occasions when two distant investigators happen to learn they have a rare phenotype in common and can "match" these cases to build evidence for causality. However, serendipity has never proven to be a reliable or scalable approach in science. As such, the Matchmaker Exchange (MME) was launched to provide a robust and systematic approach to rare disease gene discovery through the creation of a federated network connecting databases of genotypes and rare phenotypes using a common application programming interface (API). The core building blocks of the MME have been defined and assembled. In conclusion, three MME services have now been connected through the API and are available for community use. Additional databases that support internal matching are anticipated to join the MME network as it continues to grow.
Bhinder, Prabhjot; Oberoi, Mandeep Singh
2009-01-01
Hospitals require better information connectivity because the timing and content of the information to be exchanged are critical. Past successes have generated renewed emphasis on the expectations and credibility of current enterprise resource planning (ERP) applications in health care. Bringing improved connectivity and matching it with critical timing remains the ultimate goal. Currently, the majority of ERP system integrators are unable to meet these requirements of the healthcare industry. The concept of ERP is perceived to have made the process of segregating bills and patient records much easier. The industry is thus able to save more lives, but at the cost of individual privacy, since the common database shared by hospitals gives quicker access to patient records and medical histories. Businesses such as health care providers, pharmaceutical manufacturers, and distributors have already implemented rapid ERPs. The new concept of "Smart Pharmacies" will link the process all the way from drug delivery, patient care, demand management, and drug repositories to pharmaceutical manufacturers, maintaining regulatory compliance and making the vital connections where these businesses talk to each other electronically.
Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis
NASA Astrophysics Data System (ADS)
Awrangjeb, M.; Fraser, C. S.; Lu, G.
2015-08-01
Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique offers not only higher performance in terms of completeness and correctness, but also a lower number of undersegmentation errors compared with its original counterpart. The proposed change detection technique produces no omission errors and can thus be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his or her decision with a minimum number of mouse clicks.
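The connected component step described above (label components in a difference mask, then filter by size) can be sketched in a few lines. The function names and the area threshold are illustrative, not taken from the paper:

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling of a binary grid via BFS.
    Returns a list of components, each a list of (row, col) cells."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(comp)
    return components

def plausible_changes(components, min_area):
    """Keep only components large enough to be a real new/demolished part."""
    return [c for c in components if len(c) >= min_area]
```

The paper additionally checks width and height per component; the same per-component loop would carry those tests.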
sunstardb: A Database for the Study of Stellar Magnetism and the Solar-stellar Connection
NASA Astrophysics Data System (ADS)
Egeland, Ricky
2018-05-01
The “solar-stellar connection” began as a relatively small field of research focused on understanding the processes that generate magnetic fields in stars and sometimes lead to a cyclic pattern of long-term variability in activity, as demonstrated by our Sun. This area of study has recently become more broadly pertinent to questions of exoplanet habitability and exo-space weather, as well as stellar evolution. In contrast to other areas of stellar research, individual stars in the solar-stellar connection often have a distinct identity and character in the literature, due primarily to the rarity of the decades-long time-series that are necessary for studying stellar activity cycles. Furthermore, the underlying stellar dynamo is not well understood theoretically, and is thought to be sensitive to several stellar properties, e.g., luminosity, differential rotation, and the depth of the convection zone, which in turn are often parameterized by other more readily available properties. Relevant observations are scattered throughout the literature and existing stellar databases, and consolidating information for new studies is a tedious and laborious exercise. To accelerate research in this area I developed sunstardb, a relational database of stellar properties and magnetic activity proxy time-series keyed by individual named stars. The organization of the data eliminates the need for the problematic catalog cross-matching operations inherent when building an analysis data set from heterogeneous sources. In this article I describe the principles behind sunstardb, the data structures and programming interfaces, as well as use cases from solar-stellar connection research.
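A minimal sketch of the keyed-by-star-name idea, using SQLite and invented table and column names (the real sunstardb schema is richer); the star HD 81809 and its colour index are used only as illustrative values:

```python
import sqlite3

# Properties and time-series are both keyed by a canonical star name, so a
# single join by name replaces the catalog cross-matching the text describes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE star     (name TEXT PRIMARY KEY);
CREATE TABLE property (star TEXT REFERENCES star(name),
                       key TEXT, value REAL);
CREATE TABLE series   (star TEXT REFERENCES star(name),
                       jd REAL, s_index REAL);  -- activity proxy time-series
""")
conn.execute("INSERT INTO star VALUES ('HD 81809')")
conn.execute("INSERT INTO property VALUES ('HD 81809', 'B-V', 0.64)")
conn.executemany("INSERT INTO series VALUES (?, ?, ?)",
                 [('HD 81809', 2450000.5, 0.17),
                  ('HD 81809', 2450030.5, 0.18)])

# Assemble an analysis set with one name-keyed join.
rows = conn.execute("""
    SELECT p.value, s.jd, s.s_index
    FROM property p JOIN series s ON p.star = s.star
    WHERE p.star = 'HD 81809' AND p.key = 'B-V'
""").fetchall()
```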
Application of SQL database to the control system of MOIRCS
NASA Astrophysics Data System (ADS)
Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo
2006-06-01
MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform near-infrared imaging and spectroscopy with a cold slit mask, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal plane array electronics operate the two HAWAII2 detectors, and two other PCs are used for integrated control and quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, the slit exchange mechanism for spectroscopy) are controlled via RS232C interfaces, they are accessible over TCP/IP using TCP/IP-to-RS232C converters. Moreover, other devices are connected to the Ethernet LAN directly. This network-distributed structure provides flexibility of hardware configuration. We have constructed an integrated control system for this network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. To mediate between the device interfaces and the user interfaces, we define three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the system's data in an SQL database server: status data, FITS header data, and also metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema to manage observations of MOIRCS with Subaru.
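The communication layer's role, relaying messages between the external and internal layers while recording status in SQL, can be sketched as follows. All names here are hypothetical stand-ins, with SQLite in place of the actual SQL server:

```python
import sqlite3

# Minimal sketch of a layered control idea: a command from the external
# (user) layer is routed to an internal (device) handler, and the reported
# status is persisted as rows in an SQL status table by the middle layer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE status (device TEXT, key TEXT, value TEXT)")

def communicate(device_handlers, device, command):
    """Route a command to a device handler and log the reported status."""
    reply = device_handlers[device](command)      # internal layer
    for key, value in reply.items():              # communication layer: log
        db.execute("INSERT INTO status VALUES (?, ?, ?)",
                   (device, key, str(value)))
    return reply                                  # back to the external layer

handlers = {"filter_turret": lambda cmd: {"position": 3, "moving": False}}
reply = communicate(handlers, "filter_turret", "GET STATUS")
```

Keeping the log in the middle layer means any user interface can query the latest device state from SQL without talking to the hardware.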
Metabolomics analysis: Finding out metabolic building blocks
2017-01-01
In this paper we propose a new methodology for the analysis of metabolic networks. We use the notion of strongly connected components of a graph, called in this context metabolic building blocks. Every strongly connected component is contracted to a single node in such a way that the resulting graph is a directed acyclic graph, called a metabolic DAG, with a considerably reduced number of nodes. The property of being a directed acyclic graph brings out a background graph topology that reveals the connectivity of the metabolic network, as well as bridges, isolated nodes and cut nodes. Altogether, this becomes key information for the discovery of functional metabolic relations. Our methodology has been applied to the glycolysis and purine metabolic pathways for all organisms in the KEGG database, although it is general enough to work on any database. As expected, using the metabolic DAG formalism, a considerable reduction in the size of the metabolic networks has been obtained, especially in the case of the purine pathway due to its relatively large size. As a proof of concept, from the information captured by a metabolic DAG and its corresponding metabolic building blocks, we obtain the core of the glycolysis pathway and the core of the purine metabolism pathway and detect some essential metabolic building blocks that reveal the key reactions in both pathways. Finally, the application of our methodology to the glycolysis and purine metabolism pathways reproduces the tree of life for the whole set of organisms represented in the KEGG database, which supports the utility of this research. PMID:28493998
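The contraction of strongly connected components into a metabolic DAG is a standard graph operation; below is a minimal sketch using Kosaraju's algorithm (not necessarily the authors' implementation):

```python
def condensation(graph):
    """Contract each strongly connected component of a directed graph
    (adjacency dict) to one node; return (components, dag_edges)."""
    # Kosaraju: order nodes by finish time, then explore the reverse graph.
    nodes = list(graph)
    reverse = {u: [] for u in nodes}
    for u in nodes:
        for v in graph[u]:
            reverse[v].append(u)

    seen, order = set(), []
    def dfs(u, adj, out):
        """Iterative DFS appending nodes to `out` in post-order."""
        stack = [(u, iter(adj[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            advanced = False
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    for u in nodes:
        if u not in seen:
            dfs(u, graph, order)

    seen, comp_of, comps = set(), {}, []
    for u in reversed(order):          # decreasing finish time
        if u not in seen:
            members = []
            dfs(u, reverse, members)
            for m in members:
                comp_of[m] = len(comps)
            comps.append(members)

    # Edges between distinct components form the acyclic condensation.
    dag_edges = {(comp_of[u], comp_of[v])
                 for u in nodes for v in graph[u]
                 if comp_of[u] != comp_of[v]}
    return comps, dag_edges
```

Each returned component plays the role of a "metabolic building block"; the `dag_edges` set is the reduced metabolic DAG.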
Virtual Teaching on the Tundra.
ERIC Educational Resources Information Center
McAuley, Alexander
1998-01-01
Describes how a teacher and a distance-learning consultant collaborate in using the Internet and the Computer Supported Intentional Learning Environment (CSILE) to connect multicultural students on harsh Baffin Island (Canada). Discusses the creation of the class's database and future implications. (AEF)
WorldWideScience.org: the global science gateway.
Fitzpatrick, Roberta Bronson
2009-10-01
WorldWideScience.org is a Web-based global gateway connecting users to both national and international scientific databases and portals. This column will provide background information on the resource as well as introduce basic searching practices for users.
The Topography of Names and Places.
ERIC Educational Resources Information Center
Morehead, Joe
1999-01-01
Discusses geographic naming with Geographic Information Systems (GIS) technology. Highlights include the Geographic Names Information System (GNIS) online database; United States Geological Survey (USGS) national mapping information; the USGS-Microsoft connection; and panoramic maps and the small LizardTech company. (AEF)
Technical Aspects of Interfacing MUMPS to an External SQL Relational Database Management System
Kuzmak, Peter M.; Walters, Richard F.; Penrod, Gail
1988-01-01
This paper describes an interface connecting InterSystems MUMPS (M/VX) to an external relational DBMS, the SYBASE Database Management System. The interface enables MUMPS to operate in a relational environment and gives the MUMPS language full access to a complete set of SQL commands. MUMPS generates SQL statements as ASCII text and sends them to the RDBMS. The RDBMS executes the statements and returns ASCII results to MUMPS. The interface suggests that the language features of MUMPS make it an attractive tool for use in the relational database environment. The approach described in this paper separates MUMPS from the relational database. Positioning the relational database outside of MUMPS promotes data sharing and permits a number of different options to be used for working with the data. Other languages like C, FORTRAN, and COBOL can access the RDBMS database. Advanced tools provided by the relational database vendor can also be used. SYBASE is an advanced high-performance transaction-oriented relational database management system for the VAX/VMS and UNIX operating systems. SYBASE is designed using a distributed open-systems architecture, and is relatively easy to interface with MUMPS.
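The text-in/text-out contract of the interface, SQL sent out as ASCII, results returned as ASCII, can be sketched with SQLite standing in for SYBASE (a hypothetical substitution; the function name is invented):

```python
import sqlite3

def execute_ascii(conn, sql_text):
    """Take an SQL statement as plain text and return results as ASCII
    lines, mimicking the text-in/text-out contract of a language-to-RDBMS
    gateway such as the MUMPS interface described above."""
    cursor = conn.execute(sql_text)
    if cursor.description is None:        # DDL/DML: no result set
        return ["OK"]
    return ["|".join(str(field) for field in row) for row in cursor]

conn = sqlite3.connect(":memory:")
execute_ascii(conn, "CREATE TABLE patient (id INTEGER, name TEXT)")
execute_ascii(conn, "INSERT INTO patient VALUES (1, 'DOE,JOHN')")
lines = execute_ascii(conn, "SELECT id, name FROM patient")
```

Because only ASCII crosses the boundary, the calling language needs no binding to the database's native data structures, which is exactly what made the approach attractive for MUMPS.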
A multidisciplinary database for global distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, P.J.
The issue of selenium toxicity in the environment has been documented in the scientific literature for over 50 years. Recent studies reveal a complex connection between selenium and human and animal populations. This article introduces a bibliographic citation database on selenium in the environment developed for global distribution via the Internet by the University of Wyoming Libraries. The database incorporates material from commercial sources, print abstracts, indexes, and U.S. government literature, resulting in a multidisciplinary resource. Relevant disciplines include biology, medicine, veterinary science, botany, chemistry, geology, pollution, aquatic sciences, ecology, and others. It covers the years 1985-1996 for most subject material, with additional years being added as resources permit.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
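The frame-generation step, turning label and connectivity data into per-component frames, might be sketched like this; the structure and names are invented for illustration, not taken from AKG:

```python
def build_frames(components, connections):
    """Build a frame-based model from CAD-style connectivity data.
    components: {label: type}; connections: [(from_label, to_label)].
    Returns one frame per component recording its type and neighbours."""
    frames = {label: {"type": ctype, "inputs": [], "outputs": []}
              for label, ctype in components.items()}
    for src, dst in connections:
        frames[src]["outputs"].append(dst)
        frames[dst]["inputs"].append(src)
    return frames

# Hypothetical two-component system: a pump feeding a valve.
frames = build_frames({"V1": "valve", "P1": "pump"}, [("P1", "V1")])
```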
NASA Astrophysics Data System (ADS)
Yagi, Yukio; Takahashi, Kaei
The purpose of this report is to describe how the activities for managing technical information has been and is now being conducted by the Engineering department of Nippon Kokan Corp. In addition, as a practical example of database generation promoted by the department, this book gives whole aspects of the NEW-KOTIS (background of its development, history, features, functional details, control and operation method, use in search operations, and so forth). The NEW-KOTIS (3rd-term system) is an "in-house technical information database system," which started its operation on May, 1987. This database system now contains approximately 65,000 information items (research reports, investigation reports, technical reports, etc.) generated within the company, and this information is available to anyone in any department through the network connecting all the company's structures.
ECG-ViEW II, a freely accessible electrocardiogram database
Park, Man Young; Lee, Sukhoon; Jeon, Min Seok; Yoon, Dukyong; Park, Rae Woong
2017-01-01
The Electrocardiogram Vigilance with Electronic data Warehouse II (ECG-ViEW II) is a large, single-center database comprising numeric parameter data of the surface electrocardiograms of all patients who underwent testing from 1 June 1994 to 31 July 2013. The electrocardiographic data include the test date, clinical department, RR interval, PR interval, QRS duration, QT interval, QTc interval, P axis, QRS axis, and T axis. These data are connected with patient age, sex, ethnicity, comorbidities, age-adjusted Charlson comorbidity index, prescribed drugs, and electrolyte levels. This longitudinal observational database contains 979,273 electrocardiograms from 461,178 patients over a 19-year study period. This database can provide an opportunity to study electrocardiographic changes caused by medications, disease, or other demographic variables. ECG-ViEW II is freely available at http://www.ecgview.org. PMID:28437484
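As a small example of the kind of derived analysis the database supports, the rate-corrected QT interval can be computed from the stored QT and RR intervals. Bazett's formula is a standard correction, not something specific to ECG-ViEW II, and the input values below are illustrative:

```python
from math import sqrt

def qtc_bazett(qt_ms, rr_ms):
    """Heart-rate-corrected QT interval (Bazett): QTc = QT / sqrt(RR),
    with RR expressed in seconds. Inputs and output in milliseconds."""
    return qt_ms / sqrt(rr_ms / 1000.0)

# Illustrative values, not taken from the database:
qtc = qtc_bazett(400.0, 800.0)   # RR 800 ms corresponds to 75 bpm
```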
Security and privacy qualities of medical devices: an analysis of FDA postmarket surveillance.
Kramer, Daniel B; Baker, Matthew; Ransford, Benjamin; Molina-Markham, Andres; Stewart, Quinn; Fu, Kevin; Reynolds, Matthew R
2012-01-01
Medical devices increasingly depend on computing functions such as wireless communication and Internet connectivity for software-based control of therapies and network-based transmission of patients' stored medical information. These computing capabilities introduce security and privacy risks, yet little is known about the prevalence of such risks within the clinical setting. We used three comprehensive, publicly available databases maintained by the Food and Drug Administration (FDA) to evaluate recalls and adverse events related to security and privacy risks of medical devices. Review of weekly enforcement reports identified 1,845 recalls; 605 (32.8%) of these included computers, 35 (1.9%) stored patient data, and 31 (1.7%) were capable of wireless communication. Searches of databases specific to recalls and adverse events identified only one event with a specific connection to security or privacy. Software-related recalls were relatively common, and most (81.8%) mentioned the possibility of upgrades, though only half of these provided specific instructions for the update mechanism. Our review of recalls and adverse events from federal government databases reveals sharp inconsistencies with databases at individual providers with respect to security and privacy risks. Recalls related to software may increase security risks because of unprotected update and correction mechanisms. To detect signals of security and privacy problems that adversely affect public health, federal postmarket surveillance strategies should rethink how to effectively and efficiently collect data on security and privacy problems in devices that increasingly depend on computing systems susceptible to malware.
SIMOS feasibility report, task 4 : sign inventory management and ordering system
DOT National Transportation Integrated Search
1997-12-01
The Sign Inventory Management and Ordering System (SIMOS) design is a merger of existing manually maintained information management systems married to PennDOT's GIS and department-wide mainframe database to form a logical connection for enhanced sign...
DATA FOR ENVIRONMENTAL MODELING (D4EM)
Data is a basic requirement for most modeling applications. Collecting data is expensive and time consuming. High speed internet connections and growing databases of online environmental data go a long way to overcoming issues of data scarcity. Among the obstacles still remain...
Ezra Tsur, Elishai
2017-01-01
Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, Biomodels and the National Centre for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at: http://nbel-lab.com.
NASA Astrophysics Data System (ADS)
Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.
2006-12-01
The federation of databases is not a new endeavor. Great strides have been made, e.g., in the health and astrophysics communities. Reviews of those successes indicate that they have been able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which will suffer growing pains due to social, national, and operational issues, but rather to systematically develop the architecture that will enable cross-resource (database, repository, tool, interface) interaction. CHRONOS has accomplished the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME).
The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs, as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with ReST-based approaches is vital to facilitate service use by experienced and less experienced information technology groups. Application of these services with XML-based schemas allows for the connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.
A spatial-temporal system for dynamic cadastral management.
Nan, Liu; Renyi, Liu; Guangliang, Zhu; Jiong, Xie
2006-03-01
A practical spatio-temporal database (STDB) technique for dynamic urban land management is presented. One of the STDB models, the expanded Base State with Amendments (BSA) model, is selected as the basis for developing the dynamic cadastral management technique. Two approaches, Section Fast Indexing (SFI) and Storage Factors of Variable Granularity (SFVG), are used to improve the efficiency of the BSA model. Both spatial graphic data and attribute data are stored, through a succinct engine, in a standard relational database management system (RDBMS) for the actual implementation of the BSA model. The spatio-temporal database is divided into three interdependent sub-databases: the present DB, the history DB and the procedures-tracing DB. The efficiency of database operation is improved by making the database connection in the bottom layer of Microsoft SQL Server. The spatio-temporal system can be provided at low cost while satisfying the basic needs of urban land management in China. The approaches presented in this paper may also be of significance to countries where land patterns change frequently or to agencies where financial resources are limited.
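The BSA idea, store a base state plus a log of amendments and reconstruct any historical state by replay, can be sketched as follows (the data layout is invented for illustration):

```python
def state_at(base_states, amendments, t):
    """Base State with Amendments: reconstruct the cadastral state at time t
    by taking the latest base state at or before t and replaying the
    amendments recorded between that snapshot and t.
    base_states: {time: {parcel: owner}}; amendments: [(time, parcel, owner)],
    where owner=None records a parcel's removal."""
    base_time = max(bt for bt in base_states if bt <= t)
    state = dict(base_states[base_time])
    for when, parcel, owner in sorted(amendments):
        if base_time < when <= t:
            if owner is None:
                state.pop(parcel, None)   # parcel removed
            else:
                state[parcel] = owner     # parcel created or transferred
    return state

base_states = {0: {"p1": "A"}}
amendments = [(1, "p2", "B"), (2, "p1", "C"), (3, "p2", None)]
```

Refinements such as SFI and SFVG in the paper exist precisely to shorten this replay: more base states or finer-grained storage factors mean fewer amendments to apply per query.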
Clinician-Oriented Access to Data - C.O.A.D.: A Natural Language Interface to a VA DHCP Database
Levy, Christine; Rogers, Elizabeth
1995-01-01
Hospitals collect enormous amounts of data related to the ongoing care of patients. Unfortunately, a clinician's access to the data is limited by the complexities of the database structure and/or the programming skills required to access the database. The COAD project attempts to bridge the gap between the clinical user's need for specific information from the database and the wealth of data residing in the hospital information system. The project design includes a natural language interface to data contained in a VA DHCP database. We have developed a prototype which links natural language software to certain DHCP data elements, including patient demographics, prescriptions, diagnoses, laboratory data, and provider information. English queries can be typed into the system, and answers to the questions are returned. Future work includes refinement of the natural language/DHCP connections to enable more sophisticated queries, and optimization of the system to reduce response time to user questions.
Pavlicek, W; Zavalkovskiy, B; Eversman, W G
1999-05-01
Mayo Clinic Scottsdale (MCS) is a busy outpatient facility (150,000 examinations per year) connected via asynchronous transfer mode (ATM; OC-3, 155 Mb/s) to a new Mayo Clinic Hospital (178 beds) located more than 12 miles distant. A primary care facility staffed by radiology lies roughly halfway between the hospital and clinic, connected to both. Installed at each of the three locations is a high-speed star topology image network providing direct fiber connection (160 Mb/s) from the local image storage unit (ISU) to the local radiology and clinical workstations. The clinic has 22 workstations in its star, the hospital has 13, and the primary care practice has two. In response to Mayo's request for a seamless service among the three locations, the vendor (GE Medical Systems, Milwaukee, WI) provided enhanced connectivity capability in a two-step process. First, a transfer gateway (TGW) was installed, tested, and implemented to provide the needed communication of the examinations generated at the three sites. Any examinations generated at either the hospital or the primary care facility (specified as the remote stars) automatically transfer their images to the ISU at the clinic. Permanent storage (Kodak optical jukebox, Rochester, NY) is only connected to the hub (clinic) star. Thus, the hub ISU is provided with a copy of all examinations, while the two remote ISUs maintain local exams. Prefetching from the archive is intelligently accomplished during off hours only to the hub star, thus providing the remote stars with network-dependent access to comparison images. Image transfer is possible via remote log-on. The second step was the installation of an image transfer server (ITS) to replace the slower Digital Imaging and Communications in Medicine (DICOM)-based TGW, and a central higher-performance database to replace the multiple database environment.
This topology provides an enterprise view of the images at the three locations, while maintaining the high-speed performance of the local star connection to what is now called the short-term storage (STS). Performance was measured: 25 chest examinations (17 MB each) transferred in just over 4 minutes. Integration with the radiology information management system (RIMS) was modified to provide location-specific report and examination interfaces, allowing local filtering of the worklist. Remote and near real-time consultation, and remote examination monitoring of modalities, are addressed with this technologic approach. The single-database ITS environment has been installed for testing prior to implementation.
DOT National Transportation Integrated Search
2014-01-01
Connected vehicle wireless data communications can enable safety applications that may reduce injuries and fatalities suffered on our roads and highways, as well as enabling reductions in traffic congestion and impacts on the environment. As a critic...
The Histone Database: an integrated resource for histones and histone fold-containing proteins
Mariño-Ramírez, Leonardo; Levine, Kevin M.; Morales, Mario; Zhang, Suiyuan; Moreland, R. Travis; Baxevanis, Andreas D.; Landsman, David
2011-01-01
Eukaryotic chromatin is composed of DNA and protein components—core histones—that act to compactly pack the DNA into nucleosomes, the fundamental building blocks of chromatin. These nucleosomes are connected to adjacent nucleosomes by linker histones. Nucleosomes are highly dynamic and, through various core histone post-translational modifications and incorporation of diverse histone variants, can serve as epigenetic marks to control processes such as gene expression and recombination. The Histone Sequence Database is a curated collection of sequences and structures of histones and non-histone proteins containing histone folds, assembled from major public databases. Here, we report a substantial increase in the number of sequences and taxonomic coverage for histone and histone fold-containing proteins available in the database. Additionally, the database now contains an expanded dataset that includes archaeal histone sequences. The database also provides comprehensive multiple sequence alignments for each of the four core histones (H2A, H2B, H3 and H4), the linker histones (H1/H5) and the archaeal histones. The database also includes current information on solved histone fold-containing structures. The Histone Sequence Database is an inclusive resource for the analysis of chromatin structure and function focused on histones and histone fold-containing proteins. Database URL: The Histone Sequence Database is freely available and can be accessed at http://research.nhgri.nih.gov/histones/. PMID:22025671
Drug-Path: a database for drug-induced pathways
Zeng, Hui; Cui, Qinghua
2015-01-01
Several databases of drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarrays and RNA-sequencing have produced many drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore contribute to studying the mechanisms of drugs and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, which was generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite. The web site was implemented using Django, a Python web framework. We believe that this database will be useful for related research. Database URL: http://www.cuilab.cn/drugpath PMID:26130661
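Pathway enrichment of the kind described is usually an over-representation test on the hypergeometric distribution; below is a self-contained sketch (the paper does not specify its exact test, so treat this as a generic formulation):

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper tail: probability of drawing >= k pathway genes
    when n deregulated genes are sampled from a universe of N genes,
    K of which belong to the pathway (the usual over-representation test)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

A pathway would be reported as enriched for a drug's upregulated (or downregulated) gene set when this p-value, typically after multiple-testing correction, falls below a chosen threshold.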
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 video and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the state of the system. The generated parameters are then used to select the most suitable data transfer path on the network, so that retrieval adapts to the distributed multimedia database system.
The evolution of pore connectivity in volcanic rocks
NASA Astrophysics Data System (ADS)
Colombier, Mathieu; Wadsworth, Fabian B.; Gurioli, Lucia; Scheu, Bettina; Kueppers, Ulrich; Di Muro, Andrea; Dingwell, Donald B.
2017-03-01
Pore connectivity is a measure of the fraction of pore space (vesicles, voids or cracks) in a material that is interconnected on the system length scale. Pore connectivity is fundamentally related to permeability, which has been shown to control magma outgassing and the explosive potential of magma during ascent in the shallowest part of the crust. Here, we compile a database of connectivity and porosity from published sources and supplement this with additional measurements, using natural volcanic rocks produced in a broad range of eruptive styles and with a range of bulk composition. The database comprises 2715 pairs of connectivity C and porosity ϕ values for rocks from 35 volcanoes as well as 116 products of experimental work. For 535 volcanic rock samples, the permeability k was also measured. Data from experimental studies constrain the general features of the relationship between C and ϕ associated with both vesiculation and densification processes, which can then be used to interpret natural data. To a first order, we show that a suite of rocks originating from effusive eruptive behaviour can be distinguished from rocks originating from explosive eruptive behaviour using C and ϕ. We observe that on this basis, a particularly clear distinction can be made between scoria formed in fire-fountains and that formed in Strombolian activity. With increasing ϕ, the onset of connectivity occurs at the percolation threshold ϕc which in turn can be hugely variable. We demonstrate that C is an excellent metric for constraining ϕc in suites of porous rocks formed in a common process and discuss the range of ϕc values recorded in volcanic rocks. The percolation threshold is key to understanding the onset of permeability, outgassing and compaction in shallow magmas. 
We show that this threshold is dramatically different in rocks formed by densification than in rocks formed by vesiculation, and propose that it is the dominant control on the evolution of permeability at porosities above ϕc.
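The connectivity C used throughout this abstract is simply the connected fraction of the total pore space, typically obtained by comparing total porosity (from bulk and solid density) with the isolated porosity seen by helium pycnometry. A minimal sketch of that bookkeeping; the sample values are illustrative, not from the compiled database:

```python
def connectivity(phi_total, phi_isolated):
    """C = connected porosity / total porosity, dimensionless in [0, 1]."""
    if not 0.0 <= phi_isolated <= phi_total <= 1.0:
        raise ValueError("need 0 <= phi_isolated <= phi_total <= 1")
    if phi_total == 0.0:
        return 0.0  # a fully dense rock has no pore space to connect
    return (phi_total - phi_isolated) / phi_total

# A vesicular sample with 60% total porosity of which 3% (absolute) is
# isolated: C ≈ 0.95, i.e. nearly all pores are interconnected.
c = connectivity(0.60, 0.03)
```

Below the percolation threshold ϕc, C collapses toward zero; plotting C against ϕ for a suite of related samples is what allows ϕc to be constrained.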
JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System
NASA Astrophysics Data System (ADS)
Soppera, N.; Bossant, M.; Dupont, E.
2014-06-01
JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.
JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soppera, N., E-mail: nicolas.soppera@oecd.org; Bossant, M.; Dupont, E.
JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.
X-Ray Transition Energies Database
National Institute of Standards and Technology Data Gateway
SRD 128 NIST X-Ray Transition Energies Database (Web, free access) This X-ray transition table provides the energies and wavelengths for the K and L transitions connecting energy levels having principal quantum numbers n = 1, 2, 3, and 4. The elements covered range from Z = 10, neon, to Z = 100, fermium. There are two unique features of this database: (1) a serious attempt to place all experimental values on a scale consistent with the International System of Units (SI), and (2) the inclusion of accurate theoretical estimates for all transitions.
Quirós, Miguel; Gražulis, Saulius; Girdzijauskaitė, Saulė; Merkys, Andrius; Vaitkus, Antanas
2018-05-18
Computer descriptions of chemical molecular connectivity are necessary for searching chemical databases and for predicting chemical properties from molecular structure. In this article, the ongoing work to describe the chemical connectivity of entries contained in the Crystallography Open Database (COD) in SMILES format is reported. This collection of SMILES is publicly available for chemical (substructure) searching or for any other purpose on an open-access basis, as is the COD itself. The conventions that have been followed for the representation of compounds that do not fit into valence bond theory are outlined for the most frequently found cases. The procedure for deriving SMILES from the CIF files starts with checking whether the atoms in the asymmetric unit are a chemically acceptable image of the compound. When they are not (molecule on a symmetry element, disorder, polymeric species, etc.), the previously published cif_molecule program is used to obtain such an image in many cases. The program package Open Babel is then applied to generate SMILES strings from the CIF files (either those taken directly from the COD or those produced by cif_molecule, when applicable). The results are then checked and/or fixed by a human editor, in a computer-aided task that at present still consumes a great deal of human time. Even if the procedure still needs to be improved to make it more automatic (and hence faster), it has already yielded more than 160,000 curated chemical structures. The purpose of this article is to announce the existence of this work to the chemical community and to encourage use of its results.
BIOZON: a system for unification, management and analysis of heterogeneous biological data.
Birkland, Aaron; Yona, Golan
2006-02-15
Integration of heterogeneous data types is a challenging problem, especially in biology, where the number of databases and data types increase rapidly. Amongst the problems that one has to face are integrity, consistency, redundancy, connectivity, expressiveness and updatability. Here we present a system (Biozon) that addresses these problems, and offers biologists a new knowledge resource to navigate through and explore. Biozon unifies multiple biological databases consisting of a variety of data types (such as DNA sequences, proteins, interactions and cellular pathways). It is fundamentally different from previous efforts as it uses a single extensive and tightly connected graph schema wrapped with hierarchical ontology of documents and relations. Beyond warehousing existing data, Biozon computes and stores novel derived data, such as similarity relationships and functional predictions. The integration of similarity data allows propagation of knowledge through inference and fuzzy searches. Sophisticated methods of query that span multiple data types were implemented and first-of-a-kind biological ranking systems were explored and integrated. The Biozon system is an extensive knowledge resource of heterogeneous biological data. Currently, it holds more than 100 million biological documents and 6.5 billion relations between them. The database is accessible through an advanced web interface that supports complex queries, "fuzzy" searches, data materialization and more, online at http://biozon.org.
Moser, Richard P.; Hesse, Bradford W.; Shaikh, Abdul R.; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-01-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment, a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward more efficiently than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized by the National Cancer Institute as an instantiation of Science 2.0 principles, with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. This is done by creating an online venue connected to the Cancer Biomedical Informatics Grid (caBIG®) where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on and viewing metadata about the measures and associated constructs. This paper describes the Web 2.0 principles on which the GEM database is based, describes its functionality, and discusses some of the important issues involved in creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. PMID:21521586
Clique-based data mining for related genes in a biomedical database.
Matsunaga, Tsutomu; Yonemori, Chikara; Tomita, Etsuji; Muramatsu, Masaaki
2009-07-01
Progress in the life sciences cannot be made without integrating biomedical knowledge on numerous genes in order to help formulate hypotheses on the genetic mechanisms behind various biological phenomena, including diseases. There is thus a strong need for a way to automatically and comprehensively search from biomedical databases for related genes, such as genes in the same families and genes encoding components of the same pathways. Here we address the extraction of related genes by searching for densely-connected subgraphs, which are modeled as cliques, in a biomedical relational graph. We constructed a graph whose nodes were gene or disease pages, and edges were the hyperlink connections between those pages in the Online Mendelian Inheritance in Man (OMIM) database. We obtained over 20,000 sets of related genes (called 'gene modules') by enumerating cliques computationally. The modules included genes in the same family, genes for proteins that form a complex, and genes for components of the same signaling pathway. The results of experiments using 'metabolic syndrome'-related gene modules show that the gene modules can be used to get a coherent holistic picture helpful for interpreting relations among genes. We presented a data mining approach extracting related genes by enumerating cliques. The extracted gene sets provide a holistic picture useful for comprehending complex disease mechanisms.
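Clique enumeration on a hyperlink graph like the one described above is classically done with the Bron-Kerbosch algorithm. A self-contained sketch on a toy undirected graph; the node names are illustrative, not actual OMIM pages:

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate all maximal cliques. r = clique built so far, p = candidate
    vertices that extend r, x = vertices already processed (tracked so each
    maximal clique is reported exactly once)."""
    if not p and not x:
        cliques.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

def maximal_cliques(edges):
    """Build an adjacency map from an edge list and enumerate cliques."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cliques = []
    bron_kerbosch(set(), set(adj), set(), adj, cliques)
    return sorted(cliques)

# Toy graph: a triangle {A, B, C} plus a pendant edge C-D.
print(maximal_cliques([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]))
# [['A', 'B', 'C'], ['C', 'D']]
```

In the paper's setting, each enumerated clique over gene/disease pages becomes one candidate "gene module".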
CHOmine: an integrated data warehouse for CHO systems biology and modeling
Hanscho, Michael; Ruckerbauer, David E.; Zanghellini, Jürgen; Borth, Nicole
2017-01-01
Abstract The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data are distributed over several databases at different institutions. Research is currently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and providing links to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line-specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. Database URL: http://www.chogenome.org PMID:28605771
Collod-Béroud, G; Béroud, C; Adès, L; Black, C; Boxer, M; Brock, D J; Godfrey, M; Hayward, C; Karttunen, L; Milewicz, D; Peltonen, L; Richards, R I; Wang, M; Junien, C; Boileau, C
1997-01-01
Fibrillin is the major component of extracellular microfibrils. Mutations in the fibrillin gene on chromosome 15 (FBN1) were first described in the heritable connective tissue disorder, Marfan syndrome (MFS). More recently, FBN1 has also been shown to harbor mutations related to a spectrum of conditions phenotypically related to MFS. These mutations are private, essentially missense, generally non-recurrent and widely distributed throughout the gene. To date no clear genotype/phenotype relationship has been observed, except for the localization of neonatal mutations in a cluster between exons 24 and 32. The second version of the computerized Marfan database contains 89 entries. The software has been modified to accommodate new functions and routines. PMID:9016526
Generation of comprehensive thoracic oncology database--tool for translational research.
Surati, Mosmi; Robinson, Matthew; Nandi, Suvobroto; Faoro, Leonardo; Demchuk, Carley; Kanteti, Rajani; Ferguson, Benjamin; Gangadhar, Tara; Hensing, Thomas; Hasina, Rifat; Husain, Aliya; Ferguson, Mark; Karrison, Theodore; Salgia, Ravi
2011-01-22
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using a protocol for prospective tissue banking and another protocol for retrospective banking, tumor and normal tissue samples from patients consented to these protocols were collected. Clinical information such as demographics, cancer characterization, and treatment plans for these patients were abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
DOT National Transportation Integrated Search
2013-04-01
Connected Vehicle to Vehicle (V2V) safety applications heavily rely on the BSM, which is one of the messages defined in the Society of Automotive standard J2735, Dedicated Short Range Communications (DSRC) Message Set Dictionary, November 2009. The B...
ERIC Educational Resources Information Center
Dealy, Jacqueline
1994-01-01
Offers instructions and resources for Internet novices wanting to access Internet services. Instructions are offered for connecting to 13 education listservs, 9 electronic journals and newsletters, 3 education databases, 7 Telnet gopher sites, Veronica and Archie search tools, and File Transfer Protocol (FTP). (Contains 16 references.) (SLW)
Teaching Software Componentization: A Bar Chart Java Bean
ERIC Educational Resources Information Center
Mitri, Michel
2010-01-01
In the current object-oriented paradigm, software construction increasingly involves creating and utilizing "software components". These components can serve a variety of functions, from common algorithmic processes to database connectivity to graphical interfaces. The advantage of component architectures is that programmers can use pre-existing…
Charbonnier, Amandine; Knapp, Jenny; Demonmerot, Florent; Bresson-Hadni, Solange; Raoul, Francis; Grenouillet, Frédéric; Millon, Laurence; Vuitton, Dominique Angèle; Damy, Sylvie
2014-01-01
Alveolar echinococcosis (AE) is an endemic zoonosis in France due to the cestode Echinococcus multilocularis. The French National Reference Centre for Alveolar Echinococcosis (CNR-EA), connected to the FrancEchino network, is responsible for recording all AE cases diagnosed in France. Administrative, epidemiological and medical information on the French AE cases may currently be considered exhaustive only at the time of diagnosis. To constitute a reference data set, an information system (IS) was developed using a relational database management system (MySQL). The current data set will evolve towards a dynamic surveillance system, including follow-up data (e.g. imaging, serology), and will be connected to environmental and parasitological data on E. multilocularis to better understand the pathogen transmission pathway. A particularly important goal is the possible interoperability of the IS with similar European and other databases abroad; this new IS could play a supporting role in the creation of new AE registries. © A. Charbonnier et al., published by EDP Sciences, 2014.
A new data management system for the French National Registry of human alveolar echinococcosis cases
Charbonnier, Amandine; Knapp, Jenny; Demonmerot, Florent; Bresson-Hadni, Solange; Raoul, Francis; Grenouillet, Frédéric; Millon, Laurence; Vuitton, Dominique Angèle; Damy, Sylvie
2014-01-01
Alveolar echinococcosis (AE) is an endemic zoonosis in France due to the cestode Echinococcus multilocularis. The French National Reference Centre for Alveolar Echinococcosis (CNR-EA), connected to the FrancEchino network, is responsible for recording all AE cases diagnosed in France. Administrative, epidemiological and medical information on the French AE cases may currently be considered exhaustive only at the time of diagnosis. To constitute a reference data set, an information system (IS) was developed using a relational database management system (MySQL). The current data set will evolve towards a dynamic surveillance system, including follow-up data (e.g. imaging, serology), and will be connected to environmental and parasitological data on E. multilocularis to better understand the pathogen transmission pathway. A particularly important goal is the possible interoperability of the IS with similar European and other databases abroad; this new IS could play a supporting role in the creation of new AE registries. PMID:25526544
Wide-area-distributed storage system for a multimedia database
NASA Astrophysics Data System (ADS)
Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro
1998-12-01
We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in a node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
The CMS dataset bookkeeping service
NASA Astrophysics Data System (ADS)
Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.
2008-07-01
The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels, and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command line interface, and a Discovery web page. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS, with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
Chemotext: A Publicly Available Web Server for Mining Drug-Target-Disease Relationships in PubMed.
Capuzzi, Stephen J; Thornton, Thomas E; Liu, Kammy; Baker, Nancy; Lam, Wai In; O'Banion, Colin P; Muratov, Eugene N; Pozefsky, Diane; Tropsha, Alexander
2018-02-26
Elucidation of the mechanistic relationships between drugs, their targets, and diseases is at the core of modern drug discovery research. Thousands of studies relevant to the drug-target-disease (DTD) triangle have been published and annotated in the Medline/PubMed database. Mining this database affords rapid identification of all published studies that confirm connections between vertices of this triangle or enable new inferences of such connections. To this end, we describe the development of Chemotext, a publicly available Web server that mines the entire compendium of published literature in PubMed annotated by Medline Subject Heading (MeSH) terms. The goal of Chemotext is to identify all known DTD relationships and infer missing links between vertices of the DTD triangle. As a proof of concept, we show that Chemotext could be instrumental in generating new drug repurposing hypotheses or annotating clinical outcomes pathways for known drugs. The Chemotext Web server is freely available at http://chemotext.mml.unc.edu.
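The link inference described above (a drug and a disease connected through a shared target, without a direct co-annotation) is essentially Swanson-style ABC inference over term co-occurrences. A toy sketch of that idea; all drug, target and disease names are made up for illustration:

```python
def infer_links(drug_target, target_disease, known_drug_disease):
    """Propose drug-disease pairs that share an intermediate target but are
    not yet directly co-annotated (the 'missing link' in the DTD triangle)."""
    inferred = set()
    for drug, target in drug_target:
        for t2, disease in target_disease:
            if target == t2 and (drug, disease) not in known_drug_disease:
                inferred.add((drug, disease))
    return sorted(inferred)

# Made-up co-occurrence pairs: drugX hits targetA, targetA is tied to
# diseaseZ, and no direct drugX-diseaseZ annotation exists yet.
links = infer_links(
    drug_target={("drugX", "targetA"), ("drugY", "targetB")},
    target_disease={("targetA", "diseaseZ"), ("targetB", "diseaseW")},
    known_drug_disease={("drugY", "diseaseW")},
)
# links == [('drugX', 'diseaseZ')]  -> a drug repurposing hypothesis
```

A real system would weight such candidate links by co-occurrence counts and publication dates rather than treat them as boolean edges.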
A Diffusion MRI Tractography Connectome of the Mouse Brain and Comparison with Neuronal Tracer Data
Calabrese, Evan; Badea, Alexandra; Cofer, Gary; Qi, Yi; Johnson, G. Allan
2015-01-01
Interest in structural brain connectivity has grown with the understanding that abnormal neural connections may play a role in neurologic and psychiatric diseases. Small animal connectivity mapping techniques are particularly important for identifying aberrant connectivity in disease models. Diffusion magnetic resonance imaging tractography can provide nondestructive, 3D, brain-wide connectivity maps, but has historically been limited by low spatial resolution, low signal-to-noise ratio, and the difficulty in estimating multiple fiber orientations within a single image voxel. Small animal diffusion tractography can be substantially improved through the combination of ex vivo MRI with exogenous contrast agents, advanced diffusion acquisition and reconstruction techniques, and probabilistic fiber tracking. Here, we present a comprehensive, probabilistic tractography connectome of the mouse brain at microscopic resolution, and a comparison of these data with a neuronal tracer-based connectivity data from the Allen Brain Atlas. This work serves as a reference database for future tractography studies in the mouse brain, and demonstrates the fundamental differences between tractography and neuronal tracer data. PMID:26048951
Route Sanitizer: Connected Vehicle Trajectory De-Identification Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Jason M; Ferber, Aaron E
Route Sanitizer is ORNL's connected vehicle moving object database de-identification tool and a graphical user interface to ORNL's connected vehicle de-identification algorithm. It uses the Google Chrome (soon to be Electron) platform so it will run on different computing platforms. The basic de-identification strategy is record redaction: portions of a vehicle trajectory (e.g. sequences of precise temporal-spatial records) are removed. It does not alter retained records. The algorithm uses custom techniques to find areas within trajectories that may be considered private, then it suppresses those in addition to enough of the trajectory surrounding those locations to protect against "inference attacks" in a mathematically sound way. Map data is integrated into the process to make this possible.
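The record-redaction strategy described here (drop every trajectory point within a buffer of a sensitive location, pass all other points through unchanged) can be sketched as below. This is a simplified illustration, not ORNL's actual algorithm: the planar distance, point format and buffer size are all assumptions:

```python
from math import hypot

def redact(trajectory, sensitive, buffer_dist):
    """Record redaction: remove every (x, y) point within buffer_dist of any
    sensitive location. Retained points are never perturbed, matching the
    'does not alter retained records' property."""
    def near(p):
        return any(hypot(p[0] - s[0], p[1] - s[1]) <= buffer_dist
                   for s in sensitive)
    return [p for p in trajectory if not near(p)]

# A straight-line track; the trip origin (0, 0) is treated as private,
# so it and the point within the 1.5-unit buffer around it are dropped.
track = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
kept = redact(track, sensitive=[(0, 0)], buffer_dist=1.5)
# kept == [(2, 0), (3, 0), (4, 0)]
```

The real tool additionally sizes the suppressed region using map data so that the removed span itself cannot be inverted by an inference attack.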
Cheng, Wei; Rolls, Edmund T; Zhang, Jie; Sheng, Wenbo; Ma, Liang; Wan, Lin; Luo, Qiang; Feng, Jianfeng
2017-03-01
A powerful new method, Knowledge-based functional connectivity Enrichment Analysis (KEA), is described for interpreting resting-state functional connectivity, using circuits that are functionally identified with search terms in the Neurosynth database. The method derives its power by focusing on neural circuits, sets of brain regions that share a common biological function, instead of trying to interpret single functional connectivity links. This provides a novel way of investigating how task- or function-related networks show resting-state functional connectivity differences in different psychiatric states, provides a new way to bridge the gap between task and resting-state functional networks, and potentially helps to identify brain networks that might be treated. The method was applied to interpreting functional connectivity differences in autism. Functional connectivity decreases at the network circuit level in 394 patients with autism compared with 473 controls were found in networks involving the orbitofrontal cortex, anterior cingulate cortex, middle temporal gyrus cortex, and the precuneus, networks that are implicated in the sense of self, face processing, and theory of mind. The decreases were correlated with symptom severity. Copyright © 2017. Published by Elsevier Inc.
Son, Seong-Jin; Kim, Jonghoon; Park, Hyunjin
2017-01-01
Regional volume atrophy and functional degeneration are key imaging hallmarks of Alzheimer's disease (AD) in structural and functional magnetic resonance imaging (MRI), respectively. We jointly explored regional volume atrophy and functional connectivity to better characterize neuroimaging data of AD and mild cognitive impairment (MCI). All data were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We compared regional volume atrophy and functional connectivity in 10 subcortical regions using structural MRI and resting-state functional MRI (rs-fMRI). Neuroimaging data of normal controls (NC) (n = 35), MCI (n = 40), and AD (n = 30) were compared. Significant differences of regional volumes and functional connectivity measures between groups were assessed using permutation tests in 10 regions. The regional volume atrophy and functional connectivity of identified regions were used as features for the random forest classifier to distinguish among three groups. The features of the identified regions were also regarded as connectional fingerprints that could distinctively separate a given group from the others. We identified a few regions with distinctive regional atrophy and functional connectivity patterns for NC, MCI, and AD groups. A three label classifier using the information of regional volume atrophy and functional connectivity of identified regions achieved classification accuracy of 53.33% to distinguish among NC, MCI, and AD. We identified distinctive regional atrophy and functional connectivity patterns that could be regarded as a connectional fingerprint.
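The group comparisons described above rest on permutation tests: the observed difference in a regional measure between groups is compared against differences obtained after repeatedly shuffling the group labels. A minimal sketch with synthetic data; the values are illustrative, not ADNI measurements:

```python
import random

def permutation_p(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign group labels at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Synthetic 'regional volumes' for two clearly separated groups:
controls = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3, 4.2]
patients = [3.2, 3.4, 3.1, 3.3, 3.5, 3.2, 3.0, 3.3]
p = permutation_p(controls, patients)  # very small p-value
```

Running one such test per region per measure, with correction for the number of regions, yields the significant regions whose values then serve as classifier features.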
Son, Seong-Jin; Kim, Jonghoon
2017-01-01
Regional volume atrophy and functional degeneration are key imaging hallmarks of Alzheimer’s disease (AD) in structural and functional magnetic resonance imaging (MRI), respectively. We jointly explored regional volume atrophy and functional connectivity to better characterize neuroimaging data of AD and mild cognitive impairment (MCI). All data were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We compared regional volume atrophy and functional connectivity in 10 subcortical regions using structural MRI and resting-state functional MRI (rs-fMRI). Neuroimaging data of normal controls (NC) (n = 35), MCI (n = 40), and AD (n = 30) were compared. Significant differences of regional volumes and functional connectivity measures between groups were assessed using permutation tests in 10 regions. The regional volume atrophy and functional connectivity of identified regions were used as features for the random forest classifier to distinguish among three groups. The features of the identified regions were also regarded as connectional fingerprints that could distinctively separate a given group from the others. We identified a few regions with distinctive regional atrophy and functional connectivity patterns for NC, MCI, and AD groups. A three label classifier using the information of regional volume atrophy and functional connectivity of identified regions achieved classification accuracy of 53.33% to distinguish among NC, MCI, and AD. We identified distinctive regional atrophy and functional connectivity patterns that could be regarded as a connectional fingerprint. PMID:28333946
Modeling and Databases for Teaching Petrology
NASA Astrophysics Data System (ADS)
Asher, P.; Dutrow, B.
2003-12-01
With the widespread availability of high-speed computers with massive storage and the ready transport capability of large amounts of data, computational and petrologic modeling and the use of databases provide new tools with which to teach petrology. Modeling can be used to gain insights into a system, predict system behavior, describe a system's processes, compare with a natural system, or simply to illustrate. These approaches rest on data-driven (empirical), analytical or numerical models, or on the concurrent examination of multiple lines of evidence. At the same time, use of models can enhance core foundations of the geosciences by improving critical thinking skills and by reinforcing prior knowledge. However, the use of modeling to teach petrology is dictated by the level of expectation we have for students and their facility with modeling approaches. For example, do we expect students to push buttons and navigate a program, to understand the conceptual model, and/or to evaluate the results of a model? Whatever the desired level of sophistication, specific elements of design should be incorporated into a modeling exercise for effective teaching. These include, but are not limited to: use of the scientific method, use of prior knowledge, a clear statement of purpose and goals, attainable goals, a connection to the natural/actual system, a demonstration that complex heterogeneous natural systems are amenable to analysis by these techniques and, ideally, connections to other disciplines and the larger Earth system. Databases offer another avenue with which to explore petrology. Large datasets are available that allow integration of multiple lines of evidence to attack a petrologic problem or understand a petrologic process. These are collected into a database that offers a tool for exploring, organizing and analyzing the data. For example, datasets may be geochemical, mineralogic, experimental and/or visual in nature, covering global, regional and local scales.
These datasets provide students with access to large amounts of related data through space and time. Goals of the database working group include educating earth scientists about information systems in general, about the importance of metadata, about ways of using databases and datasets as educational tools, and about the availability of existing datasets and databases. The modeling and database working groups hope to create additional petrologic teaching tools using these aspects and invite the community to contribute to the effort.
New Catalog of Resources Enables Paleogeosciences Research
NASA Astrophysics Data System (ADS)
Lingo, R. C.; Horlick, K. A.; Anderson, D. M.
2014-12-01
The 21st century promises a new era for scientists of all disciplines: the age in which cyberinfrastructure enables research and education and fuels discovery. EarthCube is a working community of over 2,500 scientists and students from many Earth science disciplines who are looking to build bridges between disciplines. The EarthCube initiative will create a digital infrastructure that connects databases, software, and repositories. A catalog of resources (databases, software, repositories) has been produced by the Research Coordination Network for Paleogeosciences to improve the discoverability of resources. The catalog is currently made available within the larger-scope CINERGI geosciences portal (http://hydro10.sdsc.edu/geoportal/catalog/main/home.page). Other distribution points and web services are planned, using linked data, content services for the web, and XML descriptions that can be harvested using metadata protocols. The databases provide searchable interfaces for finding data sets that would otherwise remain dark data, hidden in drawers and on personal computers. The software will be described in catalog entries so that just one click leads users to methods and analytical tools of which many geoscientists were unaware. The repositories listed in the Paleogeosciences Catalog contain physical samples found all across the globe, from natural history museums to the basements of university buildings. The catalog currently holds over 250 databases, 300 software systems, and 200 repositories, and will grow in the coming year. When completed, geoscientists across the world will be connected into a productive workflow for managing, sharing, and exploring geoscience data and information that expedites collaboration and innovation within the paleogeosciences, potentially bringing about new interdisciplinary discoveries.
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Kiraz, Mehmet Sabir; Erdogan, Hakan; Savas, Erkay
2015-12-01
In this paper, we introduce a new biometric verification and template protection system, which we call THRIVE. The system includes novel enrollment and authentication protocols based on threshold homomorphic encryption, where a private key is shared between a user and a verifier. In the THRIVE system, only encrypted binary biometric templates are stored in the database, and verification is performed via homomorphically randomized templates; thus, the original templates are never revealed during authentication. Due to the underlying threshold homomorphic encryption scheme, a malicious database owner cannot fully decrypt the encrypted templates of the users in the database. In addition, the security of the THRIVE system is enhanced by a two-factor authentication scheme involving the user's private key and biometric data. Using simulation-based techniques, the proposed system is proven secure in the malicious model. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form but needs to prove her identity using biometrics. The system can be used with any biometric modality where a feature extraction method yields a fixed-size binary template and a query template is verified when its Hamming distance to the database template is less than a threshold. The overall connection time for the proposed THRIVE system is estimated at 336 ms on average for 256-bit biometric templates on a desktop PC with quad-core 3.2 GHz CPUs and a 10 Mbit/s up/down link. Consequently, the proposed system can be used efficiently in real-life applications.
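As an illustrative sketch (not the THRIVE protocol itself, which operates on encrypted templates), the plaintext decision rule the abstract describes can be written as a Hamming-distance threshold check on fixed-size binary templates. The threshold value and bit positions below are arbitrary assumptions:

```python
import random

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two equal-length binary templates."""
    return bin(a ^ b).count("1")

def verify(enrolled: int, query: int, threshold: int) -> bool:
    """Accept iff the templates differ in fewer than `threshold` bits."""
    return hamming_distance(enrolled, query) < threshold

# 256-bit templates represented as integers
random.seed(0)
enrolled = random.getrandbits(256)
noisy = enrolled ^ (1 << 5) ^ (1 << 77) ^ (1 << 200)  # same user, 3 bit errors
print(verify(enrolled, noisy, threshold=26))          # True: close match
```

In THRIVE, this same comparison is carried out homomorphically so the verifier never sees `enrolled` or `query` in the clear.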
Developing Modern Information Systems and Services: Africa's Challenges for the Future.
ERIC Educational Resources Information Center
Chowdhury, G. G.
1996-01-01
Discusses the current state of information systems and services in Africa, examines future possibilities, and suggests areas for improvement. Topics include the lack of automation; CD-ROM databases for accessibility to information sources; developing low-cost electronic communication facilities; Internet connectivity; dependence on imported…
New Hampshire: The Automated Information System.
ERIC Educational Resources Information Center
Wiggin, Kendall F.
1996-01-01
Reviews statewide multitype library automation and connectivity initiatives in New Hampshire. Topics include an information system incorporating a union catalog, interlibrary loan, electronic mail, CD-ROM databases, and Internet access; state government information on the World Wide Web through the state library; OCLC FirstSearch access; and a…
MaizeGDB - Past, present, and future
USDA-ARS?s Scientific Manuscript database
The Maize Genetics and Genomics Database (MaizeGDB) turns 20 this year. This editorial outlines MaizeGDB's history and connection to the Maize Genetics Cooperation, describes key components of how the MaizeGDB interface will be completely redesigned over the course of the next two years to meet cur...
Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations
NASA Astrophysics Data System (ADS)
Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth
2015-04-01
The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971 and was largely populated with content from the IAVCEI Catalog of Active Volcanoes of the World and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and the USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW to a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured the structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and to record dated events, rather than undated characteristics, that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail.
We now store and connect synonyms and feature names in a more consistent manner, which will allow for morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as that feature. One especially significant audit involved re-evaluating the categories of evidence used to include a volcano in the Holocene list, and reviewing in detail the entries in low-certainty categories. Concurrently, we developed a new data entry system that may in the future allow trusted users outside of Smithsonian to input data into VOTW. A redesigned website now provides new search tools and data download options. We are collaborating with organizations that manage volcano and eruption databases, physical sample databases, and geochemical databases to allow real-time connections and complex queries. VOTW serves the volcanological community by providing a clear and consistent core database of distinctly identified volcanoes and eruptions to advance goals in research, civil defense, and public outreach.
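The volcano → eruption → episode → event hierarchy described above, with per-row reference links, can be sketched as a toy relational schema. All table and column names here are illustrative assumptions, not the actual VOTW schema, and the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volcano (
    volcano_number INTEGER PRIMARY KEY,  -- unique, permanent identifier
    name TEXT NOT NULL
);
CREATE TABLE eruption (
    eruption_id INTEGER PRIMARY KEY,
    volcano_number INTEGER NOT NULL REFERENCES volcano(volcano_number),
    reference_id INTEGER                 -- source attribution at this level
);
CREATE TABLE episode (                   -- activity distinguished in time/space
    episode_id INTEGER PRIMARY KEY,
    eruption_id INTEGER NOT NULL REFERENCES eruption(eruption_id),
    start_date TEXT,
    end_date TEXT
);
CREATE TABLE event (                     -- dated occurrences within an episode
    event_id INTEGER PRIMARY KEY,
    episode_id INTEGER NOT NULL REFERENCES episode(episode_id),
    event_type TEXT,
    event_date TEXT
);
""")

# Invented sample data to show the chain of keys
conn.execute("INSERT INTO volcano VALUES (100001, 'Example Volcano')")
conn.execute("INSERT INTO eruption VALUES (1, 100001, NULL)")
conn.execute("INSERT INTO episode VALUES (1, 1, '1883-05-20', '1883-10-21')")
conn.execute("INSERT INTO event VALUES (1, 1, 'explosion', '1883-08-27')")

row = conn.execute("""
    SELECT v.name, ev.event_type
    FROM event ev
    JOIN episode ep ON ev.episode_id = ep.episode_id
    JOIN eruption er ON ep.eruption_id = er.eruption_id
    JOIN volcano v ON er.volcano_number = v.volcano_number
""").fetchone()
print(row)  # ('Example Volcano', 'explosion')
```

The point of the design is that an event is always reachable from its permanent volcano number through the episode and eruption keys, so activity can be attributed at whichever level of detail the sources support.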
A georeferenced Landsat digital database for forest insect-damage assessment
NASA Technical Reports Server (NTRS)
Williams, D. L.; Nelson, R. F.; Dottavio, C. L.
1985-01-01
In 1869, the gypsy moth caterpillar was introduced in the U.S. in connection with the experiments of a French scientist. Throughout the insect's period of establishment, gypsy moth populations have periodically increased to epidemic proportions. For programs concerned with preventing the insect's spread, it would be highly desirable to be able to employ a survey technique which could provide timely, accurate, and standardized assessments at a reasonable cost. A project was, therefore, initiated with the aim to demonstrate the usefulness of satellite remotely sensed data for monitoring the insect defoliation of hardwood forests in Pennsylvania. A major effort within this project involved the development of a map-registered Landsat digital database. A complete description of the database developed is provided along with information regarding the employed data management system.
Development and operations of the astrophysics data system
NASA Technical Reports Server (NTRS)
Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)
2005-01-01
Abstract service:
- Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites.
- Modified loading scripts to accommodate changes in data format (PhyS).
- Discussed data deliveries with providers to clear up problems with format or other errors (EGU).
- Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library.
- Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals).
- Implemented linking of ADS bibliographic records with multimedia files.
- Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings.
- Wrote a procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.
Drug-Path: a database for drug-induced pathways.
Zeng, Hui; Qiu, Chengxiang; Cui, Qinghua
2015-01-01
Some databases of drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarrays and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore contribute to the study of drug mechanisms and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, which was generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize, and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite. The web site was implemented using Django, a Python web framework. We believe this database will be useful for related research. © The Author(s) 2015. Published by Oxford University Press.
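The abstract names KEGG pathway enrichment analysis; the usual statistic behind such an analysis is a hypergeometric tail probability, sketched below (the exact statistic Drug-Path uses is an assumption here). It asks: if n drug-regulated genes are drawn from N annotated genes of which K belong to a pathway, how surprising is it to see k or more pathway genes among them?

```python
from math import comb

def enrichment_p(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k) for X ~ Hypergeometric(N, K, n): the chance that at least
    k of the n regulated genes land in a K-member pathway by luck alone."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 300 regulated genes out of 20,000, 12 hits in a 150-gene pathway
p = enrichment_p(N=20000, K=150, n=300, k=12)
print(f"{p:.3g}")  # a small p suggests the pathway is enriched
```

A genuine pipeline would compute this for every KEGG pathway, separately for up- and down-regulated gene sets, and correct for multiple testing.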
Student Service Members/Veterans in Higher Education: A Systematic Review
ERIC Educational Resources Information Center
Barry, Adam E.; Whiteman, Shawn D.; MacDermid Wadsworth, Shelley
2014-01-01
We systematically reviewed the data-based peer-reviewed research examining student service members/veterans (SSM/V) in higher education. Compared to civilian peers, SSM/V exhibit disproportionately higher rates of health risk behaviors and psychological symptoms, and personal and educational adjustment difficulties (i.e., inability to connect with…
Granular Security in a Graph Database
2016-03-01
have a presence in more than one layer. For example, a single social media user may have an account in Twitter, Facebook, and Instagram with... Instagram layers. This restriction reflects the reality that user A’s Facebook account cannot connect directly to user B’s Twitter account. A security
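The layer restriction in this fragment can be sketched as a small multilayer graph that rejects cross-layer edges. The class and account names are invented for illustration and are not from the cited report:

```python
class LayeredGraph:
    """Accounts live in named layers; edges may only join same-layer nodes."""

    def __init__(self):
        self.layer_of = {}   # account -> layer name
        self.edges = set()

    def add_account(self, account: str, layer: str) -> None:
        self.layer_of[account] = layer

    def connect(self, a: str, b: str) -> None:
        # Enforce the restriction: A's Facebook account cannot link
        # directly to B's Twitter account.
        if self.layer_of[a] != self.layer_of[b]:
            raise ValueError("cross-layer edge forbidden")
        self.edges.add(frozenset((a, b)))

g = LayeredGraph()
g.add_account("A@facebook", "Facebook")
g.add_account("B@facebook", "Facebook")
g.add_account("B@twitter", "Twitter")
g.connect("A@facebook", "B@facebook")      # allowed: same layer
try:
    g.connect("A@facebook", "B@twitter")   # rejected: crosses layers
except ValueError as exc:
    print(exc)                             # cross-layer edge forbidden
```

The same user appearing in several layers would simply hold one node per layer, with any inter-layer coupling modeled outside the plain edge set.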
Implementing Smart School Technology at the Secondary Level.
ERIC Educational Resources Information Center
Stallard, Charles K.
This paper describes the characteristics of "smart schools" and offers guidelines for developing such schools. Smart schools are defined as having three features: (1) they are computer networked via local area networks in order to share information through teleconferencing, databases, and electronic mail; (2) they are connected beyond…
DATA FOR ENVIRONMENTAL MODELING (D4EM): BACKGROUND AND EXAMPLE APPLICATIONS OF DATA AUTOMATION
Data is a basic requirement for most modeling applications. Collecting data is expensive and time consuming. High speed internet connections and growing databases of online environmental data go a long way to overcoming issues of data scarcity. Among the obstacles still remaining...
Lin, Ying-Chi; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng; Tung, Chun-Wei
2013-01-01
The unique geographic features of Taiwan account for the island's rich indigenous and endemic plant species. These plants serve as a resourceful bank of biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was constructed. The database, TIPdb, is composed of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan in a standardized format. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be used to filter records of interest by botanical name, plant part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for searches of anticancer, antiplatelet, and antituberculosis compounds. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs. PMID:23766708
Building a highly available and intrusion tolerant Database Security and Protection System (DSPS).
Cai, Liang; Yang, Xiao-Hu; Dong, Jin-Xiang
2003-01-01
The Database Security and Protection System (DSPS) is a security platform for fighting malicious DBMSs. Security and performance are critical to DSPS. The authors suggest a key management scheme that combines a server-group structure, to improve availability, with the key distribution structure needed by proactive security. This paper details the implementation of proactive security in DSPS. After a thorough performance analysis, the authors conclude that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller as the number of concurrent connections increases, and that proactive security is very useful and practical for large, critical applications.
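Key distribution schemes of the kind the abstract mentions are typically built on (k, n) threshold secret sharing; proactive variants additionally refresh the shares periodically without changing the secret. The paper's exact scheme is not given, so as an illustrative assumption here is a minimal Shamir split over a prime field:

```python
import random

P = 2**127 - 1  # prime field modulus

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the degree-(k-1) polynomial evaluated at x = i
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = split(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```

With shares spread across a server group, fewer than k compromised servers learn nothing about the key, which is the availability/security trade-off the abstract alludes to.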
Software and database for the analysis of mutations in the human FBN1 gene.
Collod, G; Béroud, C; Soussi, T; Junien, C; Boileau, C
1996-01-01
Fibrillin is the major component of extracellular microfibrils. Mutations in the fibrillin gene on chromosome 15 (FBN1) were first described in the heritable connective tissue disorder Marfan syndrome (MFS). More recently, FBN1 has also been shown to harbor mutations related to a spectrum of conditions phenotypically related to MFS, and many mutations will have to be accumulated before genotype/phenotype relationships emerge. To facilitate mutational analysis of the FBN1 gene, a software package along with a computerized database (currently listing 63 entries) has been created. PMID:8594563
The Russian effort in establishing large atomic and molecular databases
NASA Astrophysics Data System (ADS)
Presnyakov, Leonid P.
1998-07-01
Database activities in Russia have developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for a general atom or ion, and ii) numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for calculating level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at these centers are reviewed. Plans for future developments, jointly with international collaborations, are discussed.
Social media based NLP system to find and retrieve ARM data: Concept paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra
Information connectivity and retrieval play a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases in their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter [5], our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
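The core idea, turning a constrained natural-language question into a parameterized SQL query, can be sketched with a toy pattern grammar. The table, columns, and sample values below are invented for illustration; the real ADC schema and NLP pipeline are far richer:

```python
import re
import sqlite3

# Toy stand-in for the measurement database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (site TEXT, variable TEXT, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?, ?)",
                 [("SGP", "temperature", 21.5),
                  ("SGP", "humidity", 0.4),
                  ("NSA", "temperature", -5.0)])

def answer(question: str):
    """Tiny grammar: recognizes 'average <variable> at <site>' and maps it
    to a parameterized SQL query (parameters keep the query injection-safe)."""
    m = re.search(r"average (\w+) at (\w+)", question.lower())
    if not m:
        return "Sorry, I did not understand the question."
    variable, site = m.group(1), m.group(2).upper()
    row = conn.execute(
        "SELECT AVG(value) FROM measurements WHERE variable = ? AND site = ?",
        (variable, site)).fetchone()
    return row[0]

print(answer("What is the average temperature at SGP?"))  # 21.5
```

A production system would replace the regex with a trained NLP model and route questions from the Twitter stream through a queue such as Kafka, but the NL → parameterized-SQL → response shape stays the same.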
Wang, L.; Infante, D.; Esselman, P.; Cooper, A.; Wu, D.; Taylor, W.; Beard, D.; Whelan, G.; Ostroff, A.
2011-01-01
Fisheries management programs, such as the National Fish Habitat Action Plan (NFHAP), urgently need a nationwide spatial framework and database for health assessment and policy development to protect and improve riverine systems. To meet this need, we developed a spatial framework and database using the National Hydrography Dataset Plus (1:100,000 scale; http://www.horizon-systems.com/nhdplus). This framework uses interconfluence river reaches and their local and network catchments as fundamental spatial river units, and a series of ecological and political spatial descriptors as hierarchical structures that allow users to extract or analyze information at spatial scales they define. The database consists of variables describing channel characteristics, network position/connectivity, climate, elevation, gradient, and size. It contains a series of catchment natural and human-induced factors that are known to influence river characteristics. Our framework and database assemble all river reaches and their descriptors in one place for the first time for the conterminous United States. They provide users with the capability of adding data, conducting analyses, developing management scenarios and regulations, and tracking management progress at a variety of spatial scales. This database provides the essential data for achieving the objectives of NFHAP and other management programs. The downloadable beta version is available at http://ec2-184-73-40-15.compute-1.amazonaws.com/nfhap/main/.
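The local vs. network catchment distinction above can be sketched as a small graph computation: each reach has a local catchment area, and its network catchment aggregates everything drained by upstream reaches. The reach IDs and areas below are made up for illustration:

```python
def network_area(reach, upstream, local_area, memo=None):
    """Local catchment area of `reach` plus the areas of all upstream reaches,
    found by recursive traversal of the flow network (memoized)."""
    if memo is None:
        memo = {}
    if reach in memo:
        return memo[reach]
    total = local_area[reach] + sum(
        network_area(u, upstream, local_area, memo)
        for u in upstream.get(reach, []))
    memo[reach] = total
    return total

# Toy network: reaches 1 and 2 flow into 3, which flows into outlet 4.
upstream = {3: [1, 2], 4: [3]}
local_area = {1: 5.0, 2: 7.0, 3: 2.0, 4: 1.0}  # e.g. km^2
print(network_area(4, upstream, local_area))   # 15.0
```

Precomputing this accumulation for every reach is what lets the database serve both reach-scale (local) and watershed-scale (network) descriptors from one structure.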
Release of (and lessons learned from mining) a pioneering large toxicogenomics database.
Sandhu, Komal S; Veeramachaneni, Vamsi; Yao, Xiang; Nie, Alex; Lord, Peter; Amaratunga, Dhammika; McMillian, Michael K; Verheyen, Geert R
2015-07-01
We release the Janssen Toxicogenomics database. This rat liver gene-expression database was generated using Codelink microarrays and has been used over the past years within Janssen to derive signatures for multiple end points and to classify proprietary compounds. The release consists of gene-expression responses to 124 compounds, selected to give broad coverage of liver-active compounds. A selection of the compounds was also analyzed on Affymetrix microarrays. The release includes the results of an in-house reannotation pipeline to Entrez gene annotations, which classifies probes into different confidence classes. High-confidence, unambiguously annotated probes were used to create gene-level data, which served as the starting point for cross-platform comparisons. Connectivity-map-based similarity methods show excellent agreement between Codelink and Affymetrix runs of the same samples. We also compared our dataset with the Japanese Toxicogenomics Project and observed reasonable agreement, especially for compounds with stronger gene signatures. We describe an R package containing the gene-level data and show how it can be used for expression-based similarity searches. Comparing the same biological samples run on the Affymetrix and Codelink platforms, good correspondence is observed using connectivity mapping approaches. As expected, this correspondence is smaller when the data are compared with an independent dataset such as TG-GATEs. We hope that this collection of gene-expression profiles will be incorporated into the toxicogenomics pipelines of users.
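Connectivity-map-style similarity searches of the kind described above score a query signature (sets of up- and down-regulated genes) against a rank-ordered expression profile. As a rough sketch (the package's actual scoring function is an assumption), a profile matches when the query's up-genes sit near the induced end of the ranking and its down-genes near the repressed end:

```python
def connectivity_score(ranked_genes, up, down):
    """ranked_genes: genes ordered most-induced first.
    Returns mean rank of down-genes minus mean rank of up-genes,
    so a profile matching the signature scores high and a reversed
    (opposing) profile scores negative."""
    rank = {g: i for i, g in enumerate(ranked_genes)}
    mean = lambda genes: sum(rank[g] for g in genes) / len(genes)
    return mean(down) - mean(up)

# Invented profile: g1 most up-regulated, g6 most down-regulated
profile = ["g1", "g2", "g3", "g4", "g5", "g6"]
print(connectivity_score(profile, up={"g1", "g2"}, down={"g5", "g6"}))  # 4.0
```

Published connectivity-map methods use Kolmogorov-Smirnov-style enrichment statistics rather than mean ranks, but the matching-vs-opposing intuition is the same.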
Chao, Edmund YS; Armiger, Robert S; Yoshida, Hiroaki; Lim, Jonathan; Haraguchi, Naoki
2007-01-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the "Virtual Human" reality. This paper presents a broad foundation for a full-featured biomechanical simulator for human musculoskeletal system physiology. This simulation technology unites expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models, including prosthetic implants and fracture fixation devices, and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties, and a library of skeletal joint system functional activities and loading conditions is also available, and it can easily be modified, updated, and expanded. Application software is also available to allow end users to perform biomechanical analyses interactively. Examples using these models and the computational algorithms in a virtual laboratory environment demonstrate the utility of this unique database and simulation technology. This integrated system, model library, and database will impact orthopaedic education, basic research, device development and application, and clinical patient care related to musculoskeletal joint system reconstruction, trauma management, and rehabilitation. PMID:17343764
Christian, Rahila U
2013-07-01
This is a commentary on a Cochrane review, published in the issue of EBCH, first published as: Coren E, Hossain R, Pardo Pardo J, Veras MMS, Chakraborty K, Harris H, Martin AJ. Interventions for promoting re-integration and reducing harmful behaviour and lifestyles in street-connected children and young people. Cochrane Database of Systematic Reviews 2013, Issue 2. Art. No.: CD009823. DOI: 10.1002/14651858.CD009823.pub2. Copyright © 2013 The Cochrane Collaboration. Published by John Wiley & Sons, Ltd.
Goiato, Marcelo Coelho; Pellizzer, Eduardo Piza; da Silva, Emily Vivianne Freitas; Bonatto, Liliane da Rocha; dos Santos, Daniela Micheline
2015-09-01
This systematic review aimed to evaluate whether the internal connection is more efficient than the external connection, and the associated influencing factors. A specific question was formulated according to Population, Intervention, Control, and Outcome (PICO): Is the internal connection more efficient than the external connection from mechanical, biological, and esthetic points of view? An electronic search of the MEDLINE and Web of Knowledge databases was performed by two independent reviewers for relevant studies published in English up to November 2013. The keywords used in the search included a combination of "dental implant" and "internal connection" or "Morse connection" or "external connection." Selected studies were randomized clinical trials, prospective or retrospective studies, and in vitro studies with a clear aim of investigating internal and/or external implant connection use. From an initial screening yield of 674 articles, 64 potentially relevant articles were selected after an evaluation of their titles and abstracts. Full texts of these articles were obtained, with 29 articles fulfilling the inclusion criteria. The Morse taper connection has the best sealing ability. Concerning crestal bone loss, internal connections presented better results than external connections. A limitation of the present study was the absence of randomized clinical trials investigating whether the internal connection is more efficient than the external connection. The external and internal connections have different mechanical, biological, and esthetic characteristics. Although all systems show adequate success rates and effectiveness, crestal bone level maintenance is better around internal connections than around external connections. The Morse taper connection seems to be more efficient concerning biological aspects, allowing lower bacterial leakage and bone loss in single implants, including aesthetic regions.
Additionally, this connection type can be successfully indicated for fixed partial prostheses and overdenture planning, since it exhibits high mechanical stability.
Multimodal connectivity of motor learning-related dorsal premotor cortex.
Hardwick, Robert M; Lesage, Elise; Eickhoff, Claudia R; Clos, Mareike; Fox, Peter; Eickhoff, Simon B
2015-12-01
The dorsal premotor cortex (dPMC) is a key region for motor learning and sensorimotor integration, yet we have limited understanding of its functional interactions with other regions. Previous work has started to examine functional connectivity in several brain areas using resting state functional connectivity (RSFC) and meta-analytical connectivity modelling (MACM). More recently, structural covariance (SC) has been proposed as a technique that may also allow delineation of functional connectivity. Here, we applied these three approaches to provide a comprehensive characterization of functional connectivity with a seed in the left dPMC that a previous meta-analysis of functional neuroimaging studies has identified as playing a key role in motor learning. Using data from two sources (the Rockland sample, containing resting state data and anatomical scans from 132 participants, and the BrainMap database, which contains peak activation foci from over 10,000 experiments), we conducted independent whole-brain functional connectivity mapping analyses of a dPMC seed. RSFC and MACM revealed similar connectivity maps spanning prefrontal, premotor, and parietal regions, while the SC map identified more widespread frontal regions. Analyses indicated a relatively consistent pattern of functional connectivity between RSFC and MACM that was distinct from that identified by SC. Notably, results indicate that the seed is functionally connected to areas involved in visuomotor control and executive functions, suggesting that the dPMC acts as an interface between motor control and cognition. Copyright © 2015 Elsevier Inc. All rights reserved.
An editor for pathway drawing and data visualization in the Biopathways Workbench.
Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar
2009-10-02
Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arise from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and are arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system.
Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.
The Human Brainnetome Atlas: A New Brain Atlas Based on Connectional Architecture.
Fan, Lingzhong; Li, Hai; Zhuo, Junjie; Zhang, Yu; Wang, Jiaojian; Chen, Liangfu; Yang, Zhengyi; Chu, Congying; Xie, Sangma; Laird, Angela R; Fox, Peter T; Eickhoff, Simon B; Yu, Chunshui; Jiang, Tianzi
2016-08-01
The human brain atlases that allow correlating brain anatomy with psychological and cognitive functions are in transition from ex vivo histology-based printed atlases to digital brain maps providing multimodal in vivo information. Many current human brain atlases cover only specific structures, lack fine-grained parcellations, and fail to provide functionally important connectivity information. Using noninvasive multimodal neuroimaging techniques, we designed a connectivity-based parcellation framework that identifies the subdivisions of the entire human brain, revealing the in vivo connectivity architecture. The resulting human Brainnetome Atlas, with 210 cortical and 36 subcortical subregions, provides a fine-grained, cross-validated atlas and contains information on both anatomical and functional connections. Additionally, we further mapped the delineated structures to mental processes by reference to the BrainMap database. It thus provides an objective and stable starting point from which to explore the complex relationships between structure, connectivity, and function, and eventually improves understanding of how the human brain works. The human Brainnetome Atlas will be made freely available for download at http://atlas.brainnetome.org, so that whole brain parcellations, connections, and functional data will be readily available for researchers to use in their investigations into healthy and pathological states. © The Author 2016. Published by Oxford University Press.
Coactivation of cognitive control networks during task switching.
Yin, Shouhang; Deák, Gedeon; Chen, Antao
2018-01-01
The ability to flexibly switch between tasks is considered an important component of cognitive control that involves frontal and parietal cortical areas. The present study was designed to characterize network dynamics across multiple brain regions during task switching. Functional magnetic resonance images (fMRI) were captured during a standard rule-switching task to identify switching-related brain regions. Multiregional psychophysiological interaction (PPI) analysis was used to examine effective connectivity between these regions. During switching trials, behavioral performance declined and activation of a generic cognitive control network increased. Concurrently, task-related connectivity increased within and between cingulo-opercular and fronto-parietal cognitive control networks. Notably, the left inferior frontal junction (IFJ) was most consistently coactivated with the 2 cognitive control networks. Furthermore, switching-dependent effective connectivity was negatively correlated with behavioral switch costs. The strength of effective connectivity between left IFJ and other regions in the networks predicted individual differences in switch costs. Task switching was supported by coactivated connections within cognitive control networks, with left IFJ potentially acting as a key hub between the fronto-parietal and cingulo-opercular networks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Blin, Kai; Medema, Marnix H; Kottmann, Renzo; Lee, Sang Yup; Weber, Tilmann
2017-01-04
Secondary metabolites produced by microorganisms are the main source of bioactive compounds that are in use as antimicrobial and anticancer drugs, fungicides, herbicides and pesticides. In the last decade, the increasing availability of microbial genomes has established genome mining as a very important method for the identification of their biosynthetic gene clusters (BGCs). One of the most popular tools for this task is antiSMASH. However, so far, antiSMASH is limited to de novo computing results for user-submitted genomes and only partially connects these with BGCs from other organisms. Therefore, we developed the antiSMASH database, a simple but highly useful new resource for browsing antiSMASH-annotated BGCs in the 3907 bacterial genomes currently in the database and for performing advanced search queries that combine multiple search criteria. antiSMASH-DB is available at http://antismash-db.secondarymetabolites.org/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
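The "advanced search queries combining multiple search criteria" that this record describes can be sketched with an in-memory SQLite table; the table layout and column names below are assumptions for illustration and are not the real antiSMASH-DB schema:

```python
import sqlite3

# Toy stand-in for an antiSMASH-style BGC table; schema is a hypothetical
# simplification, not the actual antiSMASH database layout.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bgc (cluster_id TEXT, organism TEXT, bgc_type TEXT, length_kb REAL)"
)
conn.executemany("INSERT INTO bgc VALUES (?, ?, ?, ?)", [
    ("BGC1", "Streptomyces coelicolor", "NRPS", 52.0),
    ("BGC2", "Streptomyces coelicolor", "terpene", 21.5),
    ("BGC3", "Bacillus subtilis", "NRPS", 49.3),
])

# An "advanced search" combining two criteria in a single query.
rows = conn.execute(
    "SELECT cluster_id FROM bgc WHERE bgc_type = ? AND length_kb > ?",
    ("NRPS", 50.0),
).fetchall()
print([r[0] for r in rows])  # ['BGC1']
```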
High Performance Descriptive Semantic Analysis of Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
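One of the analyses this record names, connected components over a triplestore, reduces to component discovery on the undirected graph induced by subject/object pairs. A minimal sketch with hand-written toy triples (the real work ran on the Cray XMT over billions of triples; this illustrates only the idea):

```python
from collections import defaultdict, deque

# Toy (subject, predicate, object) triples; in a real triplestore these
# would be loaded from RDF data, not written by hand.
triples = [
    ("a", "knows", "b"),
    ("b", "knows", "c"),
    ("x", "cites", "y"),
]

def connected_components(triples):
    """Components of the undirected graph induced by subject/object pairs."""
    adj = defaultdict(set)
    for s, _, o in triples:
        adj[s].add(o)
        adj[o].add(s)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        comp, queue = set(), deque([node])
        while queue:  # breadth-first search from an unvisited node
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

print(sorted(len(c) for c in connected_components(triples)))  # [2, 3]
```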
Extracting patterns of database and software usage from the bioinformatics literature
Duck, Geraint; Nenadic, Goran; Brass, Andy; Robertson, David L.; Stevens, Robert
2014-01-01
Motivation: As a natural consequence of being a computer-based discipline, bioinformatics has a strong focus on database and software development, but the volume and variety of resources are growing at unprecedented rates. An audit of database and software usage patterns could help provide an overview of developments in bioinformatics and community common practice, and comparing the links between resources through time could demonstrate both the persistence of existing software and the emergence of new tools. Results: We study the connections between bioinformatics resources and construct networks of database and software usage patterns, based on resource co-occurrence, that correspond to snapshots of common practice in the bioinformatics community. We apply our approach to pairings of phylogenetics software reported in the literature and argue that these could provide a stepping stone into the identification of scientific best practice. Availability and implementation: The extracted resource data, the scripts used for network generation and the resulting networks are available at http://bionerds.sourceforge.net/networks/ Contact: robert.stevens@manchester.ac.uk PMID:25161253
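The co-occurrence networks described in this record weight a pair of resources by how many papers mention both. A minimal sketch of that construction (the paper names are hypothetical examples, not actual extractions from the literature):

```python
from itertools import combinations
from collections import Counter

# Resources mentioned per paper; illustrative sets, not real mined data.
papers = [
    {"BLAST", "ClustalW", "PhyML"},
    {"BLAST", "ClustalW"},
    {"BLAST", "RAxML"},
]

def cooccurrence_edges(papers):
    """Weight each resource pair by the number of papers mentioning both."""
    counts = Counter()
    for resources in papers:
        for pair in combinations(sorted(resources), 2):
            counts[pair] += 1
    return counts

edges = cooccurrence_edges(papers)
print(edges[("BLAST", "ClustalW")])  # 2
```

Snapshots of such networks at different publication dates would show tools persisting or fading, as the abstract suggests.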
NASA Astrophysics Data System (ADS)
Protsyuk, Yu.; Pinigin, G.; Shulga, A.
2005-06-01
Results of the development and organization of the digital database of the Nikolaev Astronomical Observatory (NAO) are presented. At present, three telescopes are connected to the local area network of NAO. All the data obtained, and results of data processing, are entered into the common database of NAO. The daily average volume of new astronomical information obtained from the CCD instruments ranges from 300 MB up to 2 GB, depending on the purposes and conditions of observations. The overwhelming majority of the data are stored in the FITS format. Development and further improvement of storage standards, procedures of data handling and data processing are being carried out. It is planned to create an astronomical web portal with the possibility of interactive access to databases and telescopes. In the future, this resource may become a part of an international virtual observatory. Prototype search tools using PHP and MySQL have been developed, and efforts are being made to obtain additional links to the Internet.
Montague, Elizabeth; Stanberry, Larissa; Higdon, Roger; Janko, Imre; Lee, Elaine; Anderson, Nathaniel; Choiniere, John; Stewart, Elizabeth; Yandl, Gregory; Broomall, William; Kolker, Natali
2014-01-01
Multi-omics data-driven scientific discovery crucially rests on high-throughput technologies and data sharing. Currently, data are scattered across single omics repositories, stored in varying raw and processed formats, and are often accompanied by limited or no metadata. The Multi-Omics Profiling Expression Database (MOPED, http://moped.proteinspire.org) version 2.5 is a freely accessible multi-omics expression database. Continual improvement and expansion of MOPED is driven by feedback from the Life Sciences Community. In order to meet the emergent need for an integrated multi-omics data resource, MOPED 2.5 now includes gene relative expression data in addition to protein absolute and relative expression data from over 250 large-scale experiments. To facilitate accurate integration of experiments and increase reproducibility, MOPED provides extensive metadata through the Data-Enabled Life Sciences Alliance (DELSA Global, http://delsaglobal.org) metadata checklist. MOPED 2.5 has greatly increased the number of proteomics absolute and relative expression records to over 500,000, in addition to adding more than four million transcriptomics relative expression records. MOPED has an intuitive user interface with tabs for querying different types of omics expression data and new tools for data visualization. Summary information including expression data, pathway mappings, and direct connection between proteins and genes can be viewed on Protein and Gene Details pages. These connections in MOPED provide a context for multi-omics expression data exploration. Researchers are encouraged to submit omics data which will be consistently processed into expression summaries. MOPED as a multi-omics data resource is a pivotal public database, interdisciplinary knowledge resource, and platform for multi-omics understanding. PMID:24910945
Science.gov: gateway to government science information.
Fitzpatrick, Roberta Bronson
2010-01-01
Science.gov is a portal to more than 40 scientific databases and 200 million pages of science information via a single query. It connects users to science information and research results from the U.S. government. This column will provide readers with an overview of the resource, as well as basic search hints.
Electronic Library and Other Technology "Connects" Anchorage Students.
ERIC Educational Resources Information Center
Davis, E. E. (Gene); Scott, Marilynn S.
1986-01-01
The Anchorage, Alaska, School District is dealing with the problem of teaching students about the "information age" through a unique program in their central library system. It was one of the first school districts in the nation to computerize its library and to provide access to computer databases to the students through telephones as…
Cognitive Functioning and the Probability of Falls among Seniors in Havana, Cuba
ERIC Educational Resources Information Center
Trujillo, Antonio J.; Hyder, Adnan A.; Steinhardt, Laura C.
2011-01-01
This article explores the connection between cognitive functioning and falls among seniors (greater than or equal to 60 years of age) in Havana, Cuba, after controlling for observable characteristics. Using the SABE (Salud, Bienestar, and Envejecimiento) cross-sectional database, we used an econometric strategy that takes advantage of available…
Distributing stand inventory data and maps over a wide area network
Thomas E. Burk
2000-01-01
High-speed networks connecting multiple levels of management are becoming commonplace among forest resources organizations. Such networks can be used to deliver timely spatial and aspatial data relevant to the management of stands to field personnel. A network infrastructure allows maintenance of cost-effective, centralized databases with the potential for updating by...
2015-03-25
lime glass, the polyhedron-center atoms are all silicon and each silicon atom is surrounded by four oxygen atoms (while each oxygen atom is connected...of metallic force-field functions (in the pure metallic environment) within the force-field function database used in the present work. Consequently...
Local Places, Global Connections: Libraries in the Digital Age. What's Going On Series.
ERIC Educational Resources Information Center
Benton Foundation, Washington, DC.
Libraries have long been pivotal community institutions--public spaces where people can come together to learn, reflect, and interact. Today, information is rapidly spreading beyond books and journals to digital government archives, business databases, electronic sound and image collections, and the flow of electronic impulses over computer…
Impact of Commercial Search Engines and International Databases on Engineering Teaching and Research
ERIC Educational Resources Information Center
Chanson, Hubert
2007-01-01
For the last three decades, the engineering higher education and professional environments have been completely transformed by the "electronic/digital information revolution" that has included the introduction of personal computer, the development of email and world wide web, and broadband Internet connections at home. Herein the writer compares…
ERIC Educational Resources Information Center
Trautmann, Nancy; Fee, Jennifer; Kahler, Phil
2012-01-01
What bird species live in your area? Which migrate and which stay year-round? How do bird populations change over time? Citizen science provides the essential tools to address these questions and more. With ever-growing databases such as Project FeederWatch and eBird, students can connect with people around the world as they make observations,…
Cognitive Affordances of the Cyberinfrastructure for Science and Math Learning
ERIC Educational Resources Information Center
Martinez, Michael E.; Peters Burton, Erin E.
2011-01-01
The "cyberinfrastructure" is a broad informational network that entails connections to real-time data sensors as well as tools that permit visualization and other forms of analysis, and that facilitates access to vast scientific databases. This multifaceted network, already a major boon to scientific discovery, now shows exceptional promise in…
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-12-01
To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Convergent case study mixed methods design. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. © Health Research and Educational Trust.
NASA Astrophysics Data System (ADS)
Munkhbaatar, B.; Lee, J.
2015-10-01
National land information system (NLIS) is an essential part of the Mongolian land reform. NLIS is a web-based, centralized system that covers administration of the cadastral database across the country among land departments. The current, ongoing NLIS implementation is vital to improving the cadastral system in Mongolia. This study is intended to define existing problems in the current Mongolian cadastral system and to propose administrative, institutional, and systematic implementation through NLIS. Once NLIS launches with the proposed model of a comprehensive cadastral system, it will not only support economic and sustainable development but also improve citizens' satisfaction and reduce the burden of bureaucracy. It will further help prevent land conflicts, especially in the metropolitan area, and support the collection of land taxes and fees. After the establishment of NLIS, it is advisable to connect it to other state administrative organizations or institutions that maintain relevant database systems. Such connections will facilitate a smooth and productive workflow and will offer more reliable and valuable information through systemic integration with NLIS.
The CMS dataset bookkeeping service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afaq, Anzar,; /Fermilab; Dolgert, Andrew
2007-10-01
The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command line, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.
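The "loose federation of databases" idea in this record, where catalog entries migrate between local and global service levels, can be sketched in a few lines; the class, method, and dataset names below are illustrative assumptions, not the real CMS DBS API:

```python
# Toy model of moving catalog entries between DBS-like service levels.
# Names and structure are hypothetical, chosen only to illustrate the
# migration concept from the abstract.
class Catalog:
    def __init__(self, level):
        self.level = level      # e.g. "local" or "global"
        self.datasets = {}      # dataset path -> metadata dict

    def register(self, name, provenance):
        self.datasets[name] = {"provenance": provenance}

def migrate(name, source, target):
    """Copy a dataset entry from one service level to another."""
    target.datasets[name] = dict(source.datasets[name])

local = Catalog("local")
global_cat = Catalog("global")
local.register("/MC/run1/RECO", provenance="GEN-SIM step 2")
migrate("/MC/run1/RECO", local, global_cat)
print("/MC/run1/RECO" in global_cat.datasets)  # True
```

In the real system such migrations move entries between per-site and global Oracle/MySQL backends rather than in-memory dictionaries.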
Visualizing and Understanding Socio-Environmental Dynamics in Baltimore
NASA Astrophysics Data System (ADS)
Zaitchik, B. F.; Omeara, K.; Guikema, S.; Scott, A.; Bessho, A.; Logan, T. M.
2015-12-01
The City of Baltimore, like any city, is the sum of its component neighborhoods, institutions, businesses, cultures, and, ultimately, its people. It is also an organism in its own right, with distinct geography, history, infrastructure, and environments that shape its residents even as it is shaped by them. Sometimes these interactions are obvious but often they are not; while basic economic patterns are widely documented, the distribution of socio-spatial and environmental connections often hides below the surface, as does the potential that those connections hold. Here we present results of a collaborative initiative on the geography, design, and policy of socio-environmental dynamics of Baltimore. Geospatial data derived from satellite imagery, demographic databases, social media feeds, infrastructure plans, and in situ environmental networks, among other sources, are applied to generate an interactive portrait of Baltimore City's social, health, and well-being dynamics. The layering of data serves as a platform for visualizing the interconnectedness of the City and as a database for modeling risk interactions, vulnerabilities, and strengths within and between communities. This presentation will provide an overview of project findings and highlight linkages to education and policy.
Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K
2013-01-01
Objective. To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. Data Source/Study Setting. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Study Design. Convergent case study mixed methods design. Data Collection/Extraction Methods. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Principal Findings. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Conclusions. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. PMID:24279836
The BRENDA enzyme information system-From a database to an expert system.
Schomburg, I; Jeske, L; Ulbrich, M; Placzek, S; Chang, A; Schomburg, D
2017-11-10
Enzymes, representing the largest and by far most complex group of proteins, play an essential role in all processes of life, including metabolism, gene expression, cell division, the immune system, and others. Their function, also connected to most diseases or stress control makes them interesting targets for research and applications in biotechnology, medical treatments, or diagnosis. Their functional parameters and other properties are collected, integrated, and made available to the scientific community in the BRaunschweig ENzyme DAtabase (BRENDA). In the last 30 years BRENDA has developed into one of the most highly used biological databases worldwide. The data contents, the process of data acquisition, data integration and control, the ways to access the data, and visualizations provided by the website are described and discussed. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues
NASA Astrophysics Data System (ADS)
Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen
2014-06-01
Over the past years a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established that now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium we are working on common user tools to simplify access for new users and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS will be presented with a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies, and improved documentation. Fit files are accessible for download, and queries to other databases are possible.
Structural and functional connectivity of the precuneus and thalamus to the default mode network.
Cunningham, Samantha I; Tomasi, Dardo; Volkow, Nora D
2017-02-01
Neuroimaging studies have identified functional interactions between the thalamus, precuneus, and default mode network (DMN) in studies of consciousness. However, less is known about the structural connectivity of the precuneus and thalamus to regions within the DMN. We used diffusion tensor imaging (DTI) to parcellate the precuneus and thalamus based on their probabilistic white matter connectivity to each other and DMN regions of interest (ROIs) in 37 healthy subjects from the Human Connectome Database. We further assessed resting-state functional connectivity (RSFC) among the precuneus, thalamus, and DMN ROIs. The precuneus was found to have the greatest structural connectivity with the thalamus, where connection fractional anisotropy (FA) increased with age. The precuneus also showed significant structural connectivity to the hippocampus and middle pre-frontal cortex, but minimal connectivity to the angular gyrus and midcingulate cortex. In contrast, the precuneus exhibited significant RSFC with the thalamus and the strongest RSFC with the AG. Significant symmetrical structural connectivity was found between the thalamus and hippocampus, mPFC, sFG, and precuneus that followed known thalamocortical pathways, while thalamic RSFC was strongest with the precuneus and hippocampus. Overall, these findings reveal high levels of structural and functional connectivity linking the thalamus, precuneus, and DMN. Differences between structural and functional connectivity (such as between the precuneus and AG) may be interpreted to reflect dynamic shifts in RSFC for cortical hub-regions involved with consciousness, but could also reflect the limitations of DTI to detect superficial white matter tracts that connect cortico-cortical regions. Hum Brain Mapp 38:938-956, 2017. © 2016 Wiley Periodicals, Inc. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Sport, how people choose it: A network analysis approach.
Ferreri, Luca; Ivaldi, Marco; Daolio, Fabio; Giacobini, Mario; Rainoldi, Alberto; Tomassini, Marco
2015-01-01
In order to investigate how athletes choose sports, we analyse data from part of the We-Sport database, a vertical social network that links athletes through sports. In particular, we explore connections between people sharing common sports and the role of age and gender by applying network-science approaches and methods. The results show a disassortative tendency of athletes in choosing sports, a negative correlation between age and the number of chosen sports, and a positive correlation between the ages of connected athletes. Some interesting patterns of connection between age classes are depicted. In addition, we propose a method to classify sports based on analysis of the behaviour of the people practising them. Thanks to this new classification, we highlight links between classes of sports and their unexpected features, and we note some gender-dependent affinities in the choice of sport classes.
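The age correlation between connected athletes reported in this record is a Pearson correlation computed over the endpoints of network edges. A minimal stdlib sketch on invented data (the names, ages, and links are hypothetical, not We-Sport records):

```python
from statistics import mean

# Hypothetical athletes (age) and "shared sport" links between them.
ages = {"ann": 25, "bob": 27, "cai": 48, "dee": 51}
edges = [("ann", "bob"), ("cai", "dee"), ("ann", "dee")]

def edge_age_correlation(ages, edges):
    """Pearson correlation between the ages at the two ends of each edge,
    counting each undirected edge once in each direction for symmetry."""
    xs = [ages[u] for u, v in edges] + [ages[v] for u, v in edges]
    ys = [ages[v] for u, v in edges] + [ages[u] for u, v in edges]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = edge_age_correlation(ages, edges)
print(round(r, 3))  # positive: similar-aged athletes tend to be linked
```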
An improved design method for EPC middleware
NASA Astrophysics Data System (ADS)
Lou, Guohuan; Xu, Ran; Yang, Chunming
2014-04-01
To address the problems and difficulties that small and medium enterprises currently face in using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, and based on an analysis of the principles of EPC middleware, an improved design method for EPC middleware is presented. The method exploits the powerful functionality of the MySQL database, using the database to connect reader-writers with the upper application system instead of developing an ALE application programming interface, to achieve middleware with general functionality. The structure is simple and easy to implement and maintain. Under this structure, different types of reader-writers can be added and configured conveniently, improving the expandability of the system.
Integer sequence discovery from small graphs
Hoppe, Travis; Petrone, Anna
2015-01-01
We have exhaustively enumerated all simple, connected graphs of a finite order and have computed a selection of invariants over this set. Integer sequences were constructed from these invariants and checked against the Online Encyclopedia of Integer Sequences (OEIS). In total, 141 new sequences were added and six sequences were extended. From the graph database, we were able to programmatically suggest relationships among the invariants. It will be shown that we can readily visualize any sequence of graphs meeting given criteria. The code has been released as an open-source framework for further analysis, and the database was constructed to be extensible to invariants not considered in this work. PMID:27034526
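As a rough illustration of the exhaustive-enumeration step, the sketch below generates every simple graph on n labeled vertices and counts the connected ones. The study itself enumerates non-isomorphic graphs, which additionally requires isomorphism rejection; the labeled counts produced here correspond to OEIS sequence A001187.

```python
from itertools import combinations, product

def connected(n, edges):
    """Breadth-first reachability check from vertex 0 over the edge set."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in adj[v] - seen:
            seen.add(w); stack.append(w)
    return len(seen) == n

def count_connected_labeled(n):
    """Count connected simple graphs on n labeled vertices by brute force."""
    pairs = list(combinations(range(n), 2))
    total = 0
    for mask in product([0, 1], repeat=len(pairs)):  # every edge subset
        edges = [p for p, bit in zip(pairs, mask) if bit]
        total += connected(n, edges)
    return total

print([count_connected_labeled(n) for n in range(1, 5)])
```

The brute force visits 2^(n(n-1)/2) edge subsets, so it is only feasible for very small orders; serious enumeration uses canonical-form generation instead.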
Design and implementation of a CORBA-based genome mapping system prototype.
Hu, J; Mungall, C; Nicholson, D; Archibald, A L
1998-01-01
CORBA (Common Object Request Broker Architecture), as an open standard, is considered to be a good solution for the development and deployment of applications in distributed heterogeneous environments. This technology can be applied in the bioinformatics area to enhance utilization, management and interoperation between biological resources. This paper investigates issues in developing CORBA applications for genome mapping information systems in the Internet environment with emphasis on database connectivity and graphical user interfaces. The design and implementation of a CORBA prototype for an animal genome mapping database are described. The prototype demonstration is available via: http://www.ri.bbsrc.ac.uk/ark_corba/. jian.hu@bbsrc.ac.uk
Forsell, M; Häggström, M; Johansson, O; Sjögren, P
2008-11-08
To develop a personal digital assistant (PDA) application for oral health assessment fieldwork, including back-office and database systems (MobilDent). System design, construction and implementation of PDA, back-office and database systems. System requirements for MobilDent were collected, analysed and translated into system functions. User interfaces were implemented and the system architecture was outlined. MobilDent was based on a platform with .NET (Microsoft) components, using an SQL Server 2005 (Microsoft) database for data storage with the Windows Mobile (Microsoft) operating system. The PDA devices were Dell Axim. System functions and user interfaces were specified for MobilDent. User interfaces for the PDA, back-office and database systems were based on .NET programming. The PDA user interface was based on Windows and suited to a PDA display, whereas the back-office interface was designed for a normal-sized computer screen. A synchronisation module (MS ActiveSync, Microsoft) was used to enable download of field data from the PDA to the database. MobilDent is a feasible application for oral health assessment fieldwork, and the oral health assessment database may prove a valuable source for care planning, educational and research purposes. Further development of the MobilDent system will include wireless connectivity with download-on-demand technology.
de Medeiros, Rodrigo Antonio; Pellizzer, Eduardo Piza; Vechiato Filho, Aljomar José; Dos Santos, Daniela Micheline; da Silva, Emily Vivianne Freitas; Goiato, Marcelo Coelho
2016-10-01
Different factors can influence marginal bone loss around dental implants, including the type of internal and external connection between the implant and the abutment. The evidence needed to evaluate these factors is unclear. The purpose of this systematic review was to evaluate marginal bone loss by radiographic analysis around dental implants with internal or external connections. A systematic review was conducted following the criteria defined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Initially, a population, intervention, comparison, and outcome(s) (PICO) question was defined: does the connection type (internal or external) influence marginal bone loss in patients undergoing implantation? An electronic search of PubMed/MEDLINE and Scopus databases was performed for studies in English language published between January 2000 and December 2014 by 2 independent reviewers, who analyzed the marginal bone loss of dental implants with an internal and/or external connection. From an initial screening yield of 595 references, and after considering inclusion and exclusion criteria, 17 articles were selected for this review. Among them, 10 studies compared groups of implants with internal and external connections; 1 study evaluated external connections; and 6 studies analyzed internal connections. A total of 2708 implants were placed in 864 patients. Regarding the connection type, 2347 implants had internal connections, and 361 implants had external connections. Most studies showed lower marginal bone loss values for internal connection implants than for external connection implants. Osseointegrated dental implants with internal connections exhibited lower marginal bone loss than implants with external connections. This finding is mainly the result of the platform switching concept, which is more frequently found in implants with internal connections. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Flexible network reconstruction from relational databases with Cytoscape and CytoSQL
Laukens, Kris; Hollunder, Jens; Dang, Thanh Hai; De Jaeger, Geert; Kuiper, Martin; Witters, Erwin; Verschoren, Alain; Van Leemput, Koenraad
2010-01-01
Background Molecular interaction networks can be efficiently studied using network visualization software such as Cytoscape. The relevant nodes, edges and their attributes can be imported into Cytoscape in various file formats, or directly from external databases through specialized third-party plugins. However, molecular data are often stored in relational databases with their own specific structure, for which dedicated plugins do not exist. Therefore, a more generic solution is presented. Results A new Cytoscape plugin, 'CytoSQL', was developed to connect Cytoscape to any relational database. It allows users to launch SQL ('Structured Query Language') queries from within Cytoscape, with the option to inject node or edge features of an existing network as SQL arguments, and to convert the retrieved data to Cytoscape network components. Supported by a set of case studies, we demonstrate the flexibility and the power of the CytoSQL plugin in converting specific data subsets into meaningful network representations. Conclusions CytoSQL offers a unified approach to let Cytoscape interact with relational databases. Thanks to the power of the SQL syntax, this tool can rapidly generate and enrich networks according to very complex criteria. The plugin is available at http://www.ptools.ua.ac.be/CytoSQL. PMID:20594316
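The core CytoSQL workflow, running an SQL query and converting the retrieved rows into network components, can be sketched as follows. The schema, table, and gene names below are invented for illustration and are not part of the plugin; the query parameter shows the "inject node features as SQL arguments" idea.

```python
import sqlite3

# A toy relational database standing in for a molecular interaction store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE interactions (source TEXT, target TEXT, kind TEXT);
INSERT INTO interactions VALUES
  ('geneA', 'geneB', 'binds'),
  ('geneB', 'geneC', 'activates'),
  ('geneA', 'geneC', 'binds');
""")

def query_to_network(conn, sql, args=()):
    """Convert query rows (source, target, attribute) into edges with attributes."""
    edges = {}
    for src, tgt, kind in conn.execute(sql, args):
        edges[(src, tgt)] = {"interaction": kind}
    return edges

# Injecting an existing node's name as an SQL argument, CytoSQL-style:
net = query_to_network(
    conn,
    "SELECT source, target, kind FROM interactions WHERE source = ?",
    ("geneA",))
print(sorted(net))
```

The same pattern generalizes to arbitrarily complex SQL, which is where the plugin's flexibility comes from: the network layer never needs to know the database schema in advance.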
NASA Astrophysics Data System (ADS)
Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit
2014-05-01
The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This became apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution of landslides, their processes and their characteristics is important to evaluate the potential risk that can arise from mass movements in these areas. Within the framework of two projects, about 400 landslides at the Franconian Alb were mapped and detailed data sets were compiled during the years 2011 to 2014. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (the Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL that provides the possibility to store spatial and geographic objects and to connect to several GIS applications, such as GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe et al. 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the entire (spatial) data content of the database in R.
The inventory of the database includes (amongst others) information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored that represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating values for slope gradients or aspects of landslide areas, and for creating multiple overlaying sections for the comparison of slopes, as well as distances to the infrastructure or to the next receiving drainage. Further queries yield information on landslide magnitudes, distribution and clustering, as well as potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S. 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
The One Universal Graph — a free and open graph database
NASA Astrophysics Data System (ADS)
Ng, Liang S.; Champion, Corbin
2016-02-01
Recent developments in graph databases have mostly been huge projects involving big organizations, big operations and big capital, as the name Big Data attests. We propose the concept of the One Universal Graph (OUG), which states that all observable and known objects and concepts (physical, conceptual or digitally represented) can be connected in one single graph; furthermore, the OUG can be implemented in a very simple text file format with free software, capable of being executed on Android or smaller devices. As such, the One Universal Graph Data Exchange (GOUDEX) modules can potentially be installed on the hundreds of millions of Android devices and Intel-compatible computers shipped annually. Coupled with its open nature and its ability to connect to the leading search engines and databases currently in operation, GOUDEX has the potential to become the largest, and a better, interface for users and programmers to interact with the data on the Internet. With a web user interface that lets users work and program in a native Linux environment, the Free Crowdware implemented in GOUDEX can help inexperienced users learn programming with better-organized documentation for free software, and can manage a programmer's contribution down to a single line of code or a single variable in software projects. It can become the first practically realizable "Internet brain" on which a global artificial intelligence system can be implemented. Being practically free and open, the One Universal Graph can have significant applications in robotics, artificial intelligence and social networks.
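The abstract says the OUG can live in a very simple text file format but does not specify it. Purely as an illustration of how little machinery a text-backed graph needs, the sketch below assumes a hypothetical one-edge-per-line layout (subject, relation, object separated by tabs); this layout is my assumption, not the paper's format.

```python
def parse_graph(text):
    """Parse hypothetical 'subject<TAB>relation<TAB>object' lines into
    an adjacency dict mapping each subject to its (relation, object) pairs."""
    graph = {}
    for line in text.strip().splitlines():
        subj, rel, obj = line.split("\t")
        graph.setdefault(subj, []).append((rel, obj))
    return graph

sample = "Earth\torbits\tSun\nMoon\torbits\tEarth\nEarth\thas_part\tOcean"
g = parse_graph(sample)
print(g["Earth"])
```

A plain line-oriented format like this is trivially mergeable and diffable, which is one plausible reason a crowdsourced universal graph would favor text files over a binary database.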
Grid-enabled measures: using Science 2.0 to standardize measures and share data.
Moser, Richard P; Hesse, Bradford W; Shaikh, Abdul R; Courtney, Paul; Morgan, Glen; Augustson, Erik; Kobrin, Sarah; Levin, Kerry Y; Helba, Cynthia; Garner, David; Dunn, Marsha; Coa, Kisha
2011-05-01
Scientists are taking advantage of the Internet and collaborative web technology to accelerate discovery in a massively connected, participative environment--a phenomenon referred to by some as Science 2.0. As a new way of doing science, this phenomenon has the potential to push science forward in a more efficient manner than was previously possible. The Grid-Enabled Measures (GEM) database has been conceptualized as an instantiation of Science 2.0 principles by the National Cancer Institute (NCI) with two overarching goals: (1) promote the use of standardized measures, which are tied to theoretically based constructs; and (2) facilitate the ability to share harmonized data resulting from the use of standardized measures. The first is accomplished by creating an online venue where a virtual community of researchers can collaborate and come to consensus on measures by rating, commenting on, and viewing meta-data about the measures and associated constructs. The second is accomplished by connecting the constructs and measures to an ontological framework with data standards and common data elements such as the NCI Enterprise Vocabulary System (EVS) and the cancer Data Standards Repository (caDSR). This paper describes the Web 2.0 principles on which the GEM database is based, describes its functionality, and discusses some of the important issues involved with creating the GEM database, such as the role of mutually agreed-on ontologies (i.e., knowledge categories and the relationships among these categories) for data sharing. Published by Elsevier Inc.
WAIS Searching of the Current Contents Database
NASA Astrophysics Data System (ADS)
Banholzer, P.; Grabenstein, M. E.
The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies, have recently developed a World Wide Web home page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the home page to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to the present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: retrieve individual journal article records; retrieve the table of contents of specific issues of journals; connect to articles with similar subject terms or keywords; connect to other issues of the same journal in the same year; and browse journal issues from an alphabetical list of indexed journal names.
ERIC Educational Resources Information Center
Star Snyder, Marjorie
2010-01-01
A comprehensive search of multiple databases for references to the connection between families and schools yields a rich representation from family therapy, school counseling, school psychology, and education literature supporting the idea that schools must serve not only students, but students' families as well. One of the common themes emerging…
Urban Mobility and Location-Based Social Networks: Social, Economic and Environmental Incentives
ERIC Educational Resources Information Center
Zhang, Ke
2016-01-01
Location-based social networks (LBSNs) have recently attracted the interest of millions of users, who can now not only connect and interact with their friends--as also happens in traditional online social networks--but can also voluntarily share their whereabouts in real time. A location database is the backbone of a location-based social…
Evaluation of SHEEO's State Policy Resource Connections (SPRC) Initiative. Final Report
ERIC Educational Resources Information Center
Ryherd, Ann Daley
2011-01-01
With the assistance of the Lumina Foundation, the State Higher Education Executive Officers (SHEEO) staff has been working to develop a broad, up-to-date database of policy relevant information for the states and to create analytical studies to help state leaders identify priorities and practices for improving policies and performance across the…
Providing Access to CD-ROM Databases in a Campus Setting. Part II: Networking CD-ROMs via a LAN.
ERIC Educational Resources Information Center
Koren, Judy
1992-01-01
The second part of a report on CD-ROM networking in libraries describes LAN (local area network) technology; networking software and towers; gateway software for connecting to campuswide networks; Macintosh LANs; and network licenses. Several product and software reviews are included, and a sidebar lists vendor addresses. (NRP)
RSS Made Easy with Engaged Patrons and Yahoo! Pipes
ERIC Educational Resources Information Center
Widner, Melissa
2010-01-01
Jasper County Public Library in Indiana had started using EngagedPatrons.org (EP) in 2007. EP is a low-cost technology solution providing online events calendars, RSS feeds, database support, and other web services to connect libraries to their users. It was created in 2006 by Glenn Peterson, who also designs the feature-rich websites for…
VIEWDATA--Interactive Television, with Particular Emphasis on the British Post Office's PRESTEL.
ERIC Educational Resources Information Center
Rimmer, Tony
An overview of "Viewdata," an interactive medium that connects the home or business television set with a central computer database through telephone lines, is presented in this paper. It notes how Viewdata differs from broadcast Teletext systems and reviews the technical aspects of the two media to clarify terminology used in the…
"LinkedIn" for Accounting and Business Students
ERIC Educational Resources Information Center
Albrecht, W. David
2011-01-01
LinkedIn is a social media application that every accounting and business student should join and use. LinkedIn is a database of 90,000,000 business professionals that enables each member to connect and interact with their business associates. Five reasons are offered for why accounting students should join LinkedIn, followed by 11 hints for its use.
Dajani, Dina R; Uddin, Lucina Q
2016-01-01
There is a general consensus that autism spectrum disorder (ASD) is accompanied by alterations in brain connectivity. Much of the neuroimaging work has focused on assessing long-range connectivity disruptions in ASD. However, evidence from both animal models and postmortem examination of the human brain suggests that local connections may also be disrupted in individuals with the disorder. Here, we investigated how regional homogeneity (ReHo), a measure of similarity of a voxel's timeseries to its nearest neighbors, varies across age in individuals with ASD and typically developing (TD) individuals using a cross-sectional design. Resting-state fMRI data obtained from a publicly available database were analyzed to determine group differences in ReHo between three age cohorts: children, adolescents, and adults. In typical development, ReHo across the entire brain was higher in children than in adolescents and adults. In contrast, children with ASD exhibited marginally lower ReHo than TD children, while adolescents and adults with ASD exhibited similar levels of local connectivity as age-matched neurotypical individuals. During all developmental stages, individuals with ASD exhibited lower local connectivity in sensory processing brain regions and higher local connectivity in complex information processing regions. Further, higher local connectivity in ASD corresponded to more severe ASD symptomatology. These results demonstrate that local connectivity is disrupted in ASD across development, with the most pronounced differences occurring in childhood. Developmental changes in ReHo do not mirror findings from fMRI studies of long-range connectivity in ASD, pointing to a need for more nuanced accounts of brain connectivity alterations in the disorder. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
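ReHo is commonly computed as Kendall's coefficient of concordance (W) over the time series of a voxel and its nearest neighbors (Zang et al., 2004). The following is a generic sketch of that statistic, not the preprocessing pipeline used in the study above; the neighborhood size of 27 voxels and the synthetic data are illustrative assumptions.

```python
import numpy as np

def kendalls_w(ts):
    """Kendall's coefficient of concordance for ts of shape (m, n):
    m time series (a voxel plus its neighbors), n time points.
    Assumes continuous data with no rank ties."""
    m, n = ts.shape
    # Rank each time series over time (1..n) via double argsort.
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1
    col_sums = ranks.sum(axis=0)               # rank sum at each time point
    s = ((col_sums - col_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))  # W in [0, 1]

rng = np.random.default_rng(0)
base = rng.standard_normal(40)
coherent = base + 0.1 * rng.standard_normal((27, 40))   # similar neighbors
incoherent = rng.standard_normal((27, 40))              # unrelated neighbors
print(kendalls_w(coherent) > kendalls_w(incoherent))
```

W approaches 1 when all neighboring time series rise and fall together, which is why higher ReHo is read as stronger local connectivity.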
DNAAlignEditor: DNA alignment editor tool
Sanchez-Villeda, Hector; Schroeder, Steven; Flint-Garcia, Sherry; Guill, Katherine E; Yamasaki, Masanori; McMullen, Michael D
2008-01-01
Background With advances in DNA re-sequencing methods and Next-Generation parallel sequencing approaches, there has been a large increase in genomic efforts to define and analyze the sequence variability present among individuals within a species. For very polymorphic species such as maize, this has led to a need for intuitive, user-friendly software that aids the biologist, often with only naïve programming capability, in tracking, editing, displaying, and exporting multiple individual sequence alignments. To fill this need we have developed a novel DNA alignment editor. Results We have generated a nucleotide sequence alignment editor (DNAAlignEditor) that provides an intuitive, user-friendly interface for manual editing of multiple sequence alignments with functions for input, editing, and output of sequence alignments. The color-coding of nucleotide identity and the display of associated quality scores aid in the manual alignment editing process. DNAAlignEditor works as a client/server tool having two main components: a relational database that collects the processed alignments and a user interface connected to the database through universal data access connectivity drivers. DNAAlignEditor can be used either as a stand-alone application or as a network application with multiple users concurrently connected. Conclusion We anticipate that this software will be of general interest to biologists and population geneticists in editing DNA sequence alignments and analyzing natural sequence variation regardless of species, and will be particularly useful for manual alignment editing of sequences in species with high levels of polymorphism. PMID:18366684
IMPACT web portal: oncology database integrating molecular profiles with actionable therapeutics.
Hintzsche, Jennifer D; Yoo, Minjae; Kim, Jihye; Amato, Carol M; Robinson, William A; Tan, Aik Choon
2018-04-20
With the advancement of next generation sequencing technology, researchers are now able to identify important variants and structural changes in DNA and RNA in cancer patient samples. With this information, we can now correlate specific variants and/or structural changes with actionable therapeutics known to inhibit these variants. We introduce the creation of the IMPACT Web Portal, a new online resource that connects molecular profiles of tumors to approved drugs, investigational therapeutics and pharmacogenetics-associated drugs. IMPACT Web Portal contains a total of 776 drugs connected to 1326 target genes and 435 target variants, fusions, and copy number alterations. The online IMPACT Web Portal allows users to search for various genetic alterations and connects them to three levels of actionable therapeutics. The results are categorized into three levels: Level 1 contains approved drugs separated into two groups; Level 1A contains approved drugs with variant-specific information, while Level 1B contains approved drugs with gene-level information. Level 2 contains drugs currently in oncology clinical trials. Level 3 provides pharmacogenetic associations between approved drugs and genes. IMPACT Web Portal allows sequencing data to be linked to actionable therapeutics for translational and drug repurposing research. The IMPACT Web Portal online resource allows users to match genes and variants to approved and investigational drugs. We envision that this resource will be a valuable database for personalized medicine and drug repurposing. IMPACT Web Portal is freely available for non-commercial use at http://tanlab.ucdenver.edu/IMPACT.
Global synthesis suggests that food web connectance correlates to invasion resistance.
Smith-Ramesh, Lauren M; Moore, Alexandria C; Schmitz, Oswald J
2017-02-01
Biological invasions are a key component of global change, and understanding the drivers of global invasion patterns will aid in assessing and mitigating the impact of invasive species. While invasive species are most often studied in the context of one or two trophic levels, in reality species invade communities comprised of complex food webs. The complexity and integrity of the native food web may be a more important determinant of invasion success than the strength of interactions between a small subset of species within a larger food web. Previous efforts to understand the relationship between food web properties and species invasions have been primarily theoretical and have yielded mixed results. Here, we present a synthesis of empirical information on food web connectance and species invasion success gathered from different sources (estimates of food web connectance from the primary literature and estimates of invasion success from the Global Invasive Species Database as well as the primary literature). Our results suggest that higher-connectance food webs tend to host fewer invaders and exert stronger biotic resistance compared to low-connectance webs. We argue that while these correlations cannot be used to infer a causal link between food web connectance and habitat invasibility, the promising findings beg for further empirical research that deliberately tests for relationships between food web connectance and invasion. © 2016 John Wiley & Sons Ltd.
On the Connection of Gamma-Ray Bursts and X-Ray Flashes
NASA Astrophysics Data System (ADS)
Ripa, J.; Meszaros, A.
2017-12-01
Classification of gamma-ray bursts (GRBs) into groups has been intensively studied by various statistical tests since 1998. It has been suggested that, next to the groups of short/hard and long/soft GRBs, there could be another class of intermediate durations. For the Swift/BAT database, Veres et al. 2010 (ApJ, 725, 1955) found that the intermediate-duration bursts might be related to X-ray flashes (XRFs). On the other hand, Ripa and Meszaros 2016 (Ap&SS, 361, 370) and Ripa et al. 2012 (ApJ, 756, 44) found that the intermediate-duration GRBs in the RHESSI database are spectrally too hard to be identified with XRFs. Also, in the BATSE database the intermediate-duration group can be only partly populated by XRFs. The key ideas of the Ripa and Meszaros 2016 (Ap&SS, 361, 370) article are summarized in this poster.
Sleep atlas and multimedia database.
Penzel, T; Kesper, K; Mayer, G; Zulley, J; Peter, J H
2000-01-01
The ENN sleep atlas and database was set up on a dedicated server connected to the internet, providing services such as WWW, FTP and telnet access. The database serves as a platform to promote the goals of the European Neurological Network, to exchange patient cases for second opinion between experts, and to create a case-oriented multimedia sleep atlas with descriptive text, images and video-clips of all known sleep disorders. The sleep atlas consists of a small public part and a large private part for members of the consortium. Twenty patient cases were collected and presented with educational information similar to published case reports. Case reports are complemented with images, video-clips and biosignal recordings. A Java-based viewer for biosignals in EDF format was installed, allowing free navigation within the sleep recordings without the need to download the full recording to the client.
A publication database for optical long baseline interferometry
NASA Astrophysics Data System (ADS)
Malbet, Fabien; Mella, Guillaume; Lawson, Peter; Taillifet, Esther; Lafrasse, Sylvain
2010-07-01
Optical long baseline interferometry is a technique that has generated almost 850 refereed papers to date. The targets span a large variety of objects, from planetary systems to extragalactic studies, and all branches of stellar physics. We have created a database hosted by the JMMC and connected to the Optical Long Baseline Interferometry Newsletter (OLBIN) web site, using MySQL and a collection of XML or PHP scripts, in order to store and classify these publications. Each entry is defined by its ADS bibcode and includes basic ADS information and metadata. The metadata are specified by tags sorted into categories: interferometric facilities, instrumentation, wavelength of operation, spectral resolution, type of measurement, target type, and paper category, for example. The whole OLBIN publication list has been processed, and we present how the database is organized and can be accessed. We use this tool to generate statistical plots of interest for the community in optical long baseline interferometry.
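A tag-based publication database of this kind reduces to a simple relational pattern: one table of papers keyed by ADS bibcode and one table of (bibcode, category, value) tags, which makes the statistical plots mentioned above a matter of GROUP BY queries. The sketch below is a minimal illustration; the tables, tag names, and bibcodes are invented and do not reflect the actual JMMC schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE papers (bibcode TEXT PRIMARY KEY, title TEXT);
CREATE TABLE tags (bibcode TEXT, category TEXT, value TEXT);
INSERT INTO papers VALUES ('2010Bib..exampleA', 'Example paper A'),
                          ('2012Bib..exampleB', 'Example paper B');
INSERT INTO tags VALUES ('2010Bib..exampleA', 'facility', 'VLTI'),
                        ('2010Bib..exampleA', 'target type', 'YSO'),
                        ('2012Bib..exampleB', 'facility', 'CHARA');
""")

# One statistic of the kind the abstract mentions: papers per facility.
rows = conn.execute("""
    SELECT value, COUNT(*) FROM tags
    WHERE category = 'facility'
    GROUP BY value ORDER BY value
""").fetchall()
print(rows)
```

Because tags live in their own table, new categories can be added without schema changes, which suits an evolving community classification.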
Imbalance in subregional connectivity of the right temporoparietal junction in major depression.
Poeppl, Timm B; Müller, Veronika I; Hoffstaedter, Felix; Bzdok, Danilo; Laird, Angela R; Fox, Peter T; Langguth, Berthold; Rupprecht, Rainer; Sorg, Christian; Riedl, Valentin; Goya-Maldonado, Roberto; Gruber, Oliver; Eickhoff, Simon B
2016-08-01
Major depressive disorder (MDD) involves impairment in cognitive and interpersonal functioning. The right temporoparietal junction (RTPJ) is a key brain region subserving cognitive-attentional and social processes. Yet, findings on the involvement of the RTPJ in the pathophysiology of MDD have so far been controversial. Recent connectivity-based parcellation data revealed a topofunctional dualism within the RTPJ, linking its anterior and posterior part (aRTPJ/pRTPJ) to antagonistic brain networks for attentional and social processing, respectively. Comparing functional resting-state connectivity of the aRTPJ and pRTPJ in 72 MDD patients and 76 well-matched healthy controls, we found a seed (aRTPJ/pRTPJ) × diagnosis (MDD/controls) interaction in functional connectivity for eight regions. Employing meta-data from a large-scale neuroimaging database, functional characterization of these regions exhibiting differentially altered connectivity with the aRTPJ/pRTPJ revealed associations with cognitive (dorsolateral prefrontal cortex, parahippocampus) and behavioral (posterior medial frontal cortex) control, visuospatial processing (dorsal visual cortex), reward (subgenual anterior cingulate cortex, medial orbitofrontal cortex, posterior cingulate cortex), as well as memory retrieval and social cognition (precuneus). These findings suggest that an imbalance in connectivity of subregions, rather than disturbed connectivity of the RTPJ as a whole, characterizes the connectional disruption of the RTPJ in MDD. This imbalance may account for key symptoms of MDD in cognitive, emotional, and social domains. Hum Brain Mapp 37:2931-2942, 2016. © 2016 Wiley Periodicals, Inc.
The first 1000 days of the autistic brain: a systematic review of diffusion imaging studies.
Conti, Eugenia; Calderoni, Sara; Marchi, Viviana; Muratori, Filippo; Cioni, Giovanni; Guzzetta, Andrea
2015-01-01
There is overwhelming evidence that autism spectrum disorder (ASD) is related to altered brain connectivity. While these alterations are starting to be well characterized in subjects in whom the clinical picture is fully expressed, less is known about their earlier developmental course. In the present study we systematically reviewed current knowledge on structural connectivity in infants and toddlers with ASD. We searched the PubMed and Medline databases for all English-language papers, published from the year 2000, exploring structural connectivity in populations of infants and toddlers whose mean age was below 30 months. Of the 264 papers extracted, four were found to be eligible and were reviewed. Three of the four selected studies reported higher fractional anisotropy values in subjects with ASD compared to controls within commissural fibers, projection fibers, and association fibers, suggesting brain hyper-connectivity in the earliest phases of the disorder. Similar conclusions emerged from the other diffusion parameters assessed. These findings are the reverse of what is generally found in studies exploring older patient groups and suggest a developmental course characterized by a shift toward hypo-connectivity starting between two and four years of age.
Baeza-Velasco, Carolina; Sinibaldi, Lorenzo; Castori, Marco
2018-02-14
Attention-deficit/hyperactivity disorder (ADHD) and generalized joint hypermobility (GJH) are two separate conditions, assessed and managed by different specialists without overlapping interests. Recently, some researchers highlighted an unexpected association between these two clinical entities. This happens in a scenario of increasing awareness of the protean detrimental effects that congenital anomalies of the connective tissue may have on human health and development. We reviewed the pertinent literature to identify possible connections between ADHD and GJH; special emphasis was put on musculoskeletal pain and syndromic presentations of GJH, particularly the hypermobile Ehlers-Danlos syndrome. A comprehensive search of scientific databases and reference lists was conducted, encompassing publications based on qualitative and quantitative research. Impaired coordination and proprioception, fatigue, chronic pain, and dysautonomia are identified as potential bridges between ADHD and GJH. Based on these findings, a map of the pathophysiological and psychopathological pathways connecting both conditions is proposed. Although ADHD and GJH are traditionally separate human attributes, their association may testify to the dyadic nature of mind-body connections during critical periods of post-natal development. Such a mixed picture has potentially important consequences in terms of disability and deserves more clinical and research attention.
Boosting CNN performance for lung texture classification using connected filtering
NASA Astrophysics Data System (ADS)
Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves
2018-02-01
Infiltrative lung diseases comprise a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which improves the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from the images used by the CNN for lung classification, locally connected filters provide better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a database of 10 patients with various lung pathologies, showing an average increase of 10% in true positive rate with respect to the state-of-the-art cascade of CNNs for this task.
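The vessel-removal idea can be illustrated with a generic connected-component filter: after thresholding, thin vessel-like structures tend to form small connected components that can be suppressed while larger texture patterns survive. This is a simplified stand-in for the paper's multiresolution locally connected filtering, with made-up data:

```python
from collections import deque

def remove_small_components(img, min_size):
    """Keep only 4-connected foreground components of at least min_size pixels.

    Illustrative sketch only: small components (a proxy for thin vascular
    structures) are suppressed, larger patterns are kept.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                # Breadth-first flood fill to collect one component
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

image = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
]
filtered = remove_small_components(image, min_size=3)
print(filtered)  # the two isolated single pixels are removed
```

A production pipeline would apply such filtering at several resolutions before feeding patches to the CNN, as the abstract describes.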
A Codasyl-Type Schema for Natural Language Medical Records
Sager, N.; Tick, L.; Story, G.; Hirschman, L.
1980-01-01
This paper describes a CODASYL (network) database schema for information derived from narrative clinical reports. The goal of this work is to create an automated process that accepts natural language documents as input and maps this information into a database of a type managed by existing database management systems. The schema described here represents the medical events and facts identified through the natural language processing. This processing decomposes each narrative into a set of elementary assertions, represented as MEDFACT records in the database. Each assertion in turn consists of a subject and a predicate classed according to a limited number of medical event types, e.g., signs/symptoms, laboratory tests, etc. The subject and predicate are represented by EVENT records which are owned by the MEDFACT record associated with the assertion. The CODASYL-type network structure was found to be suitable for expressing most of the relations needed to represent the natural language information. However, special mechanisms were developed for storing the time relations between EVENT records and for recording connections (such as causality) between certain MEDFACT records. This schema has been implemented using the UNIVAC DMS-1100 DBMS.
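The owner/member structure the schema describes, with each MEDFACT record (one elementary assertion) owning its subject and predicate EVENT records, can be sketched in modern terms. Field names here are hypothetical; the actual schema also covers time relations and inter-fact connections such as causality:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the CODASYL owner/member relationship:
# a MEDFACT owns the EVENT records for its subject and predicate.

@dataclass
class Event:
    role: str          # 'subject' or 'predicate'
    event_type: str    # e.g. 'sign/symptom', 'laboratory test'
    text: str          # fragment recovered from the narrative

@dataclass
class MedFact:
    fact_id: int
    events: List[Event] = field(default_factory=list)  # owned EVENT records

fact = MedFact(fact_id=1)
fact.events.append(Event("subject", "sign/symptom", "chest pain"))
fact.events.append(Event("predicate", "sign/symptom", "present on admission"))
print(len(fact.events), fact.events[0].event_type)
```

In a true CODASYL network database the owner-member link is a navigable "set" rather than an embedded list, but the containment relationship is the same.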
Databases post-processing in Tensoral
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1994-01-01
The Center for Turbulence Research (CTR) post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, introduced in this document and currently existing in prototype form, is the foundation of this effort. Tensoral provides a convenient and powerful protocol to connect users who wish to analyze fluids databases with the authors who generate them. In this document we introduce Tensoral and its prototype implementation in the form of a user's guide. This guide focuses on the use of Tensoral for post-processing turbulence databases. The corresponding document, the Tensoral "author's guide", which focuses on how authors can make databases available to users via the Tensoral system, is currently unwritten. Section 1 of this user's guide defines Tensoral's basic notions: we explain the class of problems at hand and how Tensoral abstracts them. Section 2 defines Tensoral syntax for mathematical expressions. Section 3 shows how these expressions make up Tensoral statements. Section 4 shows how Tensoral statements and expressions are embedded into other computer languages (such as C or Vectoral) to make Tensoral programs. We conclude with a complete example program.
Measuring use patterns of online journals and databases
De Groote, Sandra L.; Dorsch, Josephine L.
2003-01-01
Purpose: This research sought to determine use of online biomedical journals and databases and to assess current user characteristics associated with the use of online resources in an academic health sciences center. Setting: The Library of the Health Sciences–Peoria is a regional site of the University of Illinois at Chicago (UIC) Library with 350 print journals, more than 4,000 online journals, and multiple online databases. Methodology: A survey was designed to assess online journal use, print journal use, database use, computer literacy levels, and other library user characteristics. A survey was sent through campus mail to all (471) UIC Peoria faculty, residents, and students. Results: Forty-one percent (188) of the surveys were returned. Ninety-eight percent of the students, faculty, and residents reported having convenient access to a computer connected to the Internet. While 53% of the users indicated they searched MEDLINE at least once a week, other databases showed much lower usage. Overall, 71% of respondents indicated a preference for online over print journals when possible. Conclusions: Users prefer online resources to print, and many choose to access these online resources remotely. Convenience and full-text availability appear to play roles in selecting online resources. The findings of this study suggest that databases without links to full text and online journal collections without links from bibliographic databases will have lower use. These findings have implications for collection development, promotion of library resources, and end-user training. PMID:12883574
Chien, Tsair-Wei; Chang, Yu; Wang, Hsien-Yi
2018-02-01
Many researchers have used the National Health Insurance database to publish medical papers, which are often retrospective, population-based cohort studies. However, these authors' research domains and academic characteristics remain unclear. By searching the PubMed database (Pubmed.com) with the keywords [Taiwan] and [National Health Insurance Research Database], we downloaded 2913 articles published from 1995 to 2017. Social network analysis (SNA), the Gini coefficient, and Google Maps were applied to these data to visualize: the most productive author; the pattern of coauthor collaboration teams; and the authors' research domains, denoted by abstract keywords and PubMed MeSH (medical subject heading) terms. From the 2913 papers drawn from Taiwan's National Health Insurance database, we chose the top 10 research teams shown on Google Maps and analyzed one author (Dr. Kao), who published 149 papers in the database in 2015. Over the past 15 years, Dr. Kao had 2987 connections with other coauthors from 13 research teams. The co-occurring abstract keywords with the highest frequency are cohort study and National Health Insurance Research Database. The most coexistent MeSH terms are tomography, X-ray computed, and positron-emission tomography. The strength of the author's distinct research domain is very low (Gini < 0.40). SNA incorporated with Google Maps and the Gini coefficient provides insight into the relationships between entities. The results obtained in this study can be applied to a comprehensive understanding of other productive authors in academia.
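The Gini coefficient used here to measure how concentrated an author's output is across topics or collaborators has a standard closed form. A minimal sketch, with made-up counts for illustration:

```python
def gini(counts):
    """Gini coefficient of non-negative counts: 0 = perfectly even spread,
    values approaching 1 = output concentrated in one category.

    Standard formula on sorted values x_1 <= ... <= x_n:
        G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(round(gini([10, 10, 10, 10]), 3))  # even spread -> 0.0
print(round(gini([0, 0, 0, 40]), 3))     # all in one category -> 0.75
```

Note the maximum for n categories is (n - 1)/n, so a threshold such as Gini < 0.40 indicates output spread fairly evenly across domains, as the abstract concludes.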
Melloy, Patricia G
2015-01-01
A two-part laboratory exercise was developed to enhance classroom instruction on the significance of p53 mutations in cancer development. Students were asked to mine key information from an international database of p53 genetic changes related to cancer, the IARC TP53 database. Using this database, students designed several data mining activities to look at the changes in the p53 gene from a number of perspectives, including potential cancer-causing agents leading to particular changes and the prevalence of certain p53 variations in certain cancers. In addition, students gained a global perspective on cancer prevalence in different parts of the world. Students learned how to use the database in the first part of the exercise, and then used that knowledge to search particular cancers and cancer-causing agents of their choosing in the second part of the exercise. Students also connected the information gathered from the p53 exercise to a previous laboratory exercise looking at risk factors for cancer development. The goal of the experience was to increase student knowledge of the link between p53 genetic variation and cancer. Students also were able to walk a similar path through the website as a cancer researcher using the database to enhance bench work-based experiments with complementary large-scale database p53 variation information. © 2014 The International Union of Biochemistry and Molecular Biology.
NASA Astrophysics Data System (ADS)
Bartolini, S.; Becerril, L.; Martí, J.
2014-11-01
One of the most important issues in modern volcanology is the assessment of volcanic risk, which depends, among other factors, on both the quantity and quality of the available data and an optimal storage mechanism. This requires the design of purpose-built databases that take into account data format and availability, afford easy data storage and sharing, and provide for a more complete risk assessment that combines different analyses while avoiding any duplication of information. Data contained in any such database should facilitate spatial and temporal analysis that will (1) produce probabilistic hazard models for future vent opening, (2) simulate volcanic hazards and (3) assess their socio-economic impact. We describe the design of a new spatial database structure, VERDI (Volcanic managEment Risk Database desIgn), which allows different types of data, including geological, volcanological, meteorological, monitoring and socio-economic information, to be manipulated, organized and managed. A central requirement is that VERDI serve as a tool for connecting different kinds of data sources, GIS platforms and modeling applications. We present an overview of the database design, its components and the attributes that play an important role in the database model. The potential of the VERDI structure and the possibilities it offers for data organization are shown through its application to El Hierro (Canary Islands). The VERDI database will provide scientists and decision makers with a useful tool to assist in conducting volcanic risk assessment and management.
Barron, Peter; Peter, Joanne; Sebidi, Jane; Bekker, Marcha; Allen, Robert; Parsons, Annie Neo; Benjamin, Peter; Pillay, Yogan
2018-01-01
MomConnect is a flagship programme of the South African National Department of Health that has reached over 1.5 million pregnant women. Using mobile technology, MomConnect provides pregnant and postpartum women with twice-weekly health information text messages as well as access to a helpdesk for patient queries and feedback. In just 3 years, MomConnect has been taken to scale to reach over 95% of public health facilities and has reached 63% of all pregnant women attending their first antenatal appointment. The helpdesk has received over 300 000 queries at an average of 250 per day from 6% of MomConnect users. The service is entirely free to its users. The rapid deployment of MomConnect has been facilitated by strong government leadership, and an ecosystem of mobile health implementers who had experience of much of the content and technology required. An early decision to design MomConnect for universal coverage has required the use of text-based technologies (short messaging service and Unstructured Supplementary Service Data) that are accessible via even the most basic mobile phones, but cumbersome to use and costly at scale. Unlike previous mobile messaging services in South Africa, MomConnect collects the user’s identification number and facility code during registration, enabling future linkages with other health and population databases and geolocated feedback. MomConnect has catalysed additional efforts to strengthen South Africa’s digital health architecture. The rapid growth in smartphone penetration presents new opportunities to reduce costs, increase real-time data collection and expand the reach and scope of MomConnect to serve health workers and other patient groups. PMID:29713503
Barron, Peter; Peter, Joanne; LeFevre, Amnesty E; Sebidi, Jane; Bekker, Marcha; Allen, Robert; Parsons, Annie Neo; Benjamin, Peter; Pillay, Yogan
2018-01-01
MomConnect is a flagship programme of the South African National Department of Health that has reached over 1.5 million pregnant women. Using mobile technology, MomConnect provides pregnant and postpartum women with twice-weekly health information text messages as well as access to a helpdesk for patient queries and feedback. In just 3 years, MomConnect has been taken to scale to reach over 95% of public health facilities and has reached 63% of all pregnant women attending their first antenatal appointment. The helpdesk has received over 300 000 queries at an average of 250 per day from 6% of MomConnect users. The service is entirely free to its users. The rapid deployment of MomConnect has been facilitated by strong government leadership, and an ecosystem of mobile health implementers who had experience of much of the content and technology required. An early decision to design MomConnect for universal coverage has required the use of text-based technologies (short messaging service and Unstructured Supplementary Service Data) that are accessible via even the most basic mobile phones, but cumbersome to use and costly at scale. Unlike previous mobile messaging services in South Africa, MomConnect collects the user's identification number and facility code during registration, enabling future linkages with other health and population databases and geolocated feedback. MomConnect has catalysed additional efforts to strengthen South Africa's digital health architecture. The rapid growth in smartphone penetration presents new opportunities to reduce costs, increase real-time data collection and expand the reach and scope of MomConnect to serve health workers and other patient groups.
Mysql Data-Base Applications for Dst-Like Physics Analysis
NASA Astrophysics Data System (ADS)
Tsenov, Roumen
2004-07-01
The data and analysis model developed and used in the HARP experiment for studying hadron production at the CERN Proton Synchrotron is discussed. Emphasis is put on the use of database (DB) back-ends for persistently storing and retrieving "alive" C++ objects encapsulating raw and reconstructed data. The concepts of a "Data Summary Tape" (DST), as a logical collection of DB-persistent data of different types, and of an "intermediate DST" (iDST), as a physical "tag" of the DST, are introduced. The iDST level of persistency allows powerful, DST-level analysis to be performed by applications running on an isolated machine (even a laptop) with no connection to the experiment's main data storage. The implementation of these concepts is discussed.
Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D
2013-06-19
The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.
US Gateway to SIMBAD Astronomical Database
NASA Technical Reports Server (NTRS)
Eichhorn, G.
1998-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. User registration is required by the SIMBAD project in France. Currently, there are almost 3000 US users registered. We also provide user support by answering questions from users and handling requests for lost passwords. We have worked with the CDS SIMBAD project to provide access to the SIMBAD database to US users on an Internet address basis. This will allow most US users to access SIMBAD without having to enter passwords. This new system was installed in August, 1998. The SIMBAD mirror database at SAO is fully operational. We worked with the CDS to adapt it to our computer system. We implemented automatic updating procedures that update the database and password files daily. This mirror database provides much better access to the US astronomical community. We also supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We shipped computer equipment to the meeting and provided support for the demonstration activities at the SIMBAD booth. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative called Urania. It has proven to be a model in how different data centers can collaborate and enhance the value of their products by linking with other data centers.
Library of molecular associations: curating the complex molecular basis of liver diseases.
Buchkremer, Stefan; Hendel, Jasmin; Krupp, Markus; Weinmann, Arndt; Schlamp, Kai; Maass, Thorsten; Staib, Frank; Galle, Peter R; Teufel, Andreas
2010-03-20
Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, systems biology approaches addressing the interactions of molecular networks require comprehensive but also highly validated data. We have therefore generated the first comprehensive database of published molecular associations in human liver diseases. It is based on published PubMed abstracts and is aimed at closing the gap between the genome-wide coverage, but low validity, of microarray data and individual, highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations and may therefore be highly trusted. All data were stored in a publicly available database, the Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Together, this database is the first available database providing a comprehensive view and analysis options for published molecular associations in multiple liver diseases.
ERIC Educational Resources Information Center
Hlavaty, Greg; Townsend, Murphy
2010-01-01
Modern composition instructors often use and teach research methods for Internet search engines and electronic databases. It is not their intent to turn back the clock. However, if they can help students connect the world of Internet searches and the university library, they can promote information literacy in its broadest sense by developing…
ERIC Educational Resources Information Center
Borgman, Christine L.
1996-01-01
Reports on a survey of 70 research libraries in Croatia, Czech Republic, Hungary, Poland, Slovakia, and Slovenia. Results show that libraries are rapidly acquiring automated processing systems, CD-ROM databases, and connections to computer networks. Discusses specific data on system implementation and network services by country and by type of…
2007-09-19
extended object relations such as boundary, interior, open, closed, within, connected, and overlaps, which are invariant under elastic deformation... is required in a geo-spatial semantic web is challenging because the defining properties of geographic entities are very closely related to space. In... Objects under Primitive will be open (i.e., they will not contain their boundary points) and the objects under Complex will be closed. In addition to
Ubiquitous-health (U-Health) monitoring systems for elders and caregivers
NASA Astrophysics Data System (ADS)
Moon, Gyu; Lim, Kyung-won; Yoo, Young-min; An, Hye-min; Lee, Ki Seop; Szu, Harold
2011-06-01
This paper presents two affordable low-tack systems for household biomedical wellness monitoring. The first system, JIKIMI (pronounced caregiver in Korean), is a remote monitoring system that analyzes the behavior patterns of elders who live alone. JIKIMI is composed of an in-house sensing system: a set of wireless sensor nodes containing a pyroelectric infrared sensor to detect the motion of elders, an emergency button, and a magnetic sensor that detects the opening and closing of doors. The system is also equipped with a server system comprising a database and web server. The server provides the mechanism for web-based monitoring by caregivers. The second system, Reader of Bottle Information (ROBI), is an assistant system that tells elders the contents of bottles. ROBI is composed of bottles with attached RFID tags and an advice system consisting of a wireless RFID reader, a gateway and a remote database server. The RFID tags attached to the caps of the bottles are used in conjunction with the advice system. These systems have been in use for three years and have proven useful for caregivers in providing more efficient and effective care services.
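The ROBI flow reduces to a lookup: a tag ID read from a bottle cap is resolved against a remote database to retrieve the bottle's contents and advice. A minimal sketch, with invented table, column and tag names (the real system involves an RFID reader, a gateway and a remote server, none of which are modeled here):

```python
import sqlite3

# In-memory stand-in for the remote database server; schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bottle (tag_id TEXT PRIMARY KEY, contents TEXT, advice TEXT)")
db.execute("INSERT INTO bottle VALUES ('A1B2C3', 'blood pressure medication', "
           "'Take one tablet after breakfast')")

def advise(tag_id):
    """Resolve a scanned RFID tag ID to spoken/displayed advice."""
    row = db.execute("SELECT contents, advice FROM bottle WHERE tag_id = ?",
                     (tag_id,)).fetchone()
    return f"{row[0]}: {row[1]}" if row else "Unknown bottle"

print(advise("A1B2C3"))
print(advise("ZZZZ"))
```

In deployment the reader would push the scanned tag ID through the gateway to the server, which performs exactly this kind of keyed lookup.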
Climate Signals: An On-Line Digital Platform for Mapping Climate Change Impacts in Real Time
NASA Astrophysics Data System (ADS)
Cutting, H.
2016-12-01
Climate Signals is an on-line digital platform for cataloging and mapping the impacts of climate change. The platform specifies and details the chains of connections between greenhouse gas emissions and individual climate events. Currently in open beta release, the platform is designed to engage and serve the general public, news media, and policy-makers, particularly in real time during extreme climate events. Climate Signals consists of a curated relational database of events and their links to climate change, a mapping engine, and a gallery of climate change monitors offering real-time data. For each event in the database, an infographic engine provides a custom attribution "tree" that illustrates the connections to climate change. In addition, key contextual resources are aggregated and curated for each event. All event records are fully annotated with detailed source citations and corresponding hyperlinks. The system of attribution used to link events to climate change in real time is detailed here. Launched in May 2016, this open beta release is offered for public user testing and engagement, and the operation of the platform offers lessons for public engagement in climate change impacts.
NASA Technical Reports Server (NTRS)
Goldgof, Gregory M.
2005-01-01
Distributed systems allow scientists from around the world to plan missions concurrently, while being updated on the revisions of their colleagues in real time. However, permitting multiple clients to simultaneously modify a single data repository can quickly lead to data corruption or inconsistent states between users. Since our message broker, the Java Message Service, does not ensure that messages will be received in the order they were published, we must implement our own numbering scheme to guarantee that changes to mission plans are performed in the correct sequence. Furthermore, distributed architectures must ensure that as new users connect to the system, they synchronize with the database without missing any messages or falling into an inconsistent state. Robust systems must also guarantee that all clients will remain synchronized with the database even in the case of multiple client failure, which can occur at any time due to lost network connections or a user's own system instability. The final design for the distributed system behind the Mars rover mission planning software fulfills all of these requirements and upon completion will be deployed to MER at the end of 2005 as well as Phoenix (2007) and MSL (2009).
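The numbering scheme described, where each published change carries a sequence number and out-of-order messages are buffered until the next expected number arrives, can be sketched generically. Class and method names here are illustrative, not taken from the MER planning software:

```python
import heapq

class OrderedDelivery:
    """Deliver messages in publication order even if received out of order.

    Each change is published with a monotonically increasing sequence
    number; received messages wait in a min-heap and are released only
    once every earlier number has arrived.
    """
    def __init__(self):
        self.expected = 0
        self.buffer = []   # min-heap of (seq, payload)

    def receive(self, seq, payload):
        heapq.heappush(self.buffer, (seq, payload))
        ready = []
        # Release the longest contiguous run starting at the expected number
        while self.buffer and self.buffer[0][0] == self.expected:
            ready.append(heapq.heappop(self.buffer)[1])
            self.expected += 1
        return ready   # changes now safe to apply, in order

d = OrderedDelivery()
print(d.receive(1, "move arm"))    # [] -- still waiting for seq 0
print(d.receive(0, "take image"))  # ['take image', 'move arm']
```

A newly connecting client would additionally fetch a database snapshot plus its current sequence number, then apply only buffered messages above that number, which is how synchronization without gaps or duplicates is achieved.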
On the connection of gamma-ray bursts and X-ray flashes in the BATSE and RHESSI databases
NASA Astrophysics Data System (ADS)
Řípa, J.; Mészáros, A.
2016-12-01
Classification of gamma-ray bursts (GRBs) into groups has been intensively studied with various statistical tests in previous years. It has been suggested that there is a distinct group of GRBs, beyond the long and short ones, with intermediate durations. However, such a group is not yet securely confirmed. Strangely, concerning spectral hardness, observations from the Swift and RHESSI satellites give different results. For the Swift/BAT database it is found that the intermediate-duration bursts might well be related to so-called X-ray flashes (XRFs). On the other hand, for the RHESSI dataset the intermediate-duration bursts seem to be spectrally too hard to be identified with XRFs. The connection between the intermediate-duration bursts and XRFs for the BATSE database is not clear either. The purpose of this article is to check the relation between XRFs and GRBs for the BATSE and RHESSI databases. We use an empirical definition of XRFs introduced by other authors earlier. For the RHESSI database we also use a transformation between the detected counts and the fluences based on the simulated detector response function. The purpose is to compare the hardnesses of GRBs with the definition of XRFs. There is a 1.3-4.2% fraction of XRFs in the whole BATSE database. The vast majority of the BATSE short bursts are not XRFs, because only 0.7-5.7% of the short bursts can be XRFs. However, there is a large uncertainty in the fraction of XRFs among the intermediate-duration bursts: between 1% and 85% of the BATSE intermediate-duration bursts can be related to XRFs. For the long bursts this fraction is between 1.0% and 3.4%. The uncertainties in these fractions are large; however, it can be claimed that not all BATSE intermediate-duration bursts can be XRFs. At least 79% of RHESSI short bursts, at least 53% of RHESSI intermediate-duration bursts, and at least 45% of RHESSI long bursts should not be XRFs.
A simulation of XRFs observed by HETE-2 and Swift has shown that RHESSI would detect, and in fact detected, only one long-duration XRF out of the 26 observed by those two satellites. We arrive at the conclusion that the intermediate-duration bursts in the BATSE database can be partly populated by XRFs, but the RHESSI intermediate-duration bursts are most likely not XRFs. The results claiming that the Swift/BAT intermediate-duration bursts are closely related to XRFs do not hold for the BATSE and RHESSI databases.
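The classification above rests on comparing each burst's hardness against an empirical XRF definition. As a minimal illustration (the fluence bands, the threshold of 1, and the sample values below are invented placeholders, not the paper's actual definition or data), the fraction of XRFs in a sample can be sketched as:

```python
# Hypothetical sketch: estimating the fraction of X-ray flashes (XRFs) in a
# burst sample from a hardness criterion. The softness ratio S_soft/S_hard > 1
# used here and the toy fluence pairs are illustrative assumptions only.

def is_xrf(soft_fluence, hard_fluence, threshold=1.0):
    """Classify a burst as an XRF when its softness ratio exceeds the threshold."""
    return soft_fluence / hard_fluence > threshold

def xrf_fraction(bursts, threshold=1.0):
    """Fraction of bursts (list of (soft, hard) fluence pairs) classified as XRFs."""
    flags = [is_xrf(s, h, threshold) for s, h in bursts]
    return sum(flags) / len(flags)

# Toy sample of (soft, hard) fluence pairs.
sample = [(2.0, 1.0), (0.5, 1.0), (1.5, 1.0), (0.2, 1.0)]
print(xrf_fraction(sample))  # 0.5
```

The quoted fractions (e.g., 1.3-4.2 % for the whole BATSE database) are obtained by applying the authors' actual hardness definition, with the uncertainty ranges reflecting measurement errors on the fluences.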
Fan, Shi-Qi; Li, Sen; Liu, Jin-Ling; Yang, Jiao; Hu, Chao; Zhu, Jun-Ping; Xiao, Xiao-Qin; Liu, Wen-Long; He, Fu-Yuan
2017-01-01
The molecular connectivity index was adopted to explore the characteristics of the supramolecular imprinting template of herbs distributed to the liver meridian, in order to provide a scientific basis for traditional Chinese medicines (TCMs) distributed to the liver meridian. In this paper, with the "12th five-year plan" national planning textbooks Science of Traditional Chinese Medicine and Chemistry of Traditional Chinese Medicine as the blueprint, the literature and the TCMSP sub-databases in TCM pharmacology of Northwest Science and Technology University of Agriculture and Forestry were searched to collect and summarize the active constituents of TCMs distributed to the liver meridian and to calculate their molecular connectivity indices. The average molecular connectivity index of the ingredients distributed to the liver meridian was 9.47, close to those of flavonoid glycosides (9.17±2.11) and terpenes (9.30±3.62). Therefore, it is inferred that the template molecule of the liver meridian is similar in physicochemical properties to flavonoid glycosides and terpenes, which could be best matched with the imprinting template of the liver meridian. Copyright© by the Chinese Pharmaceutical Association.
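The molecular connectivity index referred to above is, in its first-order form, the Randić index: a sum over the bonds of the hydrogen-suppressed molecular graph of 1/sqrt(d_i * d_j), where d is each atom's degree. A minimal sketch (the propane example is illustrative; the averages quoted in the abstract come from full natural-product structures):

```python
# Minimal sketch of the first-order molecular connectivity (Randić) index:
# chi = sum over bonds (i, j) of 1/sqrt(d_i * d_j), with d the atom degree
# in the hydrogen-suppressed graph.
import math

def connectivity_index(bonds):
    """bonds: list of (atom_i, atom_j) pairs in the hydrogen-suppressed graph."""
    degree = {}
    for i, j in bonds:
        degree[i] = degree.get(i, 0) + 1
        degree[j] = degree.get(j, 0) + 1
    return sum(1.0 / math.sqrt(degree[i] * degree[j]) for i, j in bonds)

# Propane carbon skeleton C1-C2-C3: chi = 2/sqrt(1*2) ≈ 1.414
print(round(connectivity_index([(1, 2), (2, 3)]), 3))  # 1.414
```

Larger, more branched skeletons accumulate more terms, which is why the glycosides and terpenes in the study cluster around indices near 9.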
GIS-project: geodynamic globe for global monitoring of geological processes
NASA Astrophysics Data System (ADS)
Ryakhovsky, V.; Rundquist, D.; Gatinsky, Yu.; Chesalova, E.
2003-04-01
A multilayer geodynamic globe at the scale 1:10,000,000 was created at the end of the 1990s in the GIS Center of the Vernadsky Museum. A special soft- and hardware complex with a set of multitarget, object-oriented databases was developed for its visualization. The globe includes separate thematic covers represented by digital sets of spatial geological, geochemical, and geophysical information (maps, schemes, profiles, stratigraphic columns, arranged databases, etc.). At present the largest databases included in the globe program concern petrochemical and isotopic data on magmatic rocks of the World Ocean and large and superlarge mineral deposits. Software by the Environmental Systems Research Institute (ESRI), USA (ARC/INFO 7.0, 8.0), as well as the ArcScan vectorizer, was used for digitizing the covers and adapting the databases. All layers of the geoinformational project were obtained by scanning separate objects and transferring them to real geographic coordinates in an equidistant conic projection. The covers were then projected onto plane degree-system geographic coordinates. Attributive databases were formed for each thematic layer, and in the last stage all covers were combined into a single information system. Separate digital covers represent mathematical descriptions of geological objects and the relations between them, such as the Earth's altimetry, active fault systems, seismicity, etc. Principles of cartographic generalization were taken into consideration during cover compilation, with projection and coordinate systems precisely matched to the given scale. The globe allows us to carry out, in an interactive regime, the formation of mutually coordinated object-oriented databases and the thematic covers directly connected with them. These can cover the whole Earth and near-Earth space, as well as the best-known parts of the divergent and convergent boundaries of the lithosphere plates.
Such covers and time series reflect, in diagram form, the total combination and dynamics of data on the geological structure, geophysical fields, seismicity, geomagnetism, composition of rock complexes, and metallogeny of different areas of the Earth's surface. They provide the possibility to scale, detail, and develop 3D spatial visualization. The information filling the covers can be replenished with new data, both in existing and in newly formed databases. Integrated analysis of the data allows us to refine our ideas on regularities in the development of lithosphere and mantle inhomogeneities using original technologies. It also enables us to work out 3D digital models of the geodynamic development of tectonic zones at convergent and divergent plate boundaries, with the purpose of integrated monitoring of mineral resources and of establishing correlations between seismicity, magmatic activity, and metallogeny in time-spatial coordinates. The created multifold geoinformation system makes it possible to execute an integral analysis of geoinformation flows in an interactive regime and, in particular, to establish regularities in the time-spatial distribution and dynamics of the main structural units of the lithosphere, as well as to illuminate the connection between stages of their development and epochs of large and superlarge mineral deposit formation. We are now trying to use the system for the prediction of large oil and gas concentrations in the main sedimentary basins. The work was supported by RFBR (grants 93-07-14680, 96-07-89499, 99-07-90030, 00-15-98535, 02-07-90140) and MTC.
Chaves, Cristiane Ribeiro; Campbell, Melanie; Côrtes Gama, Ana Cristina
2017-03-01
This study aimed to determine the influence of native language on the auditory-perceptual assessment of voice, as completed by Brazilian and Anglo-Canadian listeners using Brazilian vocal samples and the grade, roughness, breathiness, asthenia, strain (GRBAS) scale. This is an analytical, observational, comparative, and transversal study conducted at the Speech Language Pathology Department of the Federal University of Minas Gerais in Brazil, and at the Communication Sciences and Disorders Department of the University of Alberta in Canada. The GRBAS scale, connected speech, and a sustained vowel were used in this study. The vocal samples were drawn randomly from a database of recorded speech of Brazilian adults, some with healthy voices and some with voice disorders. The database is housed at the Federal University of Minas Gerais. Forty-six samples of connected speech (recitation of the days of the week), produced by 35 women and 11 men, and 46 samples of the sustained vowel /a/, produced by 37 women and 9 men, were used in this study. The listeners were divided into two groups of three speech therapists, according to nationality: Brazilian or Anglo-Canadian. The groups were matched according to the years of professional experience of the participants. The weighted kappa was used to calculate the intra- and inter-rater agreements, with 95% confidence intervals. An analysis of the intra-rater agreement showed that Brazilians and Canadians had similar results in the auditory-perceptual evaluation of the sustained vowel and connected speech. The results of the inter-rater agreement for connected speech and the sustained vowel indicated that Brazilians and Canadians had, respectively, moderate agreement on overall severity (0.57 and 0.50), breathiness (0.45 and 0.45), and asthenia (0.50 and 0.46); poor agreement on roughness (0.19 and 0.007); and, for strain, weak agreement on connected speech (0.22) and moderate agreement on the sustained vowel (0.50).
In general, auditory-perceptual evaluation is not influenced by native language for most dimensions of the perceptual parameters of the GRBAS scale. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
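The agreement statistic used above, weighted kappa, penalizes disagreements by their distance on the ordinal rating scale. A minimal sketch with linear weights (the toy ratings below are invented, not the study's data):

```python
# Sketch of weighted kappa for two raters over ordered categories.
# Disagreement weights: w(i, j) = |i - j| / (k - 1) (linear) or its square
# (quadratic). kappa_w = 1 - sum(w * observed) / sum(w * chance-expected).

def weighted_kappa(r1, r2, categories, weights="linear"):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportions for each (rater1, rater2) category pair.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1 = [sum(row) for row in obs]                              # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater-2 marginals

    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Two raters scoring four voices on a 0-3 GRBAS-style scale (toy data).
print(round(weighted_kappa([0, 1, 2, 1], [0, 2, 2, 1], [0, 1, 2, 3]), 3))
```

Values near 1 indicate strong agreement and values near 0 chance-level agreement, matching the moderate (≈0.5) and poor (≈0.2) figures reported in the abstract.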
A natural language interface plug-in for cooperative query answering in biological databases.
Jamil, Hasan M
2012-06-11
One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that, from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high-level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over an arbitrary biological database schema, with the aim of providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in the database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from the underlying database schema.
The plug-in introduced in this paper is generic and facilitates connecting user selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
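The core transformation the plug-in performs, rewriting a natural language question into a semantically equivalent structured query, can be caricatured with a single pattern rule. This is only a toy sketch: the table `genes`, its columns, and the one regular-expression rule are invented for illustration, whereas the actual system uses NL parsing, ontologies, and modifiable knowledge rules over arbitrary schemas.

```python
# Toy rule-based NL -> SQL rewriting. The schema (table `genes`, columns
# `name`, `organism`) and the single pattern are hypothetical.
import re

RULES = [
    # "show genes in <organism>" -> parameterized SELECT over the assumed table
    (re.compile(r"show genes in (\w+)", re.IGNORECASE),
     lambda m: ("SELECT name FROM genes WHERE organism = ?", (m.group(1).lower(),))),
]

def nl_to_sql(question):
    """Return (sql, params) for the first matching rule, or None if nothing matches."""
    for pattern, build in RULES:
        m = pattern.search(question)
        if m:
            return build(m)
    return None

print(nl_to_sql("Show genes in yeast"))
# ('SELECT name FROM genes WHERE organism = ?', ('yeast',))
```

Keeping the rules in data (here, the `RULES` list) rather than in code mirrors the paper's point that schema and rule changes should not require rebuilding the interface.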
Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy
2005-11-03
maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture, including the specification of what data should be recorded, extensive use of standard ontologies, and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web application that makes the contents of maxdLoad2 databases accessible via web browser, the command line, and web-service environments. It thus acts as both a dissemination and a data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, and the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customize the meta-data being captured or to change its definitions. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web server and presents multiple interfaces to the contents of maxd databases.
maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for use embedded in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for annotation of microarray experiments and a convenient dissemination environment. They are available for download and open sourced under the Artistic License.
Integrating Radar Image Data with Google Maps
NASA Technical Reports Server (NTRS)
Chapman, Bruce D.; Gibas, Sarah
2010-01-01
A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA's Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Building on NASA's internal AIRSAR site, the new Web site features more sophisticated visualization tools that give the general public access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that ran the software for the interactive map, and three for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back end of the AIRSAR Web site was updated to access the MySQL database. To do this, a few of the scripts needed to be modified, specifically three Perl scripts that query the database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented to replace one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.
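A migration like the one described is easiest when query code talks to the database through a portable interface, so only the connection setup and vendor-specific SQL change. A hedged sketch in Python's DB-API (the stdlib sqlite3 module stands in for MySQL here, and the table and column names are invented, not the AIRSAR schema):

```python
# Portable parameterized query via Python's DB-API. Swapping back ends
# (Oracle -> MySQL -> SQLite) mostly means changing connect() and any
# vendor-specific SQL; the query code itself stays the same.
# Table/column names below are hypothetical.
import sqlite3

def find_flights(conn, target_site):
    """Look up radar-image metadata for one site (hypothetical schema)."""
    cur = conn.execute(
        "SELECT flight_id, site FROM airsar_images WHERE site = ? ORDER BY flight_id",
        (target_site,),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE airsar_images (flight_id INTEGER, site TEXT)")
conn.executemany("INSERT INTO airsar_images VALUES (?, ?)",
                 [(1, "Amazon"), (2, "Death Valley"), (3, "Amazon")])
print(find_flights(conn, "Amazon"))  # [(1, 'Amazon'), (3, 'Amazon')]
```

Parameterized placeholders (the `?` markers) also avoid the syntax errors and injection risks that hand-built SQL strings invite during such a port.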
Connectivity and functional profiling of abnormal brain structures in pedophilia
Poeppl, Timm B.; Eickhoff, Simon B.; Fox, Peter T.; Laird, Angela R.; Rupprecht, Rainer; Langguth, Berthold; Bzdok, Danilo
2015-01-01
Despite its 0.5–1% lifetime prevalence in men and its general societal relevance, neuroimaging investigations in pedophilia are scarce. Preliminary findings indicate abnormal brain structure and function. However, no study has yet linked structural alterations in pedophiles to both connectional and functional properties of the aberrant hotspots. The relationship between morphological alterations and brain function in pedophilia as well as their contribution to its psychopathology thus remain unclear. First, we assessed bimodal connectivity of structurally altered candidate regions using meta-analytic connectivity modeling (MACM) and resting-state correlations employing openly accessible data. We compared the ensuing connectivity maps to the activation likelihood estimation (ALE) maps of a recent quantitative meta-analysis of brain activity during processing of sexual stimuli. Second, we functionally characterized the structurally altered regions employing meta-data of a large-scale neuroimaging database. Candidate regions were functionally connected to key areas for processing of sexual stimuli. Moreover, we found that the functional role of structurally altered brain regions in pedophilia relates to nonsexual emotional as well as neurocognitive and executive functions, previously reported to be impaired in pedophiles. Our results suggest that structural brain alterations affect neural networks for sexual processing by way of disrupted functional connectivity, which may entail abnormal sexual arousal patterns. The findings moreover indicate that structural alterations account for common affective and neurocognitive impairments in pedophilia. The present multi-modal integration of brain structure and function analyses links sexual and nonsexual psychopathology in pedophilia. PMID:25733379
Hyperconnectivity is a fundamental response to neurological disruption.
Hillary, Frank G; Roman, Cristina A; Venkatesan, Umesh; Rajtmajer, Sarah M; Bajo, Ricardo; Castellanos, Nazareth D
2015-01-01
In the cognitive and clinical neurosciences, the past decade has been marked by dramatic growth in a literature examining brain "connectivity" using noninvasive methods. We offer a critical review of the blood oxygen level dependent functional MRI (BOLD fMRI) literature examining neural connectivity changes in neurological disorders with focus on brain injury and dementia. The goal is to demonstrate that there are identifiable shifts in local and large-scale network connectivity that can be predicted by the degree of pathology. We anticipate that the most common network response to neurological insult is hyperconnectivity but that this response depends upon demand and resource availability. To examine this hypothesis, we initially reviewed the results from 1,426 studies examining functional brain connectivity in individuals diagnosed with multiple sclerosis, traumatic brain injury, mild cognitive impairment, and Alzheimer's disease. Based upon inclusionary criteria, 126 studies were included for detailed analysis. Results from the 126 studies examining local and whole brain connectivity demonstrated increased connectivity in traumatic brain injury and multiple sclerosis. This finding is juxtaposed with findings in mild cognitive impairment and Alzheimer's disease where there is a shift to diminished connectivity as degeneration progresses. This summary of the functional imaging literature using fMRI methods reveals that hyperconnectivity is a common response to neurological disruption and that it may be differentially observable across brain regions. We discuss the factors contributing to both hyper- and hypoconnectivity results after neurological disruption and the implications these findings have for network plasticity. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Connectivity and functional profiling of abnormal brain structures in pedophilia.
Poeppl, Timm B; Eickhoff, Simon B; Fox, Peter T; Laird, Angela R; Rupprecht, Rainer; Langguth, Berthold; Bzdok, Danilo
2015-06-01
Despite its 0.5-1% lifetime prevalence in men and its general societal relevance, neuroimaging investigations in pedophilia are scarce. Preliminary findings indicate abnormal brain structure and function. However, no study has yet linked structural alterations in pedophiles to both connectional and functional properties of the aberrant hotspots. The relationship between morphological alterations and brain function in pedophilia as well as their contribution to its psychopathology thus remain unclear. First, we assessed bimodal connectivity of structurally altered candidate regions using meta-analytic connectivity modeling (MACM) and resting-state correlations employing openly accessible data. We compared the ensuing connectivity maps to the activation likelihood estimation (ALE) maps of a recent quantitative meta-analysis of brain activity during processing of sexual stimuli. Second, we functionally characterized the structurally altered regions employing meta-data of a large-scale neuroimaging database. Candidate regions were functionally connected to key areas for processing of sexual stimuli. Moreover, we found that the functional role of structurally altered brain regions in pedophilia relates to nonsexual emotional as well as neurocognitive and executive functions, previously reported to be impaired in pedophiles. Our results suggest that structural brain alterations affect neural networks for sexual processing by way of disrupted functional connectivity, which may entail abnormal sexual arousal patterns. The findings moreover indicate that structural alterations account for common affective and neurocognitive impairments in pedophilia. The present multimodal integration of brain structure and function analyses links sexual and nonsexual psychopathology in pedophilia. © 2015 Wiley Periodicals, Inc.
Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J; Wurtele, Eve Syrkin
2013-04-01
Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. 
We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software.
Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J.
2013-01-01
Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data.
We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolism that generates the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software. PMID:23447050
The Strabo digital data system for Structural Geology and Tectonics
NASA Astrophysics Data System (ADS)
Tikoff, Basil; Newman, Julie; Walker, J. Doug; Williams, Randy; Michels, Zach; Andrews, Joseph; Bunse, Emily; Ash, Jason; Good, Jessica
2017-04-01
We are developing the Strabo data system for the structural geology and tectonics community. The data system will allow researchers to share primary data, apply new types of analytical procedures (e.g., statistical analysis), facilitate interaction with other geology communities, and allow new types of science to be done. The data system is based on a graph database rather than a relational database, to increase flexibility and allow geologically realistic relationships between observations and measurements. Development is occurring on: 1) a field-based application that runs on iOS and Android mobile devices and can function in either internet-connected or disconnected environments; and 2) a desktop system that runs only in connected settings and directly addresses the back-end database. The field application also makes extensive use of images, such as photos or sketches, which can be hierarchically arranged with encapsulated field measurements/observations across all scales. The system also accepts the Shapefile, GeoJSON, and KML formats made in ArcGIS and QGIS, and will allow export to these formats as well. Strabo uses two main concepts to organize the data: Spots and Tags. A Spot is any observation that characterizes a specific area. Below GPS resolution, a Spot can be tied to an image (outcrop photo, thin section, etc.). Spots are related in a purely spatial manner (one spot encloses another spot, which encloses another, etc.). Tags provide a linkage between conceptually related Spots. Together, this organization works seamlessly with the workflow of most geologists. We are expanding this effort to include microstructural data, as well as the disciplines of sedimentology and petrology.
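The Spot/Tag organization described above can be illustrated as a tiny in-memory graph: Spots nest purely spatially (one spot encloses another, down to sub-image scale), while Tags link conceptually related Spots regardless of location. The class, field, and tag names below are assumptions for illustration, not the actual Strabo schema.

```python
# Illustrative model of Strabo's two organizing concepts. Spots form a
# spatial containment hierarchy; Tags cut across it.

class Spot:
    def __init__(self, name):
        self.name = name
        self.children = []  # Spots spatially enclosed by this Spot

    def enclose(self, other):
        self.children.append(other)

    def all_enclosed(self):
        """Names of all Spots nested inside this one, at any depth."""
        found = []
        for child in self.children:
            found.append(child.name)
            found.extend(child.all_enclosed())
        return found

outcrop = Spot("outcrop")
photo = Spot("outcrop photo")
thin_section = Spot("thin section")
outcrop.enclose(photo)          # photo taken within the outcrop
photo.enclose(thin_section)     # thin section located on the photo

# Tags link conceptually related Spots across the spatial hierarchy.
tags = {"mylonite": {"outcrop", "thin section"}}

print(outcrop.all_enclosed())    # ['outcrop photo', 'thin section']
print(sorted(tags["mylonite"]))  # ['outcrop', 'thin section']
```

The containment walk shows why a graph model suits this data better than fixed relational tables: observations relate across arbitrarily many nesting levels.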
Foocharoen, Chingching; Thavornpitak, Yupa; Mahakkanukrauh, Ajanee; Suwannaroj, Siraphop; Nanagara, Ratanavadee
2013-02-01
Reports of hospitalized systemic connective tissue disorders (SCNTD) are mostly disease-specific reports from institutional databases. To clarify the admission rate, disease determination, hospital mortality rate, length of stay and hospital charges among hospitalized patients diagnosed with SCNTD. The data were extracted from the 2010 national database of hospitalized patients provided by the Thai Health Coding Center, Bureau of Policy and Strategy, Ministry of Public Health, Thailand. Patients over 18 years with International Classification of Diseases (ICD)-10 codes for a primary diagnosis related to SCNTD were included. There were 6861 admissions coded as disorders related to SCNTD during the fiscal year 2010. The admission rate was 141 per 100,000 admissions. Among these, systemic lupus erythematosus (SLE) was the most common, followed by systemic sclerosis (SSc) and dermatomyositis/polymyositis (DM-PM). The overall mean length of hospital stay was 6.8 days. Small-vessel vasculitis and Sjögren syndrome had the longest and the shortest hospital stays, respectively (14.5 vs. 5.3 days). Hospital charges were highest among systemic vasculitis and DM-PM patients. The admission rate for SCNTD in Thailand was 141 per 100,000 admissions, among which SLE was the most common diagnosis. Overall hospital mortality was 4.1%. Although systemic vasculitis was less prevalent, it had a higher mortality rate, longer length of stay and greater therapeutic cost. © 2013 The Authors International Journal of Rheumatic Diseases © 2013 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
Genes2Networks: connecting lists of gene symbols using mammalian protein interactions databases.
Berger, Seth I; Posner, Jeremy M; Ma'ayan, Avi
2007-10-04
In recent years, mammalian protein-protein interaction network databases have been developed. The interactions in these databases are either extracted manually from low-throughput experimental biomedical research literature, extracted automatically from literature using techniques such as natural language processing (NLP), generated experimentally using high-throughput methods such as yeast-2-hybrid screens, or predicted using an assortment of computational approaches. Genes or proteins identified as significantly changing in proteomic experiments, or identified as susceptibility disease genes in genomic studies, can be placed in the context of protein interaction networks in order to assign these genes and proteins to pathways and protein complexes. Genes2Networks is a software system that integrates the content of ten mammalian interaction network datasets. Filtering techniques to prune low-confidence interactions were implemented. Genes2Networks is delivered as a web-based service using AJAX. The system can be used to extract relevant subnetworks created from "seed" lists of human Entrez gene symbols. The output includes a dynamic, linkable, three-color web-based network map, with a statistical analysis report that identifies significant intermediate nodes used to connect the seed list. Genes2Networks is powerful web-based software that can help experimental biologists to interpret lists of genes and proteins such as those commonly produced through genomic and proteomic experiments, as well as lists of genes and proteins associated with disease processes. This system can be used to find relationships between genes and proteins from seed lists, and predict additional genes or proteins that may play key roles in common pathways or protein complexes.
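The subnetwork-extraction idea at the heart of such a system, finding intermediate nodes that connect members of a seed list within a bounded path length, can be sketched with a breadth-first search. The toy interaction network and gene symbols below are invented for illustration and are not drawn from the ten integrated datasets.

```python
# Minimal sketch: given an undirected interaction network and a seed set,
# collect non-seed nodes lying on paths of length <= max_hops between seeds.
from collections import deque

def intermediates(edges, seeds, max_hops=2):
    """Nodes on seed-to-seed paths of length <= max_hops, excluding the seeds."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    found = set()
    for s in seeds:
        # BFS out from each seed, remembering the path taken to each node.
        queue = deque([(s, [s])])
        while queue:
            node, path = queue.popleft()
            if len(path) - 1 >= max_hops:
                continue
            for nbr in graph.get(node, ()):
                if nbr in path:
                    continue
                if nbr in seeds and nbr != s:
                    found.update(p for p in path if p not in seeds)
                queue.append((nbr, path + [nbr]))
    return found

edges = [("TP53", "MDM2"), ("MDM2", "AKT1"), ("AKT1", "EGFR")]
print(intermediates(edges, {"TP53", "AKT1"}))  # {'MDM2'}
```

In the real system such connecting nodes are then scored statistically, since an intermediate that links many seeds is a candidate member of the same pathway or complex.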
Green, Traci C; Grimes Serrano, Jill M; Licari, Andrea; Budman, Simon H; Butler, Stephen F
2009-07-01
Evidence suggests gender differences in abuse of prescription opioids. This study aimed to describe characteristics of women who abuse prescription opioids in a treatment-seeking sample and to contrast gender differences among prescription opioid abusers. Data collected November 2005 to April 2008 derived from the Addiction Severity Index Multimedia Version Connect (ASI-MV Connect) database. Bivariate and multivariable logistic regression examined correlates of prescription opioid abuse stratified by gender. A total of 29,906 assessments from 220 treatment centers were included, of which 12.8% (N=3821) reported past-month prescription opioid abuse. Women were more likely than men to report use of any prescription opioid (29.8% females vs. 21.1% males, p<0.001) and abuse of any prescription opioid (15.4% females vs. 11.1% males, p<0.001) in the past month. Route of administration and source of prescription opioids displayed gender-specific tendencies. Women-specific correlates of recent prescription opioid abuse were problem drinking, age <54, inhalant use, residence outside of the West US Census region, and history of drug overdose. Men-specific correlates were age <34, currently living with their children, residence in the South and Midwest, hallucinogen use, and recent depression. Women prescription opioid abusers were less likely to report a pain problem, although they were more likely to report medical problems than women who abused other drugs. Gender-specific factors should be taken into account in efforts to screen and identify those at highest risk of prescription opioid abuse. Prevention and intervention efforts with a gender-specific approach are warranted.
Mulcahy, Nicholas J; Schubiger, Michèle N; Suddendorf, T
2013-02-01
Great apes appear to have limited knowledge of tool functionality when they are presented with tasks that involve a physical connection between a tool and a reward. For instance, they fail to understand that pulling a rope with a reward tied to its end is more beneficial than pulling a rope that only touches a reward. Apes show more success when both ropes have rewards tied to their ends but one rope is nonfunctional because it is clearly separated into aligned sections. It is unclear, however, whether this success is based on perceptual features unrelated to connectivity, such as perceiving the tool's separate sections as independent tools rather than as one discontinuous tool. Surprisingly, there appears to be no study that has tested any type of connectivity problem using natural tools made from branches, with which wild and captive apes often have extensive experience. It is possible that such ecologically valid tools may better help subjects understand connectivity that involves physical attachment. In this study, we tested orangutans with natural tools and a range of connectivity problems that involved the physical attachment of a reward on continuous and broken tools. We found that the orangutans understood tool connectivity involving physical attachment in tasks that apes in other studies had failed when tested with similar tasks using artificial, as opposed to natural, tools. We found no evidence that the orangutans' success in broken tool conditions was based on perceptual features unrelated to connectivity. Our results suggest that artificial tools may limit apes' knowledge of connectivity involving physical attachment, whereas ecologically valid tools may have the opposite effect. PsycINFO Database Record (c) 2013 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Buxner, Sanlyn; Shupla, C.; CoBabe-Ammann, E.; Dalton, H.; Shipp, S.
2013-10-01
The Planetary Science Education and Public Outreach (E/PO) Forum has helped to create two tools that are designed to help scientists and higher-education science faculty make stronger connections with their audiences: EarthSpace, an education clearinghouse for the undergraduate classroom; and NASA SMD Scientist Speaker’s Bureau, an online portal to help bring science - and scientists - to the public. Are you looking for Earth and space science higher education resources and materials? Come explore EarthSpace, a searchable database of undergraduate classroom materials for faculty teaching Earth and space sciences at both the introductory and upper division levels! In addition to classroom materials, EarthSpace provides news and information about educational research, best practices, and funding opportunities. All materials submitted to EarthSpace are peer reviewed, ensuring that the quality of the EarthSpace materials is high and also providing important feedback to authors. Your submission is a reviewed publication! Learn more, search for resources, join the listserv, sign up to review materials, and submit your own at http://www.lpi.usra.edu/earthspace. Join the new NASA SMD Scientist Speaker’s Bureau, an online portal to connect scientists interested in getting involved in E/PO projects (e.g., giving public talks, classroom visits, and virtual connections) with audiences! The Scientist Speaker’s Bureau helps educators and institutions connect with NASA scientists who are interested in giving presentations, based upon the topic, logistics, and audience. The information input into the database will be used to help match scientists (you!) with the requests being placed by educators. All Earth and space scientists funded by NASA - and/or engaged in active research using NASA’s science - are invited to become part of the Scientist Speaker’s Bureau. Submit your information into the short form at http://www.lpi.usra.edu/education/speaker.
Intelligence Fusion for Combined Operations
1994-06-03
Database; ISE - Intelligence Support Element; JASMIN - Joint Analysis System for Military Intelligence; JIC - Joint Intelligence Center; JDISS - Joint Defense...has made accessible otherwise inaccessible networks, such as connectivity to the German Joint Analysis System for Military Intelligence (JASMIN), and the...successfully any mission in the Battlespace is the essence of the C4I for the Warrior concept." It recognizes that the current C4I systems do not
ERIC Educational Resources Information Center
Salisbury, Lutishoor; Laincz, Jozef; Smith, Jeremy J.
2012-01-01
Many academic libraries and publishers have developed mobile-optimized versions of their web sites and catalogs. Almost all database vendors and major journal publishers have provided a way to connect to their resources via the Internet and the mobile web. In light of this pervasive use of the Internet, mobile devices and social networking, this…
Taking structure searches to the next dimension.
Schafferhans, Andrea; Rost, Burkhard
2014-07-08
Structure comparisons are now the first step when a new experimental high-resolution protein structure has been determined. In this issue of Structure, Wiederstein and colleagues describe their latest tool for comparing structures, which gives us the unprecedented power to discover crucial structural connections between whole complexes of proteins in the full structural database in real time. Copyright © 2014 Elsevier Ltd. All rights reserved.
Connecting Our Nation’s Crisis Information Management Systems
2008-12-01
Voice Alert is a communications solution that uses a combination of database and GIS mapping technologies to deliver outbound notifications. EOC...needing to be accessed through an extension is necessary. With many businesses, hotels, and other locations requiring an extension to reach...built around five major management activities of an incident: Command, Operations, Planning, Logistics, and Finance/Administration. The new
2013-08-08
theft in the CERT Insider Threat Database were associated with foreign social network connections (Verizon, "The 2013 Data Breach Investigations Report")...passwords, opening infected attachments or web sites, etc. ...were experienced by 38% of respondents. The 2013 Verizon Data Breach Report reveals that 29% of breaches studied leveraged social tactics...
Accelerating Information Retrieval from Profile Hidden Markov Model Databases.
Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem
2016-01-01
Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest in improving the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using the batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
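The cluster-based search-space reduction can be illustrated with a toy sketch: score the query against each cluster representative first, then run the expensive comparison only against members of the best-matching cluster(s). Everything here (the feature vectors, the toy similarity score, the data layout) is a simplified assumption, not the paper's actual pipeline:

```python
def score(query_vec, profile_vec):
    # toy similarity: negative Manhattan distance between feature vectors
    return -sum(abs(q - p) for q, p in zip(query_vec, profile_vec))

def cluster_search(query, clusters, top_clusters=1):
    # clusters: {rep_name: (rep_vec, [(member_name, member_vec), ...])}
    # Rank clusters by how well the query matches each representative,
    # then score only the members of the best top_clusters clusters.
    ranked = sorted(clusters.items(),
                    key=lambda kv: score(query, kv[1][0]), reverse=True)
    hits = []
    for rep, (rep_vec, members) in ranked[:top_clusters]:
        for name, vec in members:
            hits.append((score(query, vec), name))
    hits.sort(reverse=True)
    return hits
```

The paper's extended overlapping step corresponds to letting a profile belong to more than one cluster, trading a little speed for higher recall.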
US Gateway to SIMBAD Astronomical Database
NASA Technical Reports Server (NTRS)
Eichhorn, G.; Oliversen, R. (Technical Monitor)
1999-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. Currently there are over 3400 US users registered. We also provide user support by answering questions from users and handling requests for lost passwords when still necessary. We have implemented in cooperation with the CDS SIMBAD project access to the SIMBAD database for US users on an Internet address basis. This allows most US users to access SIMBAD without having to enter passwords. We have maintained the mirror copy of the SIMBAD database on a server at SAO. This has allowed much faster access for the US users. We also supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We shipped computer equipment to the meeting and provided support for the demonstration activities at the SIMBAD booth. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. The close cooperation between the CDS in Strasbourg and SAO, facilitated by this project, is an important part of the astronomy-wide digital library initiative called Urania. It has proven to be a model in how different data centers can collaborate and enhance the value of their products by linking with other data centers.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper, a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks, ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and the ways to provide relevant security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. The results from the database utilized for on-line monitoring of a threshold value of strain for an exemplary area of interest at the engineering structure are presented and discussed.
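A scheduled check of the kind described, archiving a reading and testing it against an operator-defined threshold, might look like this minimal sketch (the table layout and strain threshold are invented, and SQLite stands in for the paper's Oracle XE database):

```python
import sqlite3

THRESHOLD = 250e-6  # assumed strain threshold; not from the paper

def store_and_check(db, sensor_id, strain):
    # archive the reading, then apply the operator-defined threshold check
    db.execute("INSERT INTO readings (sensor, strain) VALUES (?, ?)",
               (sensor_id, strain))
    db.commit()
    return strain > THRESHOLD  # True would trigger an alert or action

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (sensor TEXT, strain REAL)")
alert = store_and_check(db, "AOI-1", 300e-6)
```

In the described system such checks run server-side against a remote centralized database, so the monitoring logic stays independent of the individual sensors.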
Information management systems for pharmacogenomics.
Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko
2002-09-01
The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make that data accessible to the laboratory scientist or to the clinician. In this review, these challenges and current information technology solutions associated with the management, storage and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
Geo-spatial Service and Application based on National E-government Network Platform and Cloud
NASA Astrophysics Data System (ADS)
Meng, X.; Deng, Y.; Li, H.; Yao, L.; Shi, J.
2014-04-01
With the acceleration of China's informatization process, the party and government have taken substantive strides in advancing the development and application of digital technology, promoting the evolution of e-government and its informatization. Meanwhile, as a service mode based on innovative resources, cloud computing can connect huge resource pools to provide a variety of IT services, and has become a relatively mature technical pattern through further studies and massive practical applications. Based on cloud computing technology and the national e-government network platform, the "National Natural Resources and Geospatial Database (NRGD)" project integrated and transformed natural resources and geospatial information dispersed across various sectors and regions, established a logically unified and physically dispersed fundamental database, and developed a national integrated information database system supporting main e-government applications. Cross-sector e-government applications and services are realized to provide long-term, stable and standardized natural resources and geospatial fundamental information products and services for national e-government and public users.
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
CHOmine: an integrated data warehouse for CHO systems biology and modeling.
Gerstl, Matthias P; Hanscho, Michael; Ruckerbauer, David E; Zanghellini, Jürgen; Borth, Nicole
2017-01-01
The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data are distributed over several databases at different institutions. Currently, research is frequently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and providing links to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. http://www.chogenome.org. © The Author(s) 2017. Published by Oxford University Press.
MIPS PlantsDB: a database framework for comparative plant genome research.
Nussbaumer, Thomas; Martis, Mihaela M; Roessner, Stephan K; Pfeifer, Matthias; Bader, Kai C; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel
2013-01-01
The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB-plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834-D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building up on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB.
MIPS PlantsDB: a database framework for comparative plant genome research
Nussbaumer, Thomas; Martis, Mihaela M.; Roessner, Stephan K.; Pfeifer, Matthias; Bader, Kai C.; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel
2013-01-01
The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB–plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834–D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building up on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB. PMID:23203886
GPU-based cloud service for Smith-Waterman algorithm using frequency distance filtration scheme.
Lee, Sheng-Ta; Lin, Chun-Yuan; Hung, Che Lun
2013-01-01
As the conventional means of analyzing the similarity between a query sequence and database sequences, the Smith-Waterman algorithm is feasible for a database search owing to its high sensitivity. However, this algorithm is still quite time consuming. CUDA programming can improve computation efficiency by using the computational power of massively parallel hardware such as graphics processing units (GPUs). This work presents a novel Smith-Waterman algorithm with a frequency-based filtration method on GPUs that filters out unnecessary comparisons rather than expending computational resources on them. A user-friendly interface is also designed for potential cloud server applications with GPUs. Additionally, two data sets, H1N1 protein sequences (query sequence set) and a human protein database (database set), are selected, followed by a comparison of CUDA-SW and CUDA-SW with the filtration method, referred to herein as CUDA-SWf. Experimental results indicate that reducing unnecessary sequence alignments can improve the computational time by up to 41%. Importantly, by using CUDA-SWf as a cloud service, this application can be accessed from any computing environment of a device with an Internet connection without time constraints.
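The frequency-based filtration idea, cheaply rejecting database sequences whose residue composition is already too different from the query before running the expensive alignment, can be sketched as follows. This is a hedged illustration of the general technique, not CUDA-SWf's GPU implementation (the threshold and function names are invented):

```python
from collections import Counter

def freq_vector(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    # residue counts over a fixed amino-acid alphabet
    c = Counter(seq)
    return [c.get(a, 0) for a in alphabet]

def freq_distance(v1, v2):
    # L1 distance between residue-frequency vectors: a cheap bound on
    # how different two sequences can be, computed without alignment
    return sum(abs(a - b) for a, b in zip(v1, v2))

def filtered_search(query, database, threshold, align):
    # align: the expensive scorer (e.g. a Smith-Waterman routine),
    # invoked only for sequences that pass the frequency filter
    qv = freq_vector(query)
    results = []
    for name, seq in database:
        if freq_distance(qv, freq_vector(seq)) <= threshold:
            results.append((name, align(query, seq)))
    return results
```

The filter is O(sequence length) per candidate, versus the quadratic cost of a full dynamic-programming alignment, which is where the reported time savings come from.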
The Tropical Biominer Project: mining old sources for new drugs.
Artiguenave, François; Lins, André; Maciel, Wesley Dias; Junior, Antonio Celso Caldeira; Nacif-Coelho, Carla; de Souza Linhares, Maria Margarida Ribeiro; de Oliveira, Guilherme Correa; Barbosa, Luis Humberto Rezende; Lopes, Júlio César Dias; Junior, Claudionor Nunes Coelho
2005-01-01
The Tropical Biominer Project is a recent initiative from the Federal University of Minas Gerais (UFMG) and the Oswaldo Cruz Foundation, with the participation of the Biominas Foundation (Belo Horizonte, Minas Gerais, Brazil) and the start-up Homologix. The main objective of the project is to build a new resource for chemogenomics research on chemical compounds, with a strong emphasis on natural molecules. Adopted technologies include searching for information in structured, semi-structured, and non-structured documents (the last two from the web) and data mining tools in order to gather information from different sources. The database is the support for developing applications to find new potential treatments for parasitic infections by using virtual screening tools. We present here the midpoint of the project: the conception and implementation of the Tropical Biominer Database. This is a federated database designed to store data from different resources. Connected to the database, a web crawler is able to gather information from distinct, patented web sites and store it after automatic classification using data mining tools. Finally, we demonstrate the interest of the approach by formulating new hypotheses on specific targets of a natural compound, violacein, using inferences from a virtual screening procedure.
A radiology department intranet: development and applications.
Willing, S J; Berland, L L
1999-01-01
An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
JEnsembl: a version-aware Java API to Ensembl data systems.
Paterson, Trevor; Law, Andy
2012-11-01
The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-oriented development of new bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive), thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
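The version-aware mapping idea, a single configuration registry that resolves a schema version to the correct object mapping so one client can talk to current and archived instances at once, can be sketched like this (the version numbers, table and column names are purely hypothetical, and JEnsembl itself is Java, not Python):

```python
# hypothetical registry mapping a database schema version to the
# object-mapping configuration used to read it
MAPPINGS = {
    "67": {"gene_table": "gene", "name_col": "display_label"},
    "79": {"gene_table": "gene", "name_col": "display_xref"},
}

def mapping_for(schema_version):
    # fall back to the closest earlier version with a known mapping,
    # so archived instances between releases remain readable
    known = sorted((int(v) for v in MAPPINGS), reverse=True)
    for v in known:
        if v <= int(schema_version):
            return MAPPINGS[str(v)]
    raise KeyError(f"no mapping at or below schema {schema_version}")
```

Because the mapping is data rather than code, supporting a new release means adding a configuration entry instead of releasing a new client, which is what enables the 'through time' analyses mentioned above.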
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential- and commercial-size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
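A commonly cited form of this Sandia inverter model expresses AC power as a quadratic in (Pdc - B), with the coefficients A, B and C adjusted linearly for deviations of the DC voltage from its reference value. The sketch below assumes that published form; consult the report itself for the authoritative equations and parameter definitions:

```python
def sandia_inverter_ac_power(p_dc, v_dc, params):
    # params: (Paco, Pdco, Vdco, Pso, Co, C1, C2, C3)
    #   Paco/Pdco: rated AC/DC power; Vdco: reference DC voltage;
    #   Pso: DC power needed to start inversion; Co..C3: fit coefficients
    Paco, Pdco, Vdco, Pso, Co, C1, C2, C3 = params
    dv = v_dc - Vdco
    A = Pdco * (1 + C1 * dv)
    B = Pso * (1 + C2 * dv)
    C = Co * (1 + C3 * dv)
    return ((Paco / (A - B)) - C * (A - B)) * (p_dc - B) + C * (p_dc - B) ** 2
```

With the voltage-dependence and curvature coefficients zeroed, the model reduces to a straight line through (Pso, 0) that reaches Paco at p_dc = Pdco.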
National Map Data Base On Landslide Prerequisites In Clay and Silt Areas - Development of Prototype
NASA Astrophysics Data System (ADS)
Viberg, Leif
The Swedish Geotechnical Institute (SGI) has, in co-operation with the Swedish Geological Survey, Lantmateriet (land surveying) and the Swedish Rescue Service, developed a theme database on landslide prerequisites in clay and silt areas. The work is carried out on commission from the Swedish government. A report with suggestions for production of the database has been delivered to the government. The database is a prototype, which has been tested in an area in northern Sweden. The recommended presentation map scale is about 1:50 000. Distribution of the database via the Internet is discussed. The aim of the database is to use it as a modern planning tool in combination with other databases, e.g. databases on flooding prognoses. The main use is expected to be in early planning stages, e.g. for new building and infrastructure development and for risk analyses. The database can also be used in more acute cases, e.g. for risk analyses and rescue operations in connection with flooding over large areas. Users are expected to be municipal and county planners and rescue services, infrastructure planners, consultants and insurance companies. The database is constructed by combining two existing databases: elevation data and soil map data. The investigation area is divided into three zones with different stability criteria: 1. Clay and silt in sloping ground or adjoining water. 2. Clay and silt in flat ground. 3. Rock and soils other than clay and silt. The geometrical and soil criteria for the zones are specified in an algorithm that sorts out the different zones. The algorithm thereby uses data from the elevation and soil databases. The investigation area is divided into cells (raster format) with 5 x 5 m side length. Several algorithm variants had to be developed before a reasonable calculation time was reached. The theme may be presented on screen or as a map plot. A prototype map has been produced for the test area. A description accompanies the map.
It is suggested that the database be produced for landslide-prone areas in Sweden; approximately 200-300 map sheets (25 x 25 km) would be required.
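The zoning algorithm described above amounts to a per-cell classification rule over the combined elevation (slope) and soil rasters. A minimal sketch of that rule, with an assumed slope cutoff standing in for the report's actual geometrical criteria:

```python
def classify_cell(soil, slope_deg, near_water, slope_threshold=5.0):
    # Zone 1: clay/silt on sloping ground or adjoining water
    # Zone 2: clay/silt on flat ground
    # Zone 3: rock and soils other than clay or silt
    # slope_threshold is an assumed cutoff, not taken from the report
    if soil not in ("clay", "silt"):
        return 3
    if slope_deg >= slope_threshold or near_water:
        return 1
    return 2

def classify_grid(soil_grid, slope_grid, water_grid):
    # apply the zoning rule to every 5 x 5 m raster cell
    return [[classify_cell(s, sl, w)
             for s, sl, w in zip(srow, slrow, wrow)]
            for srow, slrow, wrow in zip(soil_grid, slope_grid, water_grid)]
```

Running one cheap rule per raster cell is what makes the calculation time sensitive to implementation details, as the abstract notes.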
A novel tele-medical environment appropriate for use in tele-advisory and tele-surgery cases.
Chatzipapadopoulos, F; Georgantas, N; Kotaras, A; Mamais, G; Tombros, S; Tselikis, G
1996-08-01
Tele-medical systems have recently been introduced in the field of networking as promising applications that can significantly improve the delivery of medical treatment by providing services such as tele-advising, tele-surgery and remote monitoring in places where the presence of doctors or other medical specialists is difficult or time consuming. Some existing networking models can be used to establish a connection between the communicating sides. The security offered by the network is also a significant factor. The present paper describes a software environment implementing a particular aspect of a tele-medical system. The developed system includes features such as direct communication between doctors and medical assistants, medical information acquisition and storage, and high-bandwidth information transfer in real time. The TCP/IP point-to-point protocol has been used for the implementation of the non-bandwidth-critical connections. The application introduces novel features with the use of an ATM connection for supporting the time-critical service of video transfer to and from the medical database.
Lemos, Cleidiel Aparecido Araujo; Verri, Fellippo Ramos; Bonfante, Estevam Augusto; Santiago Júnior, Joel Ferreira; Pellizzer, Eduardo Piza
2018-03-01
This systematic review and meta-analysis aimed to answer the PICO question: "Do patients who received external-connection implants show marginal bone loss, implant survival and complication rates similar to those of internal-connection implants?". Meta-analyses of marginal bone loss, implant survival rates and complication rates were performed for the included studies. Eligibility criteria included (1) randomized controlled trials (RCTs) and/or prospective studies, (2) studies with at least 10 patients, (3) direct comparison between connection types and (4) publication in English. The Cochrane risk of bias tool was used to assess the quality and risk of bias in RCTs, while the Newcastle-Ottawa scale was used for non-RCTs. A comprehensive search strategy was designed to identify published studies in the PubMed/MEDLINE, Scopus and Cochrane Library databases up to October 2017. The search identified 661 references. Eleven studies (seven RCTs and four prospective studies) were included, with a total of 530 patients (mean age, 53.93 years) who had received a total of 1089 implants (461 external-connection and 628 internal-connection implants). The internal-connection implants exhibited lower marginal bone loss than external-connection implants (P < 0.00001; mean difference (MD): 0.44 mm; 95% confidence interval (CI): 0.26-0.63 mm). No significant difference was observed in implant survival (P = 0.65; risk ratio (RR): 0.83; 95% CI: 0.38-1.84) or complication rates (P = 0.43; RR: 1.15; 95% CI: 0.81-1.65). Internal connections showed lower marginal bone loss than external connections; however, the implant-abutment connection had no influence on implant survival or complication rates. Based on the GRADE approach, the evidence was classified as very low to moderate owing to the study designs, inconsistency and publication bias. Thus, future research is highly encouraged.
Internal connection implants should be preferred over external connection implants, especially when different risk factors that may contribute to increased marginal bone loss are present. Copyright © 2017 Elsevier Ltd. All rights reserved.
WEB-GIS Decision Support System for CO2 storage
NASA Astrophysics Data System (ADS)
Gaitanaru, Dragos; Leonard, Anghel; Radu Gogu, Constantin; Le Guen, Yvi; Scradeanu, Daniel; Pagnejer, Mihaela
2013-04-01
The environmental decision support system (DSS) paradigm evolves and changes as more knowledge and technology become available to the environmental community. Geographic Information Systems (GIS) can be used to extract, assess and disseminate some types of information that are otherwise difficult to access by traditional methods. At the same time, with the help of the Internet and accompanying tools, creating and publishing online interactive maps has become easier and richer in options. The decision support system (MDSS) developed for the MUSTANG (A MUltiple Space and Time scale Approach for the quaNtification of deep saline formations for CO2 storaGe) project is a user-friendly web-based application that uses GIS capabilities. The MDSS can be exploited by experts in CO2 injection and storage in deep saline aquifers; its main objective is to help experts make decisions based on large amounts of structured data and information. To achieve this objective, the MDSS has a geospatial, object-oriented database structure for a wide variety of data and information. The entire application is based on several principles leading to a series of capabilities and specific characteristics: (i) Open source - the entire platform (MDSS) is based on open-source technologies: (1) the database engine, (2) the application server, (3) the geospatial server, (4) the user interfaces, (5) add-ons, etc. (ii) Multiple database connections - the MDSS can connect to different databases located on different server machines. (iii) Desktop user experience - the MDSS architecture and design follow the structure of desktop software. (iv) Communication - the server side and the desktop are bound together by a series of functions that allow the user to upload, use, modify and download data within the application.
The architecture of the system involves one database and a modular application composed of: (1) a visualization module, (2) an analysis module, (3) a guidelines module, and (4) a risk assessment module. The database component is built using the open-source PostgreSQL and PostGIS technologies. The visualization module allows the user to view data on CO2 injection sites in different ways: (1) geospatial visualization, (2) table view, and (3) 3D visualization. The analysis module allows the user to perform analyses such as injectivity, containment and capacity analysis. The risk assessment module focuses on the site risk matrix approach. The guidelines module contains guidelines on methodologies for CO2 injection and storage into deep saline aquifers.
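As an illustration of the kind of query the visualization module's geospatial view might issue against the PostgreSQL/PostGIS backend, here is a hedged sketch. The table and column names (injection_sites, geom, co2_mass_kt) are invented assumptions, not the actual MDSS schema; only the PostGIS functions themselves are standard.

```python
# Hedged sketch of a geospatial query for a web map view over PostGIS.
# Schema names are invented for illustration; the functions are standard PostGIS.
GEOSPATIAL_VIEW_QUERY = """
SELECT site_name,
       ST_AsGeoJSON(geom) AS geometry,  -- serialize geometry for the web client
       co2_mass_kt
FROM   injection_sites
WHERE  ST_DWithin(geom::geography,
                  ST_MakePoint(%(lon)s, %(lat)s)::geography,
                  %(radius_m)s);        -- sites within radius_m metres of a point
"""
```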
ERIC Educational Resources Information Center
Luo, Fang; Zhang, Yunyun
2017-01-01
This study examined the effects of family SES on children's mathematics achievement for urban, rural, and migrant families in China. The data comprised 6050 children (44% female, 56% male) in grades 4 and 5 from a national database in China. The results showed that parental education level and family income were directly related to children's…
Richard C. Knopf; Kathleen L. Andereck; Karen Tucker; Bill Bottomly; Randy J. Virden
2004-01-01
Purpose of Study: This paper demonstrates how a Benefits-Based Management paradigm has been useful in guiding management plan development for an internationally significant natural resource, the Gunnison Gorge National Conservation Area (GGNCA) in Colorado. Through a program of survey research, a database on benefits desired by various stakeholder groups was created....
On-line classification of pollutants in water using wireless portable electronic noses.
Herrero, José Luis; Lozano, Jesús; Santos, José Pedro; Suárez, José Ignacio
2016-06-01
A portable electronic nose with database connection for on-line classification of pollutants in water is presented in this paper. It is a hand-held, lightweight, powered instrument with wireless communications, capable of standalone operation. A network of similar devices can be configured for distributed measurements. It uses four resistive microsensors and headspace sampling to extract the volatile compounds from glass vials. The measurement and control program was developed in LabVIEW using the database connection toolkit to send the sensor data to a server for training and classification with artificial neural networks (ANNs). Using a server instead of the e-nose's microprocessor increases the memory capacity and computing power available to the classifier and allows external users to perform data classification. To address this challenge, this paper also proposes a web-based framework (based on RESTful web services, Asynchronous JavaScript and XML, and JavaScript Object Notation) that allows remote users to train ANNs and request classification values regardless of the user's location and the type of device used. Results show that the proposed prototype can discriminate the samples measured (blank water, acetone, toluene, ammonia, formaldehyde, hydrogen peroxide, ethanol, benzene, dichloromethane, acetic acid, xylene and dimethylacetamide) with a 94% classification success rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
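A hypothetical sketch of the JSON payload a remote e-nose might send to the classification service over the RESTful/JSON framework described above. The field names and structure are assumptions for illustration, not the paper's actual API.

```python
# Illustrative JSON reading an e-nose client might POST for classification.
# device_id, sensors and sample are invented field names, not the real API.
import json

reading = {
    "device_id": "enose-01",
    "sensors": [0.82, 0.41, 0.95, 0.13],  # four resistive microsensor values
    "sample": "headspace-vial-7",
}
payload = json.dumps(reading)   # serialized body for the HTTP request
decoded = json.loads(payload)   # what the server-side framework would parse
```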
Wang, Lubin; Zou, Feng; Shao, Yongcong; Ye, Enmao; Jin, Xiao; Tan, Shuwen; Hu, Dewen; Yang, Zheng
2014-12-01
The default mode network (DMN) plays an important role in the physiopathology of schizophrenia. Previous studies have suggested that the cerebellum participates in higher-order cognitive networks such as the DMN. However, the specific contribution of the cerebellum to the DMN abnormalities in schizophrenia has yet to be established. In this study, we investigated cerebellar functional connectivity differences between 60 patients with schizophrenia and 60 healthy controls from a public resting-state fMRI database. Seed-based correlation analysis was performed by using seeds from the left Crus I, right Crus I and Lobule IX, which have previously been identified as being involved in the DMN. Our results revealed that, compared with the healthy controls, the patients showed significantly reduced cerebellar functional connectivity with the thalamus and several frontal regions including the middle frontal gyrus, anterior cingulate cortex, and supplementary motor area. Moreover, the positive correlations between the strength of frontocerebellar and thalamocerebellar functional connectivity observed in the healthy subjects were diminished in the patients. Our findings implicate disruptive changes of the fronto-thalamo-cerebellar circuit in schizophrenia, which may provide further evidence for the "cognitive dysmetria" concept of schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
Breuckmann, Frank; Gambichler, Thilo; Altmeyer, Peter; Kreuter, Alexander
2004-01-01
Background: Broad-band UVA, long-wave UVA1 and PUVA treatment have been described as alternative/adjunct therapeutic options in a number of inflammatory and malignant skin diseases. Nevertheless, controlled studies investigating the efficacy of UVA irradiation in connective tissue diseases and related disorders are rare. Methods: Searching the PubMed database, the current article systematically reviews established and innovative therapeutic approaches of broad-band UVA irradiation, UVA1 phototherapy and PUVA photochemotherapy in a variety of different connective tissue disorders. Results: Potential pathways include immunomodulation of inflammation, induction of collagenases and initiation of apoptosis. Even though it carries the risk of carcinogenesis, photoaging or UV-induced exacerbation, UVA phototherapy seems to exhibit a tolerable risk/benefit ratio, at least in systemic sclerosis, localized scleroderma, extragenital lichen sclerosus et atrophicus, sclerodermoid graft-versus-host disease, lupus erythematosus and a number of sclerotic rarities. Conclusions: Based on the data retrieved from the literature, therapeutic UVA exposure seems to be effective in connective tissue diseases and related disorders. However, more controlled investigations are needed in order to establish a clear-cut catalogue of indications. PMID:15380024
Practical Issues of Wireless Mobile Devices Usage with Downlink Optimization
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona
Mobile device makers produce tens of new, complex mobile devices per year, aiming to give users a device that lets them do anything, anywhere, at any time. These devices can run full-scale applications with nearly the same comfort as their desktop equivalents, with only a few limitations. One such limitation is insufficient download speed over wireless connectivity in the case of large multimedia files. The main focus of this paper is a description of possible solutions to this problem, together with tests of several new mobile devices, server interface tests and descriptions of common software. New devices offer a full range of wireless connectivity options, which can be used for more than communication with the outside world; several such possibilities are described. Mobile users will also have an online connection to the Internet whenever the device is powered on. Internet use still consists mainly of web pages, but the use of web services continues to accelerate. The paper also deals with the maximum number of users that can connect simultaneously to the current server type. Finally, a new kind of database access, LINQ technology, is compared to ADO.NET in terms of response time.
Network-based drug discovery by integrating systems biology and computational technologies
Leung, Elaine L.; Cao, Zhi-Wei; Jiang, Zhi-Hong; Zhou, Hua
2013-01-01
Network-based intervention has been a trend in curing systemic diseases, but it relies on regimen optimization and valid multi-target actions of the drugs. The complex multi-component nature of medicinal herbs may serve as a valuable resource for network-based multi-target drug discovery, owing to its potential treatment effects through synergy. Recently, multiple systems biology platforms have proved powerful for uncovering molecular mechanisms and connections between drugs and the dynamic networks they target. However, methods for optimizing drug combinations remain insufficient, owing to the lack of tight integration across multiple '-omics' databases. Newly developed algorithm- or network-based computational models can tightly integrate '-omics' databases and optimize combinational regimens in drug development, which encourages the use of medicinal herbs to develop a new wave of network-based multi-target drugs. However, challenges remain in further integrating databases of medicinal herbs with multiple systems biology platforms for multi-target drug optimization, due to the uncertain reliability of individual data sets and the limited width, depth and degree of standardization of herbal medicine. Standardizing the methodology and terminology of multiple systems biology platforms and herbal databases would facilitate this integration, as would enhancing publicly accessible databases and increasing the number of studies applying systems biology platforms to herbal medicine. Further integration across various '-omics' platforms and computational tools would accelerate the development of network-based drug discovery and network medicine. PMID:22877768
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.
2015-02-10
In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
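The composite distribution described above can be sketched as a weighted sum of a Weibull and a log-normal density. All parameter values in the sketch below are illustrative placeholders, not the fitted values from the paper.

```python
# Sketch of a composite flux distribution: weighted Weibull + log-normal PDFs.
# The weight and shape/scale parameters are invented for illustration only.
import math

def weibull_pdf(x, k, lam):
    """Weibull density with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    """Log-normal density with log-mean mu and log-std sigma."""
    return (math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def composite_pdf(x, w=0.7, k=0.8, lam=1e21, mu=math.log(3e21), sigma=1.0):
    """Weibull term dominates small fluxes; log-normal term dominates large ones."""
    return w * weibull_pdf(x, k, lam) + (1 - w) * lognormal_pdf(x, mu, sigma)
```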
Hadadi, Noushin; Hafner, Jasmin; Shajkofci, Adrian; Zisaki, Aikaterini; Hatzimanikatis, Vassily
2016-10-21
Because the complexity of metabolism cannot be intuitively understood or analyzed, computational methods are indispensable for studying biochemistry and deepening our understanding of cellular metabolism to promote new discoveries. We used the computational framework BNICE.ch along with cheminformatic tools to assemble the whole theoretical reactome from the known metabolome through expansion of the known biochemistry presented in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We constructed the ATLAS of Biochemistry, a database of all theoretical biochemical reactions based on known biochemical principles and compounds. ATLAS includes more than 130,000 hypothetical enzymatic reactions that connect two or more KEGG metabolites through novel enzymatic reactions that have never been reported to occur in living organisms. Moreover, ATLAS reactions integrate 42% of KEGG metabolites that are not currently present in any KEGG reaction into one or more novel enzymatic reactions. The generated repository of information is organized in a Web-based database (http://lcsb-databases.epfl.ch/atlas/) that allows the user to search for all possible routes from any substrate compound to any product. The resulting pathways involve known and novel enzymatic steps that may indicate unidentified enzymatic activities and provide potential targets for protein engineering. Our approach of introducing novel biochemistry into pathway design and associated databases will be important for synthetic biology and metabolic engineering.
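The route search the ATLAS interface offers (all possible routes from a substrate to a product) can be sketched as a graph search over a reaction network. The toy reactions below are invented for illustration; they are not KEGG or ATLAS entries, and the real system searches a far larger network.

```python
# Sketch of a substrate-to-product route search as breadth-first search.
# Reaction ids and compounds are invented; real ATLAS data is much larger.
from collections import deque

reactions = {                 # reaction id -> (substrates, products)
    "R1": ({"A"}, {"B"}),
    "R2": ({"B"}, {"C"}),
    "R3": ({"A"}, {"C"}),
}

def shortest_path(start, goal):
    """Return one shortest sequence of reaction ids from start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        compound, path = queue.popleft()
        if compound == goal:
            return path
        for rid, (subs, prods) in reactions.items():
            if compound in subs:
                for product in prods - seen:
                    seen.add(product)
                    queue.append((product, path + [rid]))
    return None

# shortest_path("A", "C") == ["R3"]  (direct route beats A -> B -> C)
```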
Simons, Johannes WIM
2009-01-01
Background: We have previously shown that deviations from the average transcription profile of a group of functionally related genes are not only heritable, but also demonstrate specific patterns associated with age, gender and differentiation, thereby implicating genome-wide nuclear programming as the cause. To determine whether these results could be reproduced, a different micro-array database (obtained from two types of muscle tissue, derived from 81 human donors aged between 16 and 89 years) was studied. Results: This new database also revealed the existence of age-, gender- and tissue-specific features in a small group of functionally related genes. In order to further analyze this phenomenon, a method was developed for quantifying the contribution of different factors to the variability in gene expression, and for generating a database limited to residual values reflecting constitutional differences between individuals. These constitutional differences, presumably epigenetic in origin, contribute about 50% of the observed residual variance, which is connected with a network of interrelated changes in gene expression, with some genes displaying a decrease or increase in residual variation with age. Conclusion: Epigenetic variation in gene expression without a clear concomitant relation to gene function appears to be a widespread phenomenon. This variation is connected with interactions between genes, is gender and tissue specific, and is related to cellular aging. This finding, together with the method developed for analysis, might contribute to the elucidation of the role of nuclear programming in differentiation, aging and carcinogenesis. Reviewers: This article was reviewed by Thiago M. Venancio (nominated by Aravind Iyer), Hua Li (nominated by Arcady Mushegian), Arcady Mushegian, and J. P. de Magelhaes (nominated by G. Church). PMID:19796384
Forensic Tools to Track and Connect Physical Samples to Related Data
NASA Astrophysics Data System (ADS)
Molineux, A.; Thompson, A. C.; Baumgardner, R. W.
2016-12-01
Identifiers, such as local sample numbers, are critical to successfully connecting physical samples and related data; however, identifiers must be globally unique. The International Geo Sample Number (IGSN), generated when registering a sample in the System for Earth Sample Registration (SESAR), provides a globally unique alphanumeric code associated with basic metadata, related samples and the sample's current physical storage location. When registered samples are published, users can link the figured samples to the basic metadata held at SESAR. The use cases we discuss include plant specimens from a Permian core, Holocene corals and derived powders, and thin sections with SEM stubs. Much of this material is now published. The plant taxonomic study from the core is a digital PDF, and samples can be linked directly from the captions to the SESAR record. The study of stable isotopes from the corals is not yet digitally available, but individual samples are accessible. Full data and media records for both studies are located in our database, where higher-quality images, field notes and section diagrams may exist. Georeferences permit mapping in current and deep-time plate configurations. Several aspects emerged during this study. First, ensure adequate and consistent details are registered with SESAR. Second, educate and encourage researchers to obtain IGSNs. Third, publish the archive numbers, assigned prior to publication, alongside the IGSN; this provides access to further data through an Integrated Publishing Toolkit (IPT), aggregators, or online repository databases, thus placing the initial sample in a much richer context for future studies. Fourth, encourage software developers to customize community software to extract data from a database and use it to register samples in bulk; this would improve workflow and provide a path for registering large legacy collections.
Barneh, Farnaz; Jafari, Mohieddin; Mirzaie, Mehdi
2016-11-01
Network pharmacology elucidates the relationship between drugs and targets. As the number of identified targets for each drug increases, the corresponding drug-target network (DTN) evolves from a mere reflection of pharmaceutical industry trends into a portrait of polypharmacology. The aim of this study was to evaluate the potential of the DrugBank database for advancing systems pharmacology. We constructed and analyzed a DTN from the drug-target associations in the DrugBank 4.0 database. Our results showed that, in the bipartite DTN, an increased ratio of identified targets per drug augmented the density and connectivity of drugs and targets and decreased the modular structure. To clarify the details of the network structure, the DTN was projected into two networks: a drug similarity network (DSN) and a target similarity network (TSN). In the DSN, various classes of Food and Drug Administration-approved drugs with distinct therapeutic categories were linked together based on shared targets. The projected TSN also showed complexity because of the promiscuity of the drugs. By including investigational drugs currently being tested in clinical trials, the networks manifested more connectivity and pictured the pharmacological space of the coming years. Diverse biological processes and protein-protein interactions were manipulated by the new drugs, which can extend possible target combinations. We conclude that network-based organization of DrugBank 4.0 data not only reveals the potential for repurposing existing drugs, but also allows generating novel predictions about drug off-targets, drug-drug interactions and their side effects. Our results also encourage further effort toward high-throughput identification of targets to build networks that can be integrated into disease networks. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
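The projection of the bipartite drug-target network into a drug similarity network can be sketched as linking any two drugs that share at least one target, weighting each link by the number of shared targets. The tiny drug-target map below is invented for illustration; it is not DrugBank data.

```python
# Sketch of projecting a bipartite drug-target network into a DSN:
# two drugs are connected iff they share a target. Toy data, not DrugBank.
from itertools import combinations

drug_targets = {
    "drugA": {"T1", "T2"},
    "drugB": {"T2", "T3"},
    "drugC": {"T4"},
}

def project_dsn(dt):
    """Return DSN edges keyed by drug pair, weighted by shared-target count."""
    edges = {}
    for d1, d2 in combinations(sorted(dt), 2):
        shared = dt[d1] & dt[d2]
        if shared:
            edges[(d1, d2)] = len(shared)
    return edges

# project_dsn(drug_targets) == {("drugA", "drugB"): 1}
```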
Evaluating the Potential of Commercial GIS for Accelerator Configuration Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Larrieu; Y.R. Roblin; K. White
2005-10-10
The Geographic Information System (GIS) is a tool used by industries needing to track information about spatially distributed assets. A water utility, for example, must know not only the precise location of each pipe and pump, but also the respective pressure rating and flow rate of each. In many ways, an accelerator such as CEBAF (Continuous Electron Beam Accelerator Facility) can be viewed as an "electron utility". Whereas the water utility uses pipes and pumps, the "electron utility" uses magnets and RF cavities. At Jefferson Lab we are exploring the possibility of implementing ESRI's ArcGIS as the framework for building an all-encompassing accelerator configuration database that integrates location, configuration, maintenance, and connectivity details of all hardware and software. The possibilities of doing so are intriguing. From the GIS, software such as the model server could always extract the most up-to-date layout information maintained by Survey & Alignment for lattice modeling. The Mechanical Engineering department could use ArcGIS tools to generate CAD drawings of machine segments from the same database. Ultimately, the greatest benefit of the GIS implementation could be to liberate operators and engineers from the limitations of the current system-by-system view of machine configuration and allow a more integrated regional approach. The commercial GIS package provides a rich set of tools for database connectivity, versioning, distributed editing, importing and exporting, and graphical analysis and querying, and therefore obviates the need for much custom development. However, formidable challenges to implementation exist, and these challenges are not only technical and manpower issues, but also organizational ones. The GIS approach would cut across organizational boundaries and require departments, which heretofore have had free rein to manage their own data, to cede some control and agree to a centralized framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burcat, A.; Ruscic, B.; Chemistry
2005-07-29
The thermochemical database of species involved in combustion processes is and has been available for free use for over 25 years. It was first published in print in 1984, approximately 8 years after it was first assembled, and contained 215 species at the time. This is the 7th printed edition and most likely will be the last one in print in the present format, which involves substantial manual labor. The database currently contains more than 1300 species, specifically organic molecules and radicals, but also inorganic species connected to combustion and air pollution. Since 1991 this database has been freely available on the internet, at the Technion-IIT ftp server, and it is continuously expanded and corrected. The database is mirrored daily at an official mirror site, and at random at about a dozen unofficial mirror and 'finger' sites. The present edition contains numerous corrections and many recalculations of data of provisory type by the G3//B3LYP method, a high-accuracy composite ab initio calculation. About 300 species are newly calculated and are not yet published elsewhere. In anticipation of the full coupling, which is under development, the database has started incorporating the available (as yet unpublished) values from Active Thermochemical Tables. The electronic version now also contains an XML file of the main database to allow transfer to other formats and ease finding specific information of interest. The database is used by scientists, educators, engineers and students at all levels, dealing primarily with combustion and air pollution, jet engines, rocket propulsion and fireworks, but also by researchers involved in upper atmosphere kinetics, astrophysics, abrasion metallurgy, etc. This introductory article contains explanations of the database and the means to use it, its sources, ways of calculation, and assessments of the accuracy of data.
Ushijima, Masaru; Mashima, Tetsuo; Tomida, Akihiro; Dan, Shingo; Saito, Sakae; Furuno, Aki; Tsukahara, Satomi; Seimiya, Hiroyuki; Yamori, Takao; Matsuura, Masaaki
2013-03-01
Genome-wide transcriptional expression analysis is a powerful strategy for characterizing the biological activity of anticancer compounds. It is often instructive to identify gene sets involved in the activity of a given drug compound for comparison with different compounds. Currently, however, there is no comprehensive gene expression database and related application system that is (i) specialized in anticancer agents, (ii) easy to use, and (iii) open to the public. To develop a public gene expression database of antitumor agents, we first examined gene expression profiles in human cancer cells after exposure to 35 compounds, including 25 clinically used anticancer agents. Gene signatures were extracted that were classified as upregulated or downregulated after exposure to the drug. Hierarchical clustering showed that drugs with similar mechanisms of action, such as genotoxic drugs, were clustered. Connectivity map analysis further revealed that our gene signature data reflected the modes of action of the respective agents. Together with the database, we developed analysis programs that calculate scores for ranking changes in gene expression and for searching for statistically significant pathways in the Kyoto Encyclopedia of Genes and Genomes database, in order to analyze the datasets more easily. Our database and the analysis programs are available online at our website (http://scads.jfcr.or.jp/db/cs/). Using these systems, we successfully showed that proteasome inhibitors are selectively classified as endoplasmic reticulum stress inducers and induce atypical endoplasmic reticulum stress. Thus, our public-access database and related analysis programs constitute a set of efficient tools to evaluate the mode of action of novel compounds and identify promising anticancer lead compounds. © 2012 Japanese Cancer Association.
An SQL query generator for CLIPS
NASA Technical Reports Server (NTRS)
Snyder, James; Chirica, Laurian
1990-01-01
As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Due to SQL standardization, large amounts of information stored on various computers, potentially at different locations, will be more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. Queries are entered in a CLIPS-like syntax and passed to the query generator, which constructs an SQL query and submits it to the ORACLE RDBMS for execution. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
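A minimal sketch of what such a query generator might do: build an SQL string from a declarative specification and render result rows as CLIPS-style fact assertions. The function names, fact template, and example table are assumptions for illustration, not the ICADS implementation or CLIPS's actual API.

```python
# Hedged sketch of a query generator: declarative spec -> SQL string,
# and result rows -> CLIPS-style (assert ...) forms. Names are invented.

def build_sql(table, fields, conditions=None):
    """Construct a SELECT statement from a simple declarative specification."""
    sql = f"SELECT {', '.join(fields)} FROM {table}"
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql

def rows_to_facts(template, rows):
    """Render result rows as CLIPS-style assertion strings."""
    return [f"(assert ({template} " + " ".join(str(v) for v in row) + "))"
            for row in rows]

sql = build_sql("rooms", ["name", "area"], ["area > 20"])
# sql == "SELECT name, area FROM rooms WHERE area > 20"
facts = rows_to_facts("room", [("lab", 42)])
# facts == ["(assert (room lab 42))"]
```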
Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun
2018-01-01
The goal was to implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis results in real time. Descriptive statistics, time-series analysis and multivariate regression analysis were implemented online on top of the database software, using SQL and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each data component online, with interactive connections to the database; and generates export sheets that can be loaded directly into R, SAS and SPSS. The information system for air pollution and health impact monitoring thus implements online statistical analysis and can provide real-time analysis results to its users.
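The online summary tables described above amount to SQL aggregation over the monitoring database. The following minimal sketch uses an in-memory SQLite table; the table layout and column names are assumptions for illustration, not the actual system's schema.

```python
# Sketch of an online summary table via SQL aggregation over monitoring data.
# The readings table (site, pm25) is an invented stand-in schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (site TEXT, pm25 REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("north", 35.0), ("north", 45.0), ("south", 20.0)])

# Per-site sample count and mean concentration, as a summary table would show:
summary = conn.execute(
    "SELECT site, COUNT(*), AVG(pm25) FROM readings GROUP BY site ORDER BY site"
).fetchall()
# summary == [("north", 2, 40.0), ("south", 1, 20.0)]
```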
Pandolfi, Fanny; Edwards, Sandra A; Maes, Dominiek; Kyriazakis, Ilias
2018-01-01
This study aimed to provide an overview of the interconnections between biosecurity, health, welfare, and performance in commercial pig farms in Great Britain. We collected on-farm data about the level of biosecurity and animal performance in 40 fattening pig farms and 28 breeding pig farms between 2015 and 2016. We identified interconnections between these data, slaughterhouse health indicators, and welfare indicator records in fattening pig farms. After achieving the connections between databases, a secondary data analysis was performed to assess the interconnections between biosecurity, health, welfare, and performance using correlation analysis, principal component analysis, and hierarchical clustering. Although we could connect the different data sources, the final sample size was limited, suggesting room for improvement in database connection to conduct secondary data analyses. The farm biosecurity scores ranged from 40 to 90 out of 100, with internal biosecurity scores being lower than external biosecurity scores. Our analysis suggested several interconnections between health, welfare, and performance. The initial correlation analysis showed that the prevalence of lameness and severe tail lesions was associated with the prevalence of enzootic pneumonia-like lesions and pyaemia, and the prevalence of severe body marks was associated with several disease indicators, including peritonitis and milk spots (r > 0.3; P < 0.05). Higher average daily weight gain (ADG) was associated with lower prevalence of pleurisy (r > 0.3; P < 0.05), but no connection was identified between mortality and health indicators. A subsequent cluster analysis enabled identification of patterns which considered concurrently indicators of health, welfare, and performance. Farms from cluster 1 had lower biosecurity scores, lower ADG, and higher prevalence of several disease and welfare indicators.
Farms from cluster 2 had higher biosecurity scores than cluster 1, but a higher prevalence of pigs requiring hospitalization and lameness, which confirmed the correlation between biosecurity and the prevalence of pigs requiring hospitalization (r > 0.3; P < 0.05). Farms from cluster 3 had higher biosecurity, higher ADG, and lower prevalence for some disease and welfare indicators. The study suggests a smaller impact of biosecurity on issues such as mortality, prevalence of lameness, and pigs requiring hospitalization. The correlations and the identified clusters suggested the importance of animal welfare for the pig industry.
Aging and functional brain networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomasi, D.; Volkow, N.D.
2011-07-11
Aging is associated with changes in human brain anatomy and function and cognitive decline. Recent studies suggest the aging decline of major functional connectivity hubs in the 'default-mode' network (DMN). Aging effects on other networks, however, are largely unknown. We hypothesized that aging would be associated with a decline of short- and long-range functional connectivity density (FCD) hubs in the DMN. To test this hypothesis, we evaluated resting-state data sets corresponding to 913 healthy subjects from a public magnetic resonance imaging database using functional connectivity density mapping (FCDM), a voxelwise and data-driven approach, together with parallel computing. Aging was associated with pronounced long-range FCD decreases in DMN and dorsal attention network (DAN) and with increases in somatosensory and subcortical networks. Aging effects in these networks were stronger for long-range than for short-range FCD and were also detected at the level of the main functional hubs. Females had higher short- and long-range FCD in DMN and lower FCD in the somatosensory network than males, but the gender by age interaction effects were not significant for any of the networks or hubs. These findings suggest that long-range connections may be more vulnerable to aging effects than short-range connections and that, in addition to the DMN, the DAN is also sensitive to aging effects, which could underlie the deterioration of attention processes that occurs with aging.
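The core of functional connectivity density mapping is counting, for each voxel, how many other voxels correlate with it above a threshold. A toy sketch with pure Python (real FCDM operates voxelwise on whole-brain fMRI volumes; the three "voxel" series and the 0.6 threshold are illustrative):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fcd(series, threshold=0.6):
    """Connectivity degree per voxel: how many others it correlates with."""
    return [sum(1 for j, other in enumerate(series)
                if j != i and pearson(ts, other) > threshold)
            for i, ts in enumerate(series)]

# Three toy "voxel" time series; the first two are strongly coupled.
series = [[1, 2, 3, 4, 5], [2, 4, 5, 8, 10], [5, 1, 4, 2, 3]]
print(fcd(series))  # [1, 1, 0]
```

The short- vs long-range distinction in the study additionally weights these counts by anatomical distance between voxels.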
Difference to Inference: teaching logical and statistical reasoning through on-line interactivity.
Malloy, T E
2001-05-01
Difference to Inference is an on-line Java program that simulates theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support learning of the thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades.
3DView: Space physics data visualizer
NASA Astrophysics Data System (ADS)
Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.
2018-04-01
3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representations enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.
Global Picture Archiving and Communication Systems (GPACS): An Overview
1994-04-01
a separate entity in most hospitals because of the integration problems that exist. Eventually these systems should be connected so they appear to the... extremely important to efficient image data transfer include the protocol being used between the two transferring entities. Image data is currently... images, sound or video. The actual database consists of a collection of persistent data that is used by an application system of some entity, in this
Forensic Carving of Network Packets and Associated Data Structures
2011-01-01
establishment of prior connection activity and services used; identification of other systems present on the system's LAN or WLAN; geolocation of the host computer system; and cross-drive analysis. We show that network... Finally, our work in geolocation was assisted by geolocation databases created by companies such as Google (Google Mobile, 2011) and Skyhook
2012-06-01
catechin -0.964; sanguinarine -0.944; 5152487 -0.942; 5213008 -0.942; sulfadoxine -0.897; scopoletin -0.844; oligomycin -0.826; ursodeoxycholic acid... the following 11 drugs (highlighted in Table 1) for further evaluation: decitabine, sulfadoxine, oligomycin, ursodeoxycholic acid, tioguanine, topiramate... amphotericin B -0.725; 5182598 -0.725; tiaprofenic acid -0.72; canavanine -0.71; DL-PPMP -0.706; diflorasone -0.702; sulindac sulfide -0.702
Project Listen Compute Show (LCS) - Marine
2004-02-01
Figure 15. Block diagram of a BB-5. Notice the discrete components between the FPGA and the display connection. All of these are scheduled to be... scheduled to form the core of the next-generation projection product. This architecture is expected to scale to true HDTV resolution of 1920 by 1080... flight schedule obtained from a SABRE database in order to offer on-time status. We have developed more sophisticated mechanisms for dealing with
Automated compound classification using a chemical ontology.
Bobach, Claudia; Böhme, Timo; Laube, Ulf; Püschel, Anett; Weber, Lutz
2012-12-29
Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before.
The proposed structure-based reasoning logic allows chemistry expert knowledge to be translated into a computer-interpretable form, preventing erroneous compound assignments and allowing automatic compound classification. The automated assignment of compounds in databases, compound structure files or text documents to their related ontology classes is possible through integration with a chemical structure search engine. As an application example, the annotation of chemical structure files with a prototypic ontology is demonstrated.
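The OR/NOT/AND rule logic over substructure predicates can be sketched independently of any cheminformatics toolkit. Here a "SMARTS match" is deliberately stubbed as naive substring containment in a SMILES string (a real implementation would use proper substructure matching, e.g. via RDKit); the example class rule is invented:

```python
# Sketch of OR/NOT/AND rule logic over substructure predicates.
# A predicate is stubbed as substring containment in a SMILES string;
# real class definitions use SMARTS substructure matching instead.

def matches(smiles, pattern):
    return pattern in smiles          # stand-in for a SMARTS match

def classify(smiles, rule):
    op = rule[0]
    if op == "smarts":
        return matches(smiles, rule[1])
    if op == "not":
        return not classify(smiles, rule[1])
    if op == "and":
        return all(classify(smiles, r) for r in rule[1:])
    if op == "or":
        return any(classify(smiles, r) for r in rule[1:])
    raise ValueError(op)

# Hypothetical class: "carboxylic acid pattern, but not the ester pattern"
rule = ("and", ("smarts", "C(=O)O"), ("not", ("smarts", "C(=O)OC")))
print(classify("CC(=O)O", rule))    # acetic acid -> True
print(classify("CC(=O)OC", rule))   # methyl acetate -> False
```

Nesting these operators is what gives the ontology its precision: each class node carries a boolean tree of structure predicates rather than a single pattern.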
An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids
NASA Technical Reports Server (NTRS)
Nugent, Richard O.; Tucker, Richard W.
1988-01-01
MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to the testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at issues arising from integration, such as potential mismatches between each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
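The request-by-service-name pattern described above can be sketched as a small broker: providers register services, and clients ask the network for a service, never for a server. All names below (the service name, the payload shape) are invented for illustration:

```python
# Sketch of service-name routing: clients request a service from the network,
# not a server by address. The broker picks whichever system declared it.

class Testbed:
    def __init__(self):
        self.services = {}        # service name -> provider callback

    def declare(self, name, provider):
        """A connecting system declares a service it can perform for others."""
        self.services[name] = provider

    def request(self, name, payload):
        """A client addresses the service, not the providing system."""
        if name not in self.services:
            raise LookupError("no system provides " + name)
        return self.services[name](payload)

bed = Testbed()
bed.declare("threat-assessment", lambda tracks: {"hostile": len(tracks)})
print(bed.request("threat-assessment", ["track1", "track2"]))  # {'hostile': 2}
```

The common RDBMS plays a complementary role: instead of point-to-point replies, systems watch relations of interest and receive update notifications.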
Schomburg, Ida; Chang, Antje; Placzek, Sandra; Söhngen, Carola; Rother, Michael; Lang, Maren; Munaretto, Cornelia; Ulas, Susanne; Stelzer, Michael; Grote, Andreas; Scheer, Maurice; Schomburg, Dietmar
2013-01-01
The BRENDA (BRaunschweig ENzyme DAtabase) enzyme portal (http://www.brenda-enzymes.org) is the main information system of functional biochemical and molecular enzyme data and provides access to seven interconnected databases. BRENDA contains 2.7 million manually annotated data entries on enzyme occurrence, function, kinetics and molecular properties. Each entry is connected to a reference and the source organism. Enzyme ligands are stored with their structures and can be accessed via their names, synonyms or via a structure search. FRENDA (Full Reference ENzyme DAta) and AMENDA (Automatic Mining of ENzyme DAta) are based on text mining methods and represent a complete survey of PubMed abstracts with information on enzymes in different organisms, tissues or organelles. The supplemental database DRENDA provides more than 910 000 new EC number-disease relations in more than 510 000 references from automatic search and a classification of enzyme-disease-related information. KENDA (Kinetic ENzyme DAta), a new amendment, extracts and displays kinetic values from PubMed abstracts. The integration of the EnzymeDetector offers an automatic comparison, evaluation and prediction of enzyme function annotations for prokaryotic genomes. The biochemical reaction database BKM-react contains non-redundant enzyme-catalysed and spontaneous reactions and was developed to facilitate and accelerate the construction of biochemical models.
Balaur, Irina; Saqi, Mansoor; Barat, Ana; Lysenko, Artem; Mazein, Alexander; Rawlings, Christopher J; Ruskin, Heather J; Auffray, Charles
2017-10-01
The development of colorectal cancer (CRC), the third most common cancer type, has been associated with deregulation of cellular mechanisms stimulated by both genetic and epigenetic events. StatEpigen is a manually curated and annotated database, containing information on interdependencies between genetic and epigenetic signals, currently specialized for CRC research. Although StatEpigen provides a well-developed graphical user interface for information retrieval, advanced queries involving associations between multiple concepts can benefit from a more detailed graph representation of the integrated data. This can be achieved by using a graph database (NoSQL) approach. Data were extracted from StatEpigen and imported into our newly developed EpiGeNet, a graph database for storage and querying of conditional relationships between molecular (genetic and epigenetic) events observed at different stages of colorectal oncogenesis. We illustrate the enhanced capability of EpiGeNet for exploration of different queries related to colorectal tumor progression; specifically, we demonstrate the query process for (i) stage-specific molecular events, (ii) the most frequently observed genetic and epigenetic interdependencies in colon adenoma, and (iii) paths connecting key genes reported in CRC and associated events. The EpiGeNet framework offers improved capability for management and visualization of data on molecular events specific to CRC initiation and progression.
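Query (iii), paths connecting key genes and events, is a classic graph-database traversal. A toy sketch in pure Python (EpiGeNet itself runs on a graph database and would express this as a path query; the event nodes and edges below are invented, not curated CRC data):

```python
from collections import deque

# Toy event graph: nodes are hypothetical molecular events, edges are
# observed conditional relationships across successive disease stages.
edges = {
    "APC_mutation": ["KRAS_mutation", "MLH1_methylation"],
    "KRAS_mutation": ["TP53_mutation"],
    "MLH1_methylation": ["TP53_mutation"],
    "TP53_mutation": [],
}

def paths(start, goal):
    """All simple paths start -> goal (a 'paths connecting key genes' query)."""
    queue, found = deque([[start]]), []
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            found.append(path)
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])
    return found

for p in paths("APC_mutation", "TP53_mutation"):
    print(" -> ".join(p))
```

A graph database makes such traversals first-class queries, which is exactly the advantage over a form-based relational interface for multi-concept questions.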
CampusGIS of the University of Cologne: a tool for orientation, navigation, and management
NASA Astrophysics Data System (ADS)
Baaser, U.; Gnyp, M. L.; Hennig, S.; Hoffmeister, D.; Köhn, N.; Laudien, R.; Bareth, G.
2006-10-01
The working group for GIS and Remote Sensing at the Department of Geography at the University of Cologne has established a WebGIS called CampusGIS of the University of Cologne. The overall task of the CampusGIS is the connection of several existing databases at the University of Cologne with spatial data. These existing databases comprise data about staff, buildings, rooms, lectures, and general infrastructure like bus stops etc. This information was not yet linked to its spatial location. Therefore, a GIS-based method was developed to link all the different databases to spatial entities. Following the philosophy of the CampusGIS, an online GUI was programmed which enables users to search for staff, buildings, or institutions. The query results are linked to the GIS database, which allows visualization of the spatial location of the searched entity. This system was established in 2005 and has been operational since early 2006. In this contribution, the focus is on further developments. First results are presented for (i) including routing services, (ii) programming GUIs for mobile devices, and (iii) integrating infrastructure management tools into the CampusGIS. Consequently, the CampusGIS is not only available for spatial information retrieval and orientation. It also serves for on-campus navigation and administrative management.
JEnsembl: a version-aware Java API to Ensembl data systems
Paterson, Trevor; Law, Andy
2012-01-01
Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789
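The version-aware mapping idea at the heart of JEnsembl can be sketched as a lookup from release ranges to schema mappings, so one client can transparently talk to current and archived database instances. The release numbers and mapping names below are invented for illustration, not JEnsembl's actual configuration:

```python
# Sketch of version-aware schema mapping: a configuration table maps each
# database release range to a schema-to-objects mapping, chosen at connect time.

MAPPINGS = [
    # (first_release, last_release, mapping_id); None = still current
    (48, 66, "core-schema-v1"),
    (67, 75, "core-schema-v2"),
    (76, None, "core-schema-v3"),
]

def mapping_for(release):
    """Pick the schema mapping covering a given database release."""
    for lo, hi, mapping in MAPPINGS:
        if release >= lo and (hi is None or release <= hi):
            return mapping
    raise LookupError("no mapping covers release %d" % release)

print(mapping_for(50))   # core-schema-v1
print(mapping_for(80))   # core-schema-v3
```

Because the mapping is data, not code, adding support for a new release means extending the configuration table rather than re-releasing the API, which is what enables the "through time" comparative analyses mentioned above.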
pGenN, a gene normalization tool for plant genes and proteins in scientific literature.
Ding, Ruoyao; Arighi, Cecilia N; Lee, Jung-Youn; Wu, Cathy H; Vijay-Shanker, K
2015-01-01
Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases, and is an essential component of many text mining systems and database curation pipelines. In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (Precision 90.9% and Recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/).
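The first of the three steps, dictionary-based gene mention detection, can be sketched as lexicon lookup over tokens. The dictionary entries below are invented placeholders; pGenN builds its lexicon from protein databases and layers species assignment and normalization heuristics on top:

```python
# Sketch of dictionary-based gene mention detection (step 1 of the pipeline).
# The name -> identifier entries are hypothetical, not pGenN's real lexicon.

GENE_DICT = {
    "pin1": "At1g73590",
    "fca": "At4g16280",
}

def detect_mentions(text):
    """Return (surface form, candidate identifier) pairs found in the text."""
    hits = []
    for token in text.replace(",", " ").split():
        norm = token.lower()
        if norm in GENE_DICT:
            hits.append((token, GENE_DICT[norm]))
    return hits

print(detect_mentions("Expression of PIN1 and FCA was measured."))
# [('PIN1', 'At1g73590'), ('FCA', 'At4g16280')]
```

The hard part that the sketch omits is ambiguity: the same symbol can name genes in many species, which is why the species-assignment step precedes final normalization.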
Exercises in Anatomy, Connectivity, and Morphology using Neuromorpho.org and the Allen Brain Atlas.
Chu, Philip; Peck, Joshua; Brumberg, Joshua C
2015-01-01
Laboratory instruction of neuroscience is often limited by the lack of physical resources and supplies (e.g., brain specimens, dissection kits, physiological equipment). Online databases can serve as supplements to physical labs by providing professionally collected images of brain specimens and their underlying cellular populations with resolution and quality that is extremely difficult to access for strictly pedagogical purposes. We describe a method using two online databases, Neuromorpho.org and the Allen Brain Atlas (ABA), that freely provide access to data from working brain scientists that can be modified for laboratory instruction/exercises. Neuromorpho.org is the first neuronal morphology database that provides qualitative and quantitative data from reconstructed cells analyzed in published scientific reports. The Neuromorpho.org database contains cross-species and multiple neuronal phenotype datasets, which allows for comparative examinations. The ABA provides modules that allow students to study the anatomy of the rodent brain, as well as observe the different cellular phenotypes that exist using histochemical labeling. Using these tools in conjunction, advanced students can ask questions about qualitative and quantitative neuronal morphology, then examine the distribution of the same cell types across the entire brain to gain a full appreciation of the magnitude of the brain's complexity.
A new online database of nuclear electromagnetic moments
NASA Astrophysics Data System (ADS)
Mertzimekis, Theo J.
2017-09-01
Nuclear electromagnetic (EM) moments, i.e., the magnetic dipole and the electric quadrupole moments, provide important information of nuclear structure. As in other types of experimental data available to the community, measurements of nuclear EM moments have been organized systematically in compilations since the dawn of nuclear science. However, the wealth of recent moments measurements with radioactive beams, as well as earlier existing measurements, lack an online, easy-to-access, systematically organized presence to disseminate information to researchers. In addition, available printed compilations suffer a rather long life cycle, being left behind experimental measurements published in journals or elsewhere. A new, online database (
Legehar, Ashenafi; Xhaard, Henri; Ghemtio, Leo
2016-01-01
The disposition of a pharmaceutical compound within an organism, i.e. its Absorption, Distribution, Metabolism, Excretion, Toxicity (ADMET) properties and adverse effects, critically affects late stage failure of drug candidates and has led to the withdrawal of approved drugs. Computational methods are effective approaches to reduce the number of safety issues by analyzing possible links between chemical structures and ADMET or adverse effects, but this is limited by the size, quality, and heterogeneity of the data available from individual sources. Thus, large, clean and integrated databases of approved drug data, associated with fast and efficient predictive tools are desirable early in the drug discovery process. We have built a relational database (IDAAPM) to integrate available approved drug data such as drug approval information, ADMET and adverse effects, chemical structures and molecular descriptors, targets, bioactivity and related references. The database has been coupled with a searchable web interface and modern data analytics platform (KNIME) to allow data access, data transformation, initial analysis and further predictive modeling. Data were extracted from FDA resources and supplemented from other publicly available databases. Currently, the database contains information regarding about 19,226 FDA approval applications for 31,815 products (small molecules and biologics) with their approval history, 2505 active ingredients, together with as many ADMET properties, 1629 molecular structures, 2.5 million adverse effects and 36,963 experimental drug-target bioactivity data. IDAAPM is a unique resource that, in a single relational database, provides detailed information on FDA approved drugs including their ADMET properties and adverse effects, the corresponding targets with bioactivity data, coupled with a data analytics platform. It can be used to perform basic to complex drug-target ADMET or adverse effects analysis and predictive modeling. 
IDAAPM is freely accessible at http://idaapm.helsinki.fi and can be exploited through a KNIME workflow connected to the database. Graphical abstract: FDA approved drug data integration for predictive modeling.
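The kind of drug-to-adverse-effect query IDAAPM supports is a plain relational join. A minimal sketch with Python's built-in sqlite3; the table and column names are assumptions for illustration, not the actual IDAAPM schema:

```python
import sqlite3

# Minimal relational sketch: drugs joined to adverse-effect reports.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE drug (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE adverse_effect (drug_id INTEGER REFERENCES drug(id), effect TEXT);
INSERT INTO drug VALUES (1, 'drug_a'), (2, 'drug_b');
INSERT INTO adverse_effect VALUES (1, 'nausea'), (1, 'headache'), (2, 'nausea');
""")

# Count adverse-effect reports per drug, the shape of a basic safety query.
rows = db.execute("""
    SELECT d.name, COUNT(*) FROM drug d
    JOIN adverse_effect a ON a.drug_id = d.id
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('drug_a', 2), ('drug_b', 1)]
```

In the real system such result sets feed KNIME nodes for transformation and predictive modeling rather than being printed.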
Ologs: a categorical framework for knowledge representation.
Spivak, David I; Kent, Robert E
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.
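In the olog formalism, a type is (roughly) a set and an aspect is a function between types, and paths of aspects compose. A tiny sketch in Python with invented types and facts (this is an informal illustration of composition, not the paper's category-theoretic machinery):

```python
# Sketch of an olog fragment: types as sets, aspects as functions (dicts),
# and path composition of aspects. The example data is invented.
# Olog: "a person" --lives in--> "a city" --is in--> "a country".

lives_in = {"alice": "paris", "bob": "kyoto"}       # aspect: person -> city
located_in = {"paris": "france", "kyoto": "japan"}  # aspect: city -> country

def compose(f, g):
    """Composite aspect: follow f, then g (defined wherever f is)."""
    return {x: g[f[x]] for x in f}

citizen_of = compose(lives_in, located_in)
print(citizen_of["alice"])  # france
```

Declaring that two different paths of aspects must agree (a commutative diagram) is how an olog records facts, which is also what makes cross-olog alignment via functors meaningful.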
Soft computing approach to 3D lung nodule segmentation in CT.
Badura, P; Pietka, E
2014-10-01
This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm - mask generation. Its main goal is to process some specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for specific cases of nodules. The evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database.
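The fuzzy connectedness principle can be sketched compactly: the strength of connectedness between a seed and a voxel is the best path whose weakest link (minimum affinity) is largest, computable with a Dijkstra-style search. The toy voxel graph and affinity values below are invented:

```python
import heapq

# Sketch of fuzzy connectedness: connectedness(seed, v) = max over paths of
# the minimum edge affinity along the path. Toy graph; weights in [0, 1].
affinity = {
    ("s", "a"): 0.9, ("a", "b"): 0.8, ("s", "b"): 0.3, ("b", "t"): 0.7,
}
graph = {}
for (u, v), w in affinity.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def connectedness(seed):
    """Best weakest-link strength from seed to every reachable node."""
    best = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        strength, u = heapq.heappop(heap)
        strength = -strength
        for v, w in graph[u]:
            s = min(strength, w)
            if s > best.get(v, 0.0):
                best[v] = s
                heapq.heappush(heap, (-s, v))
    return best

print(connectedness("s")["t"])  # 0.7, via s-a-b-t (weakest link 0.7)
```

Thresholding these strengths yields the segmented object; the paper's evolutionary step tunes seeds and parameters to shorten exactly this analysis.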
eCOMPAGT – efficient Combination and Management of Phenotypes and Genotypes for Genetic Epidemiology
Schönherr, Sebastian; Weißensteiner, Hansi; Coassin, Stefan; Specht, Günther; Kronenberg, Florian; Brandstätter, Anita
2009-01-01
Background High-throughput genotyping and phenotyping projects of large epidemiological study populations require sophisticated laboratory information management systems. Most epidemiological studies include subject-related personal information, which needs to be handled with care by following data privacy protection guidelines. In addition, genotyping core facilities handling cooperative projects require a straightforward solution to monitor the status and financial resources of the different projects. Description We developed a database system for an efficient combination and management of phenotypes and genotypes (eCOMPAGT) derived from genetic epidemiological studies. eCOMPAGT securely stores and manages genotype and phenotype data and enables different user modes with different rights. Special attention was paid to the import of data from TaqMan and SNPlex genotyping assays. However, the database solution is adjustable to other genotyping systems by programming additional interfaces. Further important features are the scalability of the database and an export interface to statistical software. Conclusion eCOMPAGT can store, administer and connect phenotype data with all kinds of genotype data and is available as a downloadable version at . PMID:19432954
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and for other clinical, pedagogical, and research endeavors.
PubDNA Finder: a web database linking full-text articles to sequences of nucleic acids.
García-Remesal, Miguel; Cuevas, Alejandro; Pérez-Rey, David; Martín, Luis; Anguita, Alberto; de la Iglesia, Diana; de la Calle, Guillermo; Crespo, José; Maojo, Víctor
2010-11-01
PubDNA Finder is an online repository that we have created to link PubMed Central manuscripts to the sequences of nucleic acids appearing in them. It extends the search capabilities provided by PubMed Central by enabling researchers to perform advanced searches involving sequences of nucleic acids. This includes, among other features (i) searching for papers mentioning one or more specific sequences of nucleic acids and (ii) retrieving the genetic sequences appearing in different articles. These additional query capabilities are provided by a searchable index that we created by using the full text of the 176 672 papers available at PubMed Central at the time of writing and the sequences of nucleic acids appearing in them. To automatically extract the genetic sequences occurring in each paper, we used an original method we have developed. The database is updated monthly by automatically connecting to the PubMed Central FTP site to retrieve and index new manuscripts. Users can query the database via the web interface provided. PubDNA Finder can be freely accessed at http://servet.dia.fi.upm.es:8080/pubdnafinder
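The abstract does not disclose how PubDNA Finder's extraction method works, but the general task of pulling candidate nucleic acid sequences out of full text can be illustrated with a simple pattern match: long runs of the bases A, C, G, T, and U (optionally spaced in blocks, as in GenBank-style listings) are very unlikely to occur in ordinary English words. The pattern and length threshold below are assumptions for the sketch, not the published method.

```python
import re

# Candidate sequences: at least 12 bases from {A, C, G, T, U}, allowing an
# optional space after each block of 10 (GenBank-style formatting).
SEQ_PATTERN = re.compile(r"\b(?:[ACGTU]{10}\s?){1,}[ACGTU]{2,}\b")

def extract_sequences(text, min_len=12):
    """Return candidate nucleic acid sequences found in free text."""
    hits = []
    for m in SEQ_PATTERN.finditer(text.upper()):
        seq = re.sub(r"\s", "", m.group(0))  # strip internal spacing
        if len(seq) >= min_len:
            hits.append(seq)
    return hits
```

A real extractor would also need to handle line wrapping, IUPAC ambiguity codes, and false positives such as accession numbers, which this sketch ignores.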
Berthold, Michael R.; Hedrick, Michael P.; Gilson, Michael K.
2015-01-01
Today’s large, public databases of protein–small molecule interaction data are creating important new opportunities for data mining and integration. At the same time, new graphical user interface-based workflow tools offer facile alternatives to custom scripting for informatics and data analysis. Here, we illustrate how the large protein-ligand database BindingDB may be incorporated into KNIME workflows as a step toward the integration of pharmacological data with broader biomolecular analyses. Thus, we describe a collection of KNIME workflows that access BindingDB data via RESTful webservices and, for more intensive queries, via a local distillation of the full BindingDB dataset. We focus in particular on the KNIME implementation of knowledge-based tools to generate informed hypotheses regarding protein targets of bioactive compounds, based on notions of chemical similarity. A number of variants of this basic approach are tested for seven existing drugs with relatively ill-defined therapeutic targets, leading to replication of some previously confirmed results and discovery of new, high-quality hits. Implications for future development are discussed. Database URL: www.bindingdb.org PMID:26384374
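The knowledge-based target hypotheses mentioned above rest on chemical similarity: if a query compound closely resembles a database compound with a known protein target, that target becomes a hypothesis for the query. A standard similarity measure for binary fingerprints is the Tanimoto coefficient, sketched here in plain Python with fingerprints as sets of "on" bit positions. This only illustrates the scoring idea; the actual workflows run as KNIME nodes against BindingDB webservices, which this code does not reproduce.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def rank_targets(query_fp, db):
    """Rank database compounds by similarity to the query.

    db maps compound id -> (fingerprint, annotated protein target).
    Returns (score, compound_id, target) tuples, best first.
    """
    scored = [(tanimoto(query_fp, fp), cid, target)
              for cid, (fp, target) in db.items()]
    return sorted(scored, reverse=True)
```

The targets of the top-ranked neighbours then serve as informed hypotheses for the query compound's pharmacology.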
Ologs: A Categorical Framework for Knowledge Representation
Spivak, David I.; Kent, Robert E.
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research. PMID:22303434
Clinical records anonymisation and text extraction (CRATE): an open-source software system.
Cardinal, Rudolf N
2017-04-26
Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
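The secure mapping of patient identifiers to research pseudonyms described in point (2) can be illustrated with a keyed hash: an HMAC over the identifier yields a stable research identifier that cannot be reversed or recomputed without the secret key. This is a minimal sketch of the principle only; CRATE's actual scheme and its key management are more involved, and the identifier format below is invented.

```python
import hashlib
import hmac

def pseudonym(patient_id: str, secret_key: bytes, length: int = 16) -> str:
    """Map a patient identifier to a stable research pseudonym.

    The same (identifier, key) pair always yields the same pseudonym,
    so records for one patient link together in the research database,
    but the mapping is one-way without the key.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:length]
```

In practice the secret key would be held only by a trusted party, so the research side sees pseudonyms but can never reconstruct identities.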
Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K
1999-01-01
A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new and essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration, owing to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution should be sought within the more general application framework, but appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications that use databases. The appropriate formal model, as well as tools for its mapping to a DBMS, are suggested. Remote users connected via global networks are also considered.
Huang, Haiyan; Liu, Chun-Chi; Zhou, Xianghong Jasmine
2010-04-13
The rapid accumulation of gene expression data has offered unprecedented opportunities to study human diseases. The National Center for Biotechnology Information Gene Expression Omnibus is currently the largest database that systematically documents the genome-wide molecular basis of diseases. However, thus far, this resource has been far from fully utilized. This paper describes the first study to transform public gene expression repositories into an automated disease diagnosis database. Particularly, we have developed a systematic framework, including a two-stage Bayesian learning approach, to achieve the diagnosis of one or multiple diseases for a query expression profile along a hierarchical disease taxonomy. Our approach, including standardizing cross-platform gene expression data and heterogeneous disease annotations, allows analyzing both sources of information in a unified probabilistic system. A high level of overall diagnostic accuracy was shown by cross validation. It was also demonstrated that the power of our method can increase significantly with the continued growth of public gene expression repositories. Finally, we showed how our disease diagnosis system can be used to characterize complex phenotypes and to construct a disease-drug connectivity map.
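The core of the diagnostic framework is probabilistic scoring of a query expression profile against disease models, with results aggregated along the disease taxonomy. The toy sketch below shows only that shape: a naive-Bayes log-likelihood per leaf disease over binarized expression values, and a roll-up of posteriors to parent categories. The paper's two-stage learning and its cross-platform standardization are far richer than this; all model values here are invented.

```python
import math

def leaf_posteriors(profile, models):
    """Posterior over leaf diseases for a binarized expression profile.

    profile: list of 0/1 values (gene down/up)
    models:  disease -> list of P(gene is up) per gene, same order
    """
    logp = {}
    for disease, p_up in models.items():
        logp[disease] = sum(
            math.log(p if x else 1.0 - p) for x, p in zip(profile, p_up)
        )
    # Normalize in a numerically stable way (uniform prior assumed).
    z = max(logp.values())
    weights = {d: math.exp(v - z) for d, v in logp.items()}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

def rollup(posteriors, parent_of):
    """Aggregate leaf posteriors up one level of the disease taxonomy."""
    agg = {}
    for leaf, p in posteriors.items():
        parent = parent_of[leaf]
        agg[parent] = agg.get(parent, 0.0) + p
    return agg
```

Walking such scores down a hierarchical taxonomy lets the system report a diagnosis at whatever level of the hierarchy the evidence supports.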
Australia's continental-scale acoustic tracking database and its automated quality control process
NASA Astrophysics Data System (ADS)
Hoenner, Xavier; Huveneers, Charlie; Steckenreuter, Andre; Simpfendorfer, Colin; Tattersall, Katherine; Jaine, Fabrice; Atkins, Natalia; Babcock, Russ; Brodie, Stephanie; Burgess, Jonathan; Campbell, Hamish; Heupel, Michelle; Pasquer, Benedicte; Proctor, Roger; Taylor, Matthew D.; Udyawer, Vinay; Harcourt, Robert
2018-01-01
Our ability to predict species responses to environmental changes relies on accurate records of animal movement patterns. Continental-scale acoustic telemetry networks are increasingly being established worldwide, producing large volumes of information-rich geospatial data. During the last decade, the Integrated Marine Observing System's Animal Tracking Facility (IMOS ATF) established a permanent array of acoustic receivers around Australia. Simultaneously, IMOS developed a centralised national database to foster collaborative research across the user community and quantify individual behaviour across a broad range of taxa. Here we present the database and quality control procedures developed to collate 49.6 million valid detections from 1891 receiving stations. This dataset consists of detections for 3,777 tags deployed on 117 marine species, with distances travelled ranging from a few to thousands of kilometres. Connectivity between regions was only made possible by the joint contribution of IMOS infrastructure and researcher-funded receivers. This dataset constitutes a valuable resource facilitating meta-analysis of animal movement, distributions, and habitat use, and is important for relating species distribution shifts with environmental covariates.
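One common class of quality-control test for acoustic detections, sketched below, flags a detection when reaching it from the previous one would require an implausible swimming speed. The IMOS procedure combines several such tests; the 10 m/s threshold and the data layout here are illustrative assumptions, not the published rules.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_speed(detections, max_speed_ms=10.0):
    """Flag detections implying an implausible speed from the previous one.

    detections: time-sorted list of (epoch_seconds, lat, lon)
    Returns one boolean per detection (True = suspect).
    """
    flags = [False]  # the first detection has no predecessor to test
    for (t0, la0, lo0), (t1, la1, lo1) in zip(detections, detections[1:]):
        dist_m = haversine_km(la0, lo0, la1, lo1) * 1000.0
        dt = max(t1 - t0, 1e-9)  # guard against duplicate timestamps
        flags.append(dist_m / dt > max_speed_ms)
    return flags
```

Flagged detections would typically be reviewed or excluded rather than silently deleted, so that the archived dataset remains auditable.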
Eze, Chuka; Olawepo, John Olajide; Iwelunmor, Juliet; Sarpong, Daniel F; Ogidi, Amaka Grace; Patel, Dina; Oko, John Okpanachi; Onoka, Chima; Ezeanolue, Echezona Edozie
2018-01-01
Background Community-based strategies to test for HIV, hepatitis B virus (HBV), and sickle cell disease (SCD) have expanded opportunities to increase the proportion of pregnant women who are aware of their diagnosis. In order to use this information to implement evidence-based interventions, these results have to be available to skilled health providers at the point of delivery. Most electronic health platforms are dependent on the availability of reliable Internet connectivity and, thus, have limited use in many rural and resource-limited settings. Objective Here we describe our work on the development and deployment of an integrated mHealth platform that is able to capture medical information, including test results, and encrypt it into a patient-held smartcard that can be read at the point of delivery without the need for an Internet connection. Methods We engaged a team of implementation scientists, public health experts, and information technology specialists in a requirement-gathering process to inform the design of a prototype for a platform that uses smartcard technology, database deployment, and mobile phone app development. Key design decisions focused on usability, scalability, and security. Results We successfully designed an integrated mHealth platform and deployed it in 4 health facilities across Benue State, Nigeria. We developed the Vitira Health platform to store test results of HIV, HBV, and SCD in a database, and securely encrypt the results on a Quick Response code embedded on a smartcard. We used a mobile app to read the contents on the smartcard without the need for Internet connectivity. Conclusions Our findings indicate that it is possible to develop a patient-held smartcard and an mHealth platform that contains vital health information that can be read at the point of delivery using a mobile phone-based app without an Internet connection. 
Trial Registration ClinicalTrials.gov NCT03027258; https://clinicaltrials.gov/ct2/show/NCT03027258 (Archived by WebCite at http://www.webcitation.org/6owR2D0kE) PMID:29335234
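The offline-readable smartcard design above can be illustrated with a payload that is serialized, integrity-protected with a keyed hash, and base64-encoded into a string small enough for a QR code; a reader app holding the shared key can then verify it with no network access. This shows only the offline-verification principle. The Vitira platform's actual encoding, encryption, and key handling are not described in the abstract, and a production system would encrypt the body rather than merely sign it.

```python
import base64
import hashlib
import hmac
import json

def encode_card(results: dict, key: bytes) -> str:
    """Serialize results and prepend an HMAC-SHA256 tag, base64-encoded."""
    body = json.dumps(results, sort_keys=True).encode("utf-8")
    sig = hmac.new(key, body, hashlib.sha256).digest()
    return base64.b64encode(sig + body).decode("ascii")

def decode_card(payload: str, key: bytes) -> dict:
    """Verify and decode a payload entirely offline."""
    raw = base64.b64decode(payload)
    sig, body = raw[:32], raw[32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature check failed")
    return json.loads(body)
```

The size constraint matters in practice: a QR code holds at most a few kilobytes, which is ample for a handful of test results but rules out embedding a full record.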
Seebregts, Christopher; Dane, Pierre; Parsons, Annie Neo; Fogwill, Thomas; Rogers, Debbie; Bekker, Marcha; Shaw, Vincent; Barron, Peter
2018-01-01
MomConnect is a national initiative coordinated by the South African National Department of Health that sends text-based mobile phone messages free of charge to pregnant women who voluntarily register at any public healthcare facility in South Africa. We describe the system design and architecture of the MomConnect technical platform, planned as a nationally scalable and extensible initiative. It uses a health information exchange that can connect any standards-compliant electronic front-end application to any standards-compliant electronic back-end database. The implementation of the MomConnect technical platform, in turn, is a national reference application for electronic interoperability in line with the South African National Health Normative Standards Framework. The use of open content and messaging standards enables the architecture to include any application adhering to the selected standards. Its national implementation at scale demonstrates both the use of this technology and a key objective of global health information systems, which is to achieve implementation scale. The system’s limited clinical information, initially, allowed the architecture to focus on the base standards and profiles for interoperability in a resource-constrained environment with limited connectivity and infrastructural capacity. Maintenance of the system requires mobilisation of national resources. Future work aims to use the standard interfaces to include data from additional applications as well as to extend and interface the framework with other public health information systems in South Africa. The development of this platform has also shown the benefits of interoperability at both an organisational and technical level in South Africa. PMID:29713506
Estrada, Joey Nuñez; Gilreath, Tamika D; Sanchez, Cathia Y; Astor, Ron Avi
2017-01-01
Recent studies have found that military-connected students confront many challenges-such as secondary traumatization-that may stem from a parent's deployment and frequent relocations. It is possible that multiple moves and deployments of family service members are associated with military-connected students' gang membership and involvement with school violence behaviors. In this study, a total of 13,484 students completed the core and military modules of the California Healthy Kids Survey. Logistic regressions examined the odds of a student being a member of a gang given their grade, gender, race/ethnicity, school violence behaviors, military-connectedness, changes in schools, and familial deployments. Results indicated that of the nearly 8% of students sampled who reported being in a gang, those with a parent or sibling currently serving in the military reported a higher prevalence rate of gang membership than students with no military connection. Students who reported being in fights or carrying weapons to school were at least twice more likely to be a gang member than students who reported not having been in fights or carrying weapons. Changing schools 4 or more times in a 5-year period and experiencing at least 1 familial deployment were also associated with an increased likelihood of gang membership. The findings of this study offer incentive to further explicate the gang and school violence experiences of military-connected students. This study supports schools in understanding the characteristics of the military-connected students and families they serve so they can implement appropriate interventions to curb gang and school violence behaviors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Blinowska, Katarzyna J; Rakowski, Franciszek; Kaminski, Maciej; De Vico Fallani, Fabrizio; Del Percio, Claudio; Lizio, Roberta; Babiloni, Claudio
2017-04-01
This exploratory study provided a proof of concept of a new procedure using multivariate electroencephalographic (EEG) topographic markers of cortical connectivity to discriminate normal elderly (Nold) and Alzheimer's disease (AD) individuals. The new procedure was tested on an existing database formed by resting state eyes-closed EEG data (19 exploring electrodes of 10-20 system referenced to linked-ear reference electrodes) recorded in 42 AD patients with dementia (age: 65.9years±8.5 standard deviation, SD) and 42 Nold non-consanguineous caregivers (age: 70.6years±8.5 SD). In this procedure, spectral EEG coherence estimated reciprocal functional connectivity while non-normalized directed transfer function (NDTF) estimated effective connectivity. Principal component analysis and computation of Mahalanobis distance integrated and combined these EEG topographic markers of cortical connectivity. The area under receiver operating curve (AUC) indexed the classification accuracy. A good classification of Nold and AD individuals was obtained by combining the EEG markers derived from NDTF and coherence (AUC=86%, sensitivity=0.85, specificity=0.70). These encouraging results motivate a cross-validation study of the new procedure in age- and education-matched Nold, stable and progressing mild cognitive impairment individuals, and de novo AD patients with dementia. If cross-validated, the new procedure will provide cheap, broadly available, repeatable over time, and entirely non-invasive EEG topographic markers reflecting abnormal cortical connectivity in AD patients diagnosed by direct or indirect measurement of cerebral amyloid β and hyperphosphorylated tau peptides. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
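The classification step combines connectivity markers per subject and measures each subject's Mahalanobis distance from a reference group. The sketch below does this for just two features (say, a coherence index and an NDTF index) in plain Python, inverting the 2x2 covariance matrix directly; the study's PCA step and its full marker set are omitted, and all feature values in the test are made up.

```python
def mean2(points):
    """Mean of a list of 2-feature points."""
    n = len(points)
    return [sum(p[0] for p in points) / n, sum(p[1] for p in points) / n]

def mahalanobis2(x, group):
    """Mahalanobis distance of point x from a reference group (2 features)."""
    m = mean2(group)
    n = len(group)
    # Sample covariance entries for the 2x2 case.
    cxx = sum((g[0] - m[0]) ** 2 for g in group) / (n - 1)
    cyy = sum((g[1] - m[1]) ** 2 for g in group) / (n - 1)
    cxy = sum((g[0] - m[0]) * (g[1] - m[1]) for g in group) / (n - 1)
    # Direct 2x2 matrix inverse.
    det = cxx * cyy - cxy * cxy
    ixx, iyy, ixy = cyy / det, cxx / det, -cxy / det
    dx, dy = x[0] - m[0], x[1] - m[1]
    return (dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy) ** 0.5
```

Thresholding such a distance (e.g., against the Nold group) yields a classifier whose accuracy can then be summarized by an ROC curve, as in the study.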
Performance of conical abutment (Morse Taper) connection implants: a systematic review.
Schmitt, Christian M; Nogueira-Filho, Getulio; Tenenbaum, Howard C; Lai, Jim Yuan; Brito, Carlos; Döring, Hendrik; Nonhoff, Jörg
2014-02-01
In this systematic review, we aimed to compare conical versus nonconical implant-abutment connection systems in terms of their in vitro and in vivo performances. An electronic search was performed using PubMed, Embase, and Medline databases with the logical operators: "dental implant" AND "dental abutment" AND ("conical" OR "taper" OR "cone"). Names of the most common conical implant-abutment connection systems were used as additional key words to detect further data. The search was limited to articles published up to November 2012. Recent publications were also searched manually in order to find any relevant studies that might have been missed using the search criteria noted above. Fifty-two studies met the inclusion criteria and were included in this systematic review. Because the data, methods, and types of implants used were so heterogeneous, meta-analysis was not feasible. In vitro studies indicated that conical and nonconical abutments showed sufficient resistance to maximal bending forces and fatigue loading. However, conical abutments showed superiority in terms of seal performance, microgap formation, torque maintenance, and abutment stability. In vivo studies (human and animal) indicated that conical and nonconical systems are comparable in terms of implant success and survival rates, with less marginal bone loss around conical connection implants in most cases. This review indicates that implant systems using a conical implant-abutment connection provide better results in terms of abutment fit, stability, and seal performance. These design features could lead to improvements over time versus nonconical connection systems. © 2013 Wiley Periodicals, Inc.
Herasevich, Vitaly; Pickering, Brian W; Dong, Yue; Peters, Steve G; Gajic, Ognjen
2010-03-01
To develop and validate an informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Using open-schema data feeds imported from electronic medical records (EMRs), we developed a near-real-time relational database (Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart). Imported data domains included physiologic monitoring, medication orders, laboratory and radiologic investigations, and physician and nursing notes. Open database connectivity supported the use of Boolean combinations of data that allowed authorized users to develop syndrome surveillance, decision support, and reporting (data "sniffers") routines. Random samples of database entries in each category were validated against corresponding independent manual reviews. The Multidisciplinary Epidemiology and Translational Research in Intensive Care Data Mart accommodates, on average, 15,000 admissions to the intensive care unit (ICU) per year and 200,000 vital records per day. Agreement between database entries and manual EMR audits was high for sex, mortality, and use of mechanical ventilation (kappa, 1.0 for all) and for age and laboratory and monitored data (Bland-Altman mean difference +/- SD, 1(0) for all). Agreement was lower for interpreted or calculated variables, such as specific syndrome diagnoses (kappa, 0.5 for acute lung injury), duration of ICU stay (mean difference +/- SD, 0.43+/-0.2), or duration of mechanical ventilation (mean difference +/- SD, 0.2+/-0.9). Extraction of essential ICU data from a hospital EMR into an open, integrative database facilitates process control, reporting, syndrome surveillance, decision support, and outcome research in the ICU.
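A "sniffer" in the sense above is a Boolean combination of EMR data elements evaluated against the near-real-time database. The sketch below shows one such rule over a plain record dictionary, loosely patterned on acute lung injury screening criteria; the field names and thresholds are invented for illustration, and the real Data Mart evaluates its rules over ODBC against relational tables rather than in application code.

```python
def ali_sniffer(record):
    """Flag a possible acute lung injury case from one patient record.

    The rule is a Boolean AND of three illustrative criteria; missing
    fields default to values that fail the test, so incomplete records
    are never flagged.
    """
    qualifying_xray = record.get("cxr_bilateral_infiltrates", False)
    hypoxemia = record.get("pao2_fio2_ratio", 500) <= 300
    on_vent = record.get("mechanical_ventilation", False)
    return qualifying_xray and hypoxemia and on_vent
```

Because the rule is pure Boolean logic over imported data elements, authorized users can compose and revise such sniffers without touching the underlying EMR feed.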
Polyamines in foods: development of a food database
Ali, Mohamed Atiya; Poortvliet, Eric; Strömberg, Roger; Yngve, Agneta
2011-01-01
Background Knowing the levels of polyamines (putrescine, spermidine, and spermine) in different foods is of interest due to the association of these bioactive nutrients to health and diseases. There is a lack of relevant information on their contents in foods. Objective To develop a food polyamine database from published data by which polyamine intake and food contribution to this intake can be estimated, and to determine the levels of polyamines in Swedish dairy products. Design Extensive literature search and laboratory analysis of selected Swedish dairy products. Polyamine contents in foods were collected using an extensive literature search of databases. Polyamines in different types of Swedish dairy products (milk with different fat percentages, yogurt, cheeses, and sour milk) were determined using high performance liquid chromatography (HPLC) equipped with a UV detector. Results Fruits and cheese were the highest sources of putrescine, while vegetables and meat products were found to be rich in spermidine and spermine, respectively. The content of polyamines in cheese varied considerably between studies. In analyzed Swedish dairy products, matured cheese had the highest total polyamine contents with values of 52.3, 1.2, and 2.6 mg/kg for putrescine, spermidine, and spermine, respectively. Low fat milk had higher putrescine and spermidine, 1.2 and 1.0 mg/kg, respectively, than the other types of milk. Conclusions The database aids other researchers in their quest for information regarding polyamine intake from foods. Connecting the polyamine contents in food with the Swedish Food Database allows for estimation of polyamine contents per portion. PMID:21249159
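Linking the polyamine database to portion sizes reduces to simple unit arithmetic: contents are reported per kilogram of food, so a portion's intake is the content times the portion mass in kilograms. A one-line helper makes the conversion explicit (the 30 g cheese portion in the test is an assumed example, not a value from the paper).

```python
def per_portion_mg(content_mg_per_kg, portion_g):
    """Convert a content in mg/kg to mg for a portion given in grams."""
    return content_mg_per_kg * portion_g / 1000.0
```

For example, at the reported 52.3 mg/kg of putrescine in matured cheese, an assumed 30 g portion supplies about 1.57 mg.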
US Astronomers Access to SIMBAD in Strasbourg
NASA Technical Reports Server (NTRS)
Oliversen, Ronald (Technical Monitor); Eichhorn, Guenther
2004-01-01
During the last year the US SIMBAD Gateway Project continued to provide services like user registration to the US users of the SIMBAD database in France. Currently there are over 4500 US users registered. We also provided user support by answering questions from users and handling requests for lost passwords when still necessary. Even though almost all users now access SIMBAD without a password, based on hostnames/IP addresses, there are still some users that need individual passwords. We continued to maintain the mirror copy of the SIMBAD database on a server at SAO. This allows much faster access for the US users. During the past year we again moved this mirror to a faster server to improve access for the US users. We again supported a demonstration of the SIMBAD database at the meeting of the American Astronomical Society in January. We provided support for the demonstration activities at the SIMBAD booth. We paid part of the fee for the SIMBAD demonstration. We continued to improve the cross-linking between the SIMBAD project and the Astrophysics Data System. This cross-linking between these systems is very much appreciated by the users of both the SIMBAD database and the ADS Abstract Service. The mirror of the SIMBAD database at SAO makes this connection faster for the US astronomers. We exchange information between the ADS and SIMBAD on a daily basis. During the last year we also installed a mirror copy of the Vizier system from the CDS, in addition to the SIMBAD mirror.
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
2010-08-21
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
2017-07-01
Reports an error in "Default mode functional connectivity is associated with social functioning in schizophrenia" by Jaclyn M. Fox, Samantha V. Abram, James L. Reilly, Shaun Eack, Morris B. Goldman, John G. Csernansky, Lei Wang and Matthew J. Smith ( Journal of Abnormal Psychology , 2017[May], Vol 126[4], 392-405). In the article, the email address of corresponding author Matthew J. Smith was set as matthewsmith@northwestern.edu. It should have been mattjsmi@umich.edu. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2017-14073-001.) Individuals with schizophrenia display notable deficits in social functioning. Research indicates that neural connectivity within the default mode network (DMN) is related to social cognition and social functioning in healthy and clinical populations. However, the association between DMN connectivity, social cognition, and social functioning has not been studied in schizophrenia. For the present study, the authors used resting-state neuroimaging data to evaluate connectivity between the main DMN hubs (i.e., the medial prefrontal cortex [mPFC] and the posterior cingulate cortex-anterior precuneus [PPC]) in individuals with schizophrenia (n = 28) and controls (n = 32). The authors also examined whether DMN connectivity was associated with social functioning via social attainment (measured by the Specific Levels of Functioning Scale) and social competence (measured by the Social Skills Performance Assessment), and if social cognition mediates the association between DMN connectivity and these measures of social functioning. Results revealed that DMN connectivity did not differ between individuals with schizophrenia and controls. 
However, connectivity between the mPFC and PCC hubs was significantly associated with social competence and social attainment in individuals with schizophrenia but not in controls as reflected by a significant group-by-connectivity interaction. Social cognition did not mediate the association between DMN connectivity and social functioning in individuals with schizophrenia. The findings suggest that fronto-parietal DMN connectivity in particular may be differentially associated with social functioning in schizophrenia and controls. As a result, DMN connectivity may be used as a neuroimaging marker to monitor treatment response or as a potential target for interventions that aim to enhance social functioning in schizophrenia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Building community resilience to violent extremism through genuine partnerships.
Ellis, B Heidi; Abdi, Saida
2017-04-01
What is community resilience in relation to violent extremism, and how can we build it? This article explores strategies to harness community assets that may help prevent youth from embracing violent extremism, drawing from models of community resilience as defined in relation to disaster preparedness. Research suggests that social connection is at the heart of resilient communities, and any strategy to increase community resilience must both harness and enhance existing social connections and endeavor not to damage or diminish them. First, the role of social connection within and between communities is explored. Specifically, the ways in which social bonding and social bridging can diminish the risk of violence, including violent extremism, are examined. Second, research on the role of social connection between communities and institutions or governing bodies (termed social linking) is described. This research is discussed in terms of how the process of government partnering with community members can both provide systems for early intervention in violent extremism and strengthen bonding and bridging social networks, thereby contributing broadly to building community resilience. Finally, community-based participatory research, a model of community engagement and partnership in research, is presented as a road map for building true partnerships and community engagement. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
2012-10-01
use of R packages implemented in Bioconductor. Each dataset was normalized from raw data using the Frozen RMA (fRMA) algorithm . We applied the same...because development of the specific algorithms and fine tuning of the analytic strategy to accomplish this task was not immediately straightforward. We...express firefly luciferase using a retrovirus that encodes a fusion of luciferase and neomycin phosphotransferase (LucNeo), will be implanted and followed
A Comparison of a Relational and Nested-Relational IDEF0 Data Model
1990-03-01
develop, some of the problems inherent in the hierarchical model were circumvented by the more sophisticated network model. Like the hierarchical model...network database consists of a collection of records connected via links. Unlike the hierarchical model, the network model allows arbitrary graphs as...opposed to trees. Thus, each node may have several owners and may, in turn, own any number of other records. The network model provides a mechanism by
RAG-3D: A search tool for RNA 3D substructures
Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; ...
2015-08-24
In this study, to address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding.
RAG-3D: a search tool for RNA 3D substructures
Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar
2015-01-01
To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
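As a rough illustration of the graph-based matching idea described above (not the actual RAG-3D algorithm, which catalogs graphs by their subgraph building blocks), a toy comparison might rank candidate graphs by how closely their vertex-degree multisets match a query's. All graph names here are hypothetical:

```python
from collections import Counter

def degree_signature(edges):
    """Coarse graph signature: the sorted multiset of vertex degrees.
    RAG-3D uses much richer subgraph partitioning; this is only a proxy."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return tuple(sorted(deg.values()))

def rank_matches(query_edges, database):
    """Rank database graphs by overlap between their degree signature
    and the query's (count of shared degree values, higher is better)."""
    q = Counter(degree_signature(query_edges))
    scored = []
    for name, edges in database.items():
        d = Counter(degree_signature(edges))
        scored.append((sum((q & d).values()), name))
    return [name for score, name in sorted(scored, reverse=True)]

# Toy "database" of graphs over secondary-structure elements.
db = {
    "hairpin_chain": [(1, 2), (2, 3)],               # a path
    "three_way_junction": [(1, 2), (1, 3), (1, 4)],  # a star
}
query = [(1, 2), (2, 3)]  # a path, like the hairpin chain
ranked = rank_matches(query, db)
```

A degree multiset is far too weak to distinguish real RNA substructures, but it shows the general shape of a graph-signature search: precompute a signature per catalog entry, then score the query against each.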
Gallagher, Sarah Ivy; Matthews, Debora Candace
2017-01-01
Background: The aim of this systematic review was to evaluate whether patients with gingival recession would benefit from an acellular dermal matrix graft (ADMG) in ways that are comparable to the gold standard of the subepithelial connective tissue graft (SCTG). Materials and Methods: A systematic review and meta-analysis comparing ADMG to SCTG for the treatment of Miller Class I and II recession defects was conducted according to PRISMA guidelines. PubMed, Excerpta Medica Database, and Cochrane Central Register of Controlled Trials databases were searched up to March 2016 for controlled trials with a minimum 6 months' duration. The primary outcome was root coverage; secondary outcomes included attachment level change, keratinized tissue (KT) change, and patient-based outcomes. Both authors independently assessed the quality of each included trial and extracted the relevant data. Results: From 158 potential titles, 17 controlled trials were included in the meta-analysis. There were no differences between ADMG and SCTG for mean root coverage, percent root coverage, and clinical attachment level gain. SCTG was statistically better than ADMG for gain in width of KT (difference −0.43 mm; 95% confidence interval: −0.72, −0.15). Only one study compared patient-based outcomes. Conclusion: This review found that an ADMG would be a suitable root coverage substitute for an SCTG when avoidance of a second surgical site is preferred. PMID:29551861
On the connection of permafrost and debris flow activity in Austria
NASA Astrophysics Data System (ADS)
Huber, Thomas; Kaitna, Roland
2016-04-01
Debris flows represent a severe hazard in alpine regions and typically result from a critical combination of relief energy, water, and sediment. Hence, besides water-related trigger conditions, the availability of abundant sediment is a major control on debris flow activity in alpine regions. Increasing temperatures due to global warming are expected to affect periglacial regions, and thereby the distribution of alpine permafrost and the depth of the active layer, which in turn might lead to increased debris flow activity and increased interference with human interests. In this contribution we assess the importance of permafrost for documented debris flows in the past by connecting the modeled permafrost distribution with a large database of historic debris flows in Austria. The permafrost distribution is estimated based on a published model approach and mainly depends on altitude, relief, and exposition. The database of debris flows includes more than 4000 debris flow events in around 1900 watersheds. We find that 27 % of watersheds experiencing debris flow activity have a modeled permafrost area smaller than 5 % of total area. Around 7 % of the debris flow prone watersheds have a modeled permafrost area larger than 5 %. Interestingly, our first results indicate that watersheds without permafrost experience significantly fewer, but more intense debris flow events than watersheds with modeled permafrost occurrence. Our study aims to contribute to a better understanding of geomorphic activity and the impact of climate change in alpine environments.
Research Projects that use Citizen-Science Data with NGSS
NASA Astrophysics Data System (ADS)
Walker, C. E.
2014-12-01
We are exploring how to utilize the vast Globe at Night database for use in K-12, keeping in mind the guidelines set by the Next Generation Science Standards (NGSS). Areas we are focusing on include data mining, suitable research questions, data sets to compare with Globe at Night, and analysis tools, as well as how best to engage teachers and students in the research. Globe at Night, a citizen-science program on monitoring light pollution, has a database with the potential to connect with factors embedded in NGSS: students could construct explanations and design solutions to light pollution issues, engage in argument from evidence and obtain, evaluate and communicate information. Projects could be multidisciplinary in nature, connecting the effects of light pollution on human health, wildlife, energy consumption and astronomy. We welcome feedback to help determine the direction and emphasis for the next phase of Globe at Night. The presentation will include the nature of the research in the context of NGSS, building on frameworks being developed with the Cornell Ornithology Lab, the National Park Service (NPS) and Fieldscope. NPS staff have the means to make a contiguous map of light pollution across the U.S. Fieldscope staff are developing the analysis tools online. And the Ornithology Lab has citizen-science data on various birds. The Globe at Night citizen-science campaign can be found at www.globeatnight.org.
Tag Content Access Control with Identity-based Key Exchange
NASA Astrophysics Data System (ADS)
Yan, Liang; Rong, Chunming
2010-09-01
Radio Frequency Identification (RFID) technology, used to identify objects and users, has recently been applied in areas such as retail and supply chain management. How to prevent tag content from unauthorized readout is a core RFID privacy issue. The hash-lock access control protocol can make a tag release its content only to a reader that knows the secret key shared between them. However, in order to obtain this shared secret key, the reader needs to communicate with a back end database. In this paper, we propose using an identity-based secret key exchange approach to generate the secret key required by the hash-lock access control protocol. With this approach, not only is a back end database connection no longer needed, but the tag cloning problem can also be eliminated.
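The hash-lock mechanism described in this abstract is simple to sketch. The following toy (all names hypothetical) shows the core idea: the tag stores only a hash of the key (the "metaID") and unlocks for a presented key that hashes to it. The identity-based key exchange is only stood in for here by deriving a key from the two identities and a master secret; the paper's actual proposal uses identity-based cryptography so that no pre-shared secret or database lookup is needed.

```python
import hashlib
import secrets
from typing import Optional

def derive_shared_key(reader_id: str, tag_id: str, master_secret: bytes) -> bytes:
    # Stand-in for identity-based key agreement: both sides derive the same
    # key from the two identities. A real scheme would use identity-based
    # (e.g. pairing-based) cryptography rather than a shared master secret.
    return hashlib.sha256(master_secret + reader_id.encode() + tag_id.encode()).digest()

class Tag:
    def __init__(self, content: str, key: bytes):
        self._content = content
        # Hash-lock: the tag stores only the hash of the key (the "metaID").
        self._meta_id = hashlib.sha256(key).digest()

    def query(self, presented_key: bytes) -> Optional[str]:
        # Release content only if the presented key hashes to the metaID;
        # otherwise the tag stays locked.
        if hashlib.sha256(presented_key).digest() == self._meta_id:
            return self._content
        return None

master = secrets.token_bytes(32)
key = derive_shared_key("reader-1", "tag-42", master)
tag = Tag("pallet 7, retail batch A", key)
```

An authorized reader that derives the same key reads the content; any other key yields nothing, so a passive scanner learns only the metaID.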
Toward an integrated knowledge environment to support modern oncology.
Blake, Patrick M; Decker, David A; Glennon, Timothy M; Liang, Yong Michael; Losko, Sascha; Navin, Nicholas; Suh, K Stephen
2011-01-01
Around the world, teams of researchers continue to develop a wide range of systems to capture, store, and analyze data including treatment, patient outcomes, tumor registries, next-generation sequencing, single-nucleotide polymorphism, copy number, gene expression, drug chemistry, drug safety, and toxicity. Scientists mine, curate, and manually annotate growing mountains of data to produce high-quality databases, while clinical information is aggregated in distant systems. Databases are currently scattered, and relationships between variables coded in disparate datasets are frequently invisible. The challenge is to evolve oncology informatics from a "systems" orientation of standalone platforms and silos into "integrated knowledge environments" that will connect "knowable" research data with patient clinical information. The aim of this article is to review progress toward an integrated knowledge environment to support modern oncology with a focus on supporting scientific discovery and improving cancer care.
[Creativity and psychiatric disorders: recent neuroscientific insights].
Thys, E; Sabbe, B; de Hert, M
2011-01-01
Creativity is an important human characteristic on which many of mankind's achievements are based. For centuries practitioners of various disciplines have deliberated over the possible connection between creativity and psychopathology. Even today the issue is still being investigated, mainly by groups working more or less independently; these range from art experts to psychiatrists and neuroscientists. In this article we bring together the foremost recent neuroscientific findings on the subject. We searched for relevant articles via electronic databases using a broad-band search strategy and concentrating mainly on neuroscientific publications. Our study of relevant articles showed that both the definition and the measurability of creativity are still problematic. Psychometric and psychodiagnostic research supports a link between creativity and the psychopathology of bipolar, schizophrenic and especially schizotypal disorders; the results of imaging techniques are less consistent and genetic research reveals a link between creativity and proneness to psychosis. There seems to be a connection between creativity and psychopathology in the bipolar-schizophrenic continuum. This connection is even more evident within the individual components of creativity and symptom groups of these pathologies. There is a need for accurate definitions, measuring instruments and multidisciplinary collaboration.
2017-01-01
The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the articles, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios. Energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and superseding traditional technologies. PMID:29075430
Gonzalez, Enrique; Peña, Raul; Avila, Alfonso; Vargas-Rosales, Cesar; Munoz-Rodriguez, David
2017-01-01
The continuous technological advances in favor of mHealth represent a key factor in the improvement of medical emergency services. This systematic review presents the identification, study, and classification of the most up-to-date approaches surrounding the deployment of architectures for mHealth. Our review includes 25 articles obtained from databases such as IEEE Xplore, Scopus, SpringerLink, ScienceDirect, and SAGE. This review focused on studies addressing mHealth systems for outdoor emergency situations. In 60% of the articles, the deployment architecture relied on connective infrastructure associated with emergent technologies such as cloud services, distributed services, Internet-of-things, machine-to-machine, vehicular ad hoc networks, and service-oriented architecture. In 40% of the articles, the deployment architecture for mHealth considered traditional connective infrastructure. Only 20% of the studies implemented an energy consumption protocol to extend system lifetime. We concluded that there is a need for more integrated solutions specifically for outdoor scenarios. Energy consumption protocols need to be implemented and evaluated. Emergent connective technologies are redefining information management and superseding traditional technologies.
Maintaining Privacy in Pervasive Computing - Enabling Acceptance of Sensor-based Services
NASA Astrophysics Data System (ADS)
Soppera, A.; Burbridge, T.
During the 1980s, Mark Weiser [1] predicted a world in which computing was so pervasive that devices embedded in the environment could sense their relationship to us and to each other. These tiny ubiquitous devices would continually feed information from the physical world into the information world. Twenty years ago, this vision was the exclusive territory of academic computer scientists and science fiction writers. Today this subject has become of interest to business, government, and society. Governmental authorities exercise their power through the networked environment. Credit card databases maintain our credit history and decide whether we are allowed to rent a house or obtain a loan. Mobile telephones can locate us in real time so that we do not miss calls. Within another 10 years, all sorts of devices will be connected through the network. Our fridge, our food, together with our health information, may all be networked for the purpose of maintaining diet and well-being. The Internet will move from being an infrastructure to connect computers, to being an infrastructure to connect everything [2, 3].
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2018-01-01
The metabolism of individual organisms and biological communities can be viewed as a network of metabolites connected to each other through chemical reactions. In metabolic networks, chemical reactions transform reactants into products, thereby transferring elements between these metabolites. Knowledge of how elements are transferred through reactant/product pairs allows for the identification of primary compound connections through a metabolic network. However, such information is not readily available and is often challenging to obtain for large reaction databases or genome-scale metabolic models. In this study, a new algorithm was developed for automatically predicting the element-transferring reactant/product pairs using the limited information available in the standard representation of metabolic networks. The algorithm demonstrated high efficiency in analyzing large datasets and provided accurate predictions when benchmarked with manually curated data. Applying the algorithm to the visualization of metabolic networks highlighted pathways of primary reactant/product connections and provided an organized view of element-transferring biochemical transformations. The algorithm was implemented as a new function in the open source software package PSAMM in the release v0.30 (https://zhanglab.github.io/psamm/).
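The reactant/product pairing problem in the abstract above can be illustrated with a minimal greedy sketch (this is not the PSAMM algorithm, which uses a more sophisticated prediction over the network representation): parse each formula into element counts, score every reactant/product pair by the atoms they could share, and keep the best-scoring disjoint pairs. Compound names and formulas below are just an example reaction.

```python
import re
from itertools import product

def parse_formula(formula):
    """Count elements in a simple formula string, e.g. 'C6H12O6'."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def predict_pairs(reactants, products):
    """Greedy sketch: score each reactant/product pair by the number of
    atoms they could share, then keep best-scoring disjoint pairs."""
    def overlap(f1, f2):
        a, b = parse_formula(f1), parse_formula(f2)
        return sum(min(a.get(e, 0), b.get(e, 0)) for e in a)
    scored = sorted(
        ((overlap(reactants[r], products[p]), r, p)
         for r, p in product(reactants, products)),
        reverse=True)
    pairs, used_r, used_p = [], set(), set()
    for score, r, p in scored:
        if score > 0 and r not in used_r and p not in used_p:
            pairs.append((r, p))
            used_r.add(r)
            used_p.add(p)
    return pairs

# Hexokinase reaction: glucose + ATP -> glucose-6-phosphate + ADP
reactants = {"glc": "C6H12O6", "atp": "C10H16N5O13P3"}
products = {"g6p": "C6H13O9P", "adp": "C10H15N5O10P2"}
pairs = predict_pairs(reactants, products)
```

Even this crude scoring recovers the primary connections (ATP to ADP, glucose to glucose-6-phosphate), which is the kind of element-transferring pairing the algorithm makes explicit for whole genome-scale models.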
Dajani, Dina R.; Uddin, Lucina Q.
2015-01-01
Lay Abstract There is a general consensus that autism spectrum disorder (ASD) is accompanied by alterations in brain connectivity. Much of the neuroimaging work has focused on assessing long-range connectivity disruptions in ASD. However, evidence from both animal models and postmortem examination of the human brain suggests that local connections may also be disrupted in individuals with ASD. Here we investigated the development of local connectivity across three age cohorts of individuals with ASD and typically developing (TD) individuals. We find that in typical development, children exhibit high levels of local connectivity across the brain, while adolescents exhibit lower levels of local connectivity, similar to adult levels. On the other hand, children with ASD exhibit marginally lower local connectivity than TD children, and adolescents and adults with ASD exhibit levels of local connectivity comparable to that observed in neurotypical individuals. During all developmental stages -- childhood, adolescence, and adulthood -- individuals with ASD exhibited lower local connectivity in brain regions involved in sensory processing and higher local connectivity in brain regions involved in complex information processing. Further, higher local connectivity in ASD corresponded to more severe ASD symptomatology. Thus we demonstrate that local connectivity is disrupted in autism across development, with the most pronounced differences occurring in childhood. Scientific Abstract There is a general consensus that autism spectrum disorder (ASD) is accompanied by alterations in brain connectivity. Much of the neuroimaging work has focused on assessing long-range connectivity disruptions in ASD. However, evidence from both animal models and postmortem examination of the human brain suggests that local connections may also be disrupted in individuals with the disorder. 
Here we investigated how regional homogeneity (ReHo), a measure of similarity of a voxel’s timeseries to its nearest neighbors, varies across age in individuals with ASD and typically developing (TD) individuals using a cross-sectional design. Resting-state fMRI data obtained from a publicly available database were analyzed to determine group differences in ReHo between three age cohorts: children, adolescents, and adults. In typical development, ReHo across the entire brain was higher in children than in adolescents and adults. In contrast, children with ASD exhibited marginally lower ReHo than TD children, while adolescents and adults with ASD exhibited similar levels of local connectivity as age-matched neurotypical individuals. During all developmental stages, individuals with ASD exhibited lower local connectivity in sensory processing brain regions and higher local connectivity in complex information processing regions. Further, higher local connectivity in ASD corresponded to more severe ASD symptomatology. These results demonstrate that local connectivity is disrupted in ASD across development, with the most pronounced differences occurring in childhood. Developmental changes in ReHo do not mirror findings from fMRI studies of long-range connectivity in ASD, pointing to a need for more nuanced accounts of brain connectivity alterations in the disorder. PMID:26058882
Retinal hemorrhage detection by rule-based and machine learning approach.
Di Xiao; Shuang Yu; Vignarajan, Janardhan; Dong An; Mei-Ling Tay-Kearney; Kanagasingam, Yogi
2017-07-01
Robust detection of hemorrhages (HMs) in color fundus images is important in an automatic diabetic retinopathy grading system. Detection of hemorrhages that are close to or connected with retinal blood vessels was found to be challenging. However, most methods have not addressed this problem, even though some of them mention the issue. In this paper, we propose a novel hemorrhage detection method that combines rule-based and machine learning approaches. We focus on improving the detection of hemorrhages that are close to or connected with retinal blood vessels, in addition to detecting independent hemorrhage regions. A preliminary test for detecting HM presence was conducted on images from two databases. We achieved sensitivity and specificity of 93.3% and 88% on one dataset, and 91.9% and 85.6% on the other.
Validation and extraction of molecular-geometry information from small-molecule databases.
Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N
2017-02-01
A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
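The selection criteria (i)-(v) above amount to a conjunction of per-entry checks. A minimal sketch of such a filter, with hypothetical field names and an assumed bond-length tolerance (the article does not state its numeric cutoff):

```python
def passes_validation(entry):
    """Sketch of the article's selection criteria for a database entry.
    Field names and the 0.02 A bond-deviation tolerance are assumptions."""
    checks = [
        entry.get("resolution_angstrom", 99.0) <= 0.84,        # (i) resolution
        entry.get("method") == "single-crystal",               # (ii) experiment type
        all(occ == 1.0 for occ in entry.get("occupancies", [])),  # (ii) full occupancy
        entry.get("max_bond_deviation", 1.0) <= 0.02,          # (iv) bond lengths
        not entry.get("atomic_collisions", True),              # (v) no clashes
    ]
    return all(checks)

good = {"resolution_angstrom": 0.80, "method": "single-crystal",
        "occupancies": [1.0, 1.0], "max_bond_deviation": 0.01,
        "atomic_collisions": False}
bad = dict(good, resolution_angstrom=1.2)  # fails criterion (i)
```

Criterion (iii), valence/connection consistency, would need the molecular graph and is omitted here; the point is only that any suspicious entry fails the conjunction and is dropped before atom typing.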
Cai, Rong-Lin; Shen, Guo-Ming; Wang, Hao; Guan, Yuan-Yuan
2018-01-01
Functional magnetic resonance imaging (fMRI) is a novel method for studying the changes in brain networks due to acupuncture treatment. In recent years, more and more studies have focused on the brain functional connectivity network under acupuncture stimulation. This review offers an overview of the different influences of acupuncture on the brain functional connectivity network from studies using resting-state fMRI. The authors performed a systematic search according to PRISMA guidelines. The PubMed database was searched from January 1, 2006 to December 31, 2016, with restriction to human studies in the English language. Electronic searches were conducted in PubMed using the keywords "acupuncture" and "neuroimaging" or "resting-state fMRI" or "functional connectivity". Selection of included articles, data extraction and methodological quality assessments were each conducted independently by two review authors. Forty-four resting-state fMRI studies were included in this systematic review according to the inclusion criteria. Thirteen studies applied manual acupuncture vs. sham, four studies applied electro-acupuncture vs. sham, two studies also compared transcutaneous electrical acupoint stimulation vs. sham, and nine applied sham acupoints as controls. Nineteen studies, with a total of 574 healthy subjects selected to undergo fMRI, considered only healthy adult volunteers. The brain functional connectivity of the patients showed varying degrees of change. Compared with sham acupuncture, verum acupuncture could increase default mode network and sensorimotor network connectivity with pain-, affective- and memory-related brain areas. Genuine acupuncture showed significantly greater connectivity between the periaqueductal gray, anterior cingulate cortex, left posterior cingulate cortex, right anterior insula, limbic/paralimbic areas and precuneus than sham acupuncture.
Some research has also shown that acupuncture can adjust the limbic-paralimbic-neocortical network, brainstem, cerebellum, subcortical and hippocampal brain areas. It can be presumed that the functional connectivity network is closely related to the mechanism of acupuncture, and that central integration plays a critical role in that mechanism. Copyright © 2017 Shanghai Changhai Hospital. Published by Elsevier B.V. All rights reserved.
Ho, Tung Manh; Nguyen, Ha Viet; Vuong, Thu-Trang; Dam, Quang-Minh; Pham, Hiep-Hung; Vuong, Quan-Hoang
2017-01-01
Background: Collaboration is a common occurrence among Vietnamese scientists; however, insights into Vietnamese scientific collaborations have been scarce. On the other hand, the application of social network analysis in studying science collaboration has gained much attention all over the world. The technique could be employed to explore Vietnam's scientific community. Methods: This paper employs network theory to explore characteristics of a network of 412 Vietnamese social scientists whose papers can be found indexed in the Scopus database. Two basic network measures, density and clustering coefficient, were taken, and the entire network was studied in comparison with two of its largest components. Results: The network's connections are very sparse, with a density of only 0.47%, while the clustering coefficient is very high (58.64%). This suggests an inefficient dissemination of information, knowledge, and expertise in the network. Secondly, the disparity in levels of connection among individuals indicates that the network would easily fall apart if a few highly-connected nodes were removed. Finally, the two largest components of the network were found to differ from the entire network in both measures, and were each led by the most productive and well-connected researchers. Conclusions: High clustering and low density seem to be tied to inefficient dissemination of expertise among Vietnamese social scientists, and consequently low scientific output. Also low in robustness, the network shows the potential of an intellectual elite composed of well-connected, productive, and socially significant individuals.
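The two measures this study relies on are straightforward to compute directly. A minimal sketch for an undirected graph stored as an adjacency dict (the toy co-authorship network below is illustrative, not the study's data):

```python
from itertools import combinations

def density(adj):
    """Network density: fraction of possible undirected edges present,
    2E / (N * (N - 1))."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return 2 * edges / (n * (n - 1))

def avg_clustering(adj):
    """Mean local clustering coefficient: for each node, the fraction of
    its neighbor pairs that are themselves connected (0 for degree < 2)."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        total += links / (k * (k - 1) / 2)
    return total / len(adj)

# Toy co-authorship network: one tight triangle plus an isolated pair.
adj = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"},
    "D": {"E"}, "E": {"D"},
}
```

On this toy graph density is 0.4 while average clustering is 0.6, the same qualitative pattern the paper reports at scale: dense local cliques embedded in a sparse overall network.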
CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises
NASA Astrophysics Data System (ADS)
Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.
2011-12-01
JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information, such as cruise ID, name of vessel and research theme, and diving information, such as dive number, name of submersible and position of diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel® spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of each cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort, and consequently asynchronous metadata across the distribution websites, because administrators enter metadata into each website manually. The other is that data types and representations of metadata differ from one website to another. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be propagated from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility to accommodate any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software.
Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to the databases of the several distribution websites is automatically processed using a converter defined in the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
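The daily differential uptake step can be pictured as comparing record identifiers and modification timestamps between source and target stores, then pushing only the changed records through a per-site converter. A hedged sketch; `Record`, the cruise IDs, and the converter are invented for illustration and are not CMO's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    cruise_id: str
    updated: str   # ISO timestamp of last modification

def diff_records(source, target):
    """Return records that are new, or newer in `source` than in `target`."""
    seen = {r.cruise_id: r.updated for r in target}
    return [r for r in source
            if r.cruise_id not in seen or r.updated > seen[r.cruise_id]]

def convert_for_site(record, site):
    """Per-site converter stub: each distribution website gets its own shape."""
    return {"site": site, "id": record.cruise_id, "updated": record.updated}

# Hypothetical state: the downstream copy of KR11-01 is stale,
# and KR11-02 has never been pushed at all.
source = [Record("KR11-01", "2011-06-01"), Record("KR11-02", "2011-07-15")]
target = [Record("KR11-01", "2011-05-20")]
changed = diff_records(source, target)
payloads = [convert_for_site(r, "GANSEKI") for r in changed]
```

Only the two changed records are converted and pushed; untouched records generate no downstream traffic, which is the point of a differential (rather than full) daily sync.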
A graph-based approach to construct target-focused libraries for virtual screening.
Naderi, Misagh; Alvin, Chris; Ding, Yun; Mukhopadhyay, Supratik; Brylinski, Michal
2016-01-01
Due to exorbitant costs of high-throughput screening, many drug discovery projects commonly employ inexpensive virtual screening to support experimental efforts. However, the vast majority of compounds in widely used screening libraries, such as the ZINC database, will have a very low probability to exhibit the desired bioactivity for a given protein. Although combinatorial chemistry methods can be used to augment existing compound libraries with novel drug-like compounds, the broad chemical space is often too large to be explored. Consequently, the trend in library design has shifted to produce screening collections specifically tailored to modulate the function of a particular target or a protein family. Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks tracking their atomic connectivity. On this account, we developed eSynth, an exhaustive graph-based search algorithm to computationally synthesize new compounds by reconnecting these building blocks following their connectivity patterns. We conducted a series of benchmarking calculations against the Directory of Useful Decoys, Enhanced database. First, in a self-benchmarking test, the correctness of the algorithm is validated with the objective to recover a molecule from its building blocks. Encouragingly, eSynth can efficiently rebuild more than 80 % of active molecules from their fragment components. Next, the capability to discover novel scaffolds is assessed in a cross-benchmarking test, where eSynth successfully reconstructed 40 % of the target molecules using fragments extracted from chemically distinct compounds. Despite an enormous chemical space to be explored, eSynth is computationally efficient; half of the molecules are rebuilt in less than a second, whereas 90 % take only about a minute to be generated. eSynth can successfully reconstruct chemically feasible molecules from molecular fragments. 
Furthermore, in a procedure mimicking the real application, where one expects to discover novel compounds based on a small set of already developed bioactives, eSynth is capable of generating diverse collections of molecules with the desired activity profiles. Thus, we are very optimistic that our effort will contribute to targeted drug discovery. eSynth is freely available to the academic community at www.brylinski.org/content/molecular-synthesis. Graphical abstract: Assuming that organic compounds are composed of sets of rigid fragments connected by flexible linkers, a molecule can be decomposed into its building blocks tracking their atomic connectivity. Here, we developed eSynth, an automated method to synthesize new compounds by reconnecting these building blocks following the connectivity patterns via an exhaustive graph-based search algorithm. eSynth opens up a possibility to rapidly construct virtual screening libraries for targeted drug discovery.
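The core idea, reconnecting building blocks only along connectivity patterns observed in known compounds, can be illustrated with a toy exhaustive search. The fragment types, observed links, and chain-only assembly below are invented simplifications of eSynth's graph-based algorithm:

```python
from itertools import permutations

# Connections between fragment *types* observed in the source compounds;
# assembly may only join fragments along one of these links.
observed_links = {("ring", "linker"), ("linker", "ring"),
                  ("linker", "amide"), ("amide", "linker")}

# Three building blocks with invented type labels.
fragments = {"F1": "ring", "F2": "linker", "F3": "amide"}

def valid_chains(frags, links):
    """Exhaustively enumerate orderings whose adjacent pairs were observed."""
    out = []
    for order in permutations(frags):
        pairs = zip(order, order[1:])
        if all((frags[a], frags[b]) in links for a, b in pairs):
            out.append(order)
    return out

chains = valid_chains(fragments, observed_links)
```

Of the six possible orderings of three fragments, only the two that route every junction through an observed link survive, which mirrors how connectivity patterns prune an otherwise enormous combinatorial space.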
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr
2016-07-01
RTS2 is an open source observatory control system. In development since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to an SQL database. Running on all continents of the planet, it has accumulated a long record of controlling partial or complete observatory setups.
Thermodynamic Functions of Yttrium Trifluoride and Its Dimer in the Gas Phase
NASA Astrophysics Data System (ADS)
Osina, E. L.; Kovtun, D. M.
2018-05-01
New calculations of the functions for YF3 and Y2F6 in the gas phase using quantum-chemical calculations by MP2 and CCSD(T) methods are performed in connection with the ongoing work on obtaining reliable thermodynamic data of yttrium halides. The obtained values are entered in the database of the IVTANTERMO software complex. Equations approximating the temperature dependence of the reduced Gibbs energy in the T = 298.15-6000 K range of temperatures are presented.
Tools for the Conceptual Design and Engineering Analysis of Micro Air Vehicles
2009-03-01
problem with two DC motors with propellers, mounted on each wing tip and oriented such that the thrust vectors had an angular separation of 180...ElectriCalc or MotoCalc Database • Script Program (MC) In determination of the components to be integrated into MC, the R/C world was explored since the tools...Excel, ProE, QuickWrap and Script . Importing outside applications can be achieved by direct interaction with MC or through analysis server connections [11
The future of fMRI in cognitive neuroscience.
Poldrack, Russell A
2012-08-15
Over the last 20 years, fMRI has revolutionized cognitive neuroscience. Here I outline a vision for what the next 20 years of fMRI in cognitive neuroscience might look like. Some developments that I hope for include increased methodological rigor, an increasing focus on connectivity and pattern analysis as opposed to "blobology", a greater focus on selective inference powered by open databases, and increased use of ontologies and computational models to describe underlying processes. Copyright © 2011 Elsevier Inc. All rights reserved.
ENFIN a network to enhance integrative systems biology.
Kahlem, Pascal; Birney, Ewan
2007-12-01
Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing both an adapted infrastructure connecting databases and platforms, enabling the generation of new bioinformatics tools, and the experimental validation of computational predictions. We give an overview of the projects tackled within ENFIN and discuss the challenges associated with integration for systems biology.
NASA Technical Reports Server (NTRS)
Hess, Elizabeth L.; Wallace-Robinson, Janice; Dickson, Katherine J.; Powers, Janet V.
1992-01-01
A 10-year cumulative bibliography of publications resulting from research supported by the musculoskeletal discipline of the space physiology and countermeasures program of NASA's Life Sciences Division is provided. Primary subjects are bone, mineral, and connective tissue, and muscle. General physiology references are also included. Principal investigators whose research tasks resulted in publication are identified by an asterisk. Publications are identified by a record number corresponding with their entry in the life sciences bibliographic database, maintained by the George Washington University.
The Genomic, Epigenomic, and Psychosocial Characteristics of Long-Term Survivors of Ovarian Cancer
2016-12-01
who will consent them and collect their clinical data, quality of life survey , and tumor samples. The program manager will also connect them to Lari...that will be enrolled in the study during the Phase II of this award. The database contains a link to the new quality of life survey so that the... survey to which Dr. Lari Wenzel has access. Attached to this submission are: the advertizing material designed by the advocates, the clinical
CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.
Hallin, Peter F; Ussery, David W
2004-12-12
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and the results of bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution is a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4 to present dynamic web content to users outside the center. This solution is tightly fitted to the existing server infrastructure, and the approach proposed here can perhaps serve as a template for other research groups facing similar database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/.
This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
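One common way to realize the "dynamic database layout" described above is a property table keyed by (genome, property), so that a newly added analysis inserts rows rather than requiring schema changes. A hedged sketch using Python's sqlite3 in place of the MySQL backend; genome names and values are illustrative only:

```python
import sqlite3

# Entity-attribute-value style table: one row per computed property.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE genome_property (
                 genome   TEXT NOT NULL,
                 property TEXT NOT NULL,
                 value    TEXT NOT NULL,
                 PRIMARY KEY (genome, property))""")
rows = [("E. coli K-12", "at_content", "0.49"),
        ("E. coli K-12", "trna_count", "86"),
        ("E. coli K-12", "curvature_mean", "0.18")]   # values illustrative
con.executemany("INSERT INTO genome_property VALUES (?,?,?)", rows)

# A brand-new analysis later needs no ALTER TABLE, just new rows.
con.execute("INSERT INTO genome_property VALUES (?,?,?)",
            ("E. coli K-12", "stacking_energy_mean", "-8.3"))
n = con.execute("SELECT COUNT(*) FROM genome_property").fetchone()[0]
```

The trade-off of this layout is that per-property typing and validation move into the application code (here, the Perl synchronization package the abstract describes).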
Graph theoretical model of a sensorimotor connectome in zebrafish.
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
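The lesion analysis described above reduces to measuring shortest sensory-to-motor paths before and after removing a hub node. A toy sketch with breadth-first search; the five-node graph and the hub "M" (standing in for the Mauthner neuron) are invented, not the 2,616-neuron model:

```python
from collections import deque

# Directed toy circuit: sensory node "S" reaches motor node "Mo" either
# through the hub "M" (short path) or through interneurons I -> I2 (detour).
edges = {"S": ["M", "I"], "M": ["Mo"], "I": ["I2"], "I2": ["Mo"], "Mo": []}

def shortest_path_len(adj, start, goal):
    """Breadth-first search; returns hop count, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

before = shortest_path_len(edges, "S", "Mo")
# "Lesion": delete the hub node and every edge into it.
lesioned = {k: [v for v in vs if v != "M"] for k, vs in edges.items() if k != "M"}
after = shortest_path_len(lesioned, "S", "Mo")
```

Removing the hub lengthens the sensory-to-motor route from two hops to three, a miniature version of the path-count drop the model reports after the Mauthner lesion.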
[A relational database to store Poison Centers calls].
Barelli, Alessandro; Biondi, Immacolata; Tafani, Chiara; Pellegrini, Aristide; Soave, Maurizio; Gaspari, Rita; Annetta, Maria Giuseppina
2006-01-01
Italian Poison Centers answer approximately 100,000 calls per year. Potentially, this activity is a huge source of data for toxicovigilance and for syndromic surveillance. During the last decade, surveillance systems for early detection of outbreaks have drawn the attention of public health institutions due to the threat of terrorism and high-profile disease outbreaks. Poisoning surveillance needs the ongoing, systematic collection, analysis, interpretation, and dissemination of harmonised data about poisonings from all Poison Centers, for use in public health action to reduce morbidity and mortality and to improve health. The entity-relationship model for a Poison Center relational database is extremely complex and has not been studied in detail. For this reason, data collection is not harmonised among Italian Poison Centers. Entities are recognizable concepts, either concrete or abstract, such as patients and poisons, or events which have relevance to the database, such as calls. Connectivity and cardinality of relationships are complex as well. A one-to-many relationship exists between calls and patients: for one instance of the entity calls, there are zero, one, or many instances of the entity patients. At the same time, a one-to-many relationship exists between patients and poisons: for one instance of the entity patients, there are zero, one, or many instances of the entity poisons. This paper shows a relational model for a poison center database which allows the harmonised data collection of poison center calls.
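The one-to-many chain described above (calls to patients to poisons) can be sketched as a minimal relational schema. This uses Python's sqlite3 for illustration; the table and column names are invented, and the real Poison Center model is far richer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE calls    (call_id INTEGER PRIMARY KEY, received_at TEXT);
  CREATE TABLE patients (patient_id INTEGER PRIMARY KEY,
                         call_id INTEGER REFERENCES calls(call_id),
                         age INTEGER);
  CREATE TABLE poisons  (poison_id INTEGER PRIMARY KEY,
                         patient_id INTEGER REFERENCES patients(patient_id),
                         substance TEXT);
""")
# One call about two patients; the second patient ingested two substances.
con.execute("INSERT INTO calls VALUES (1, '2006-03-01T10:15')")
con.executemany("INSERT INTO patients VALUES (?,?,?)",
                [(1, 1, 34), (2, 1, 4)])
con.executemany("INSERT INTO poisons VALUES (?,?,?)",
                [(1, 1, 'paracetamol'), (2, 2, 'bleach'), (3, 2, 'ammonia')])
n = con.execute("""SELECT COUNT(*) FROM calls c
                   JOIN patients pa ON pa.call_id = c.call_id
                   JOIN poisons  po ON po.patient_id = pa.patient_id""").fetchone()[0]
```

Each foreign key encodes one of the zero/one/many relationships the abstract describes, and the join flattens the chain back out for surveillance-style counting.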
The YeastGenome app: the Saccharomyces Genome Database at your fingertips.
Wong, Edith D; Karra, Kalpana; Hitz, Benjamin C; Hong, Eurie L; Cherry, J Michael
2013-01-01
The Saccharomyces Genome Database (SGD) is a scientific database that provides researchers with high-quality curated data about the genes and gene products of Saccharomyces cerevisiae. To provide instant and easy access to this information on mobile devices, we have developed YeastGenome, a native application for the Apple iPhone and iPad. YeastGenome can be used to quickly find basic information about S. cerevisiae genes and chromosomal features regardless of internet connectivity. With or without network access, you can view basic information and Gene Ontology annotations about a gene of interest by searching gene names and gene descriptions or by browsing the database within the app to find the gene of interest. With internet access, the app provides more detailed information about the gene, including mutant phenotypes, references and protein and genetic interactions, as well as provides hyperlinks to retrieve detailed information by showing SGD pages and views of the genome browser. SGD provides online help describing basic ways to navigate the mobile version of SGD, highlights key features and answers frequently asked questions related to the app. The app is available from iTunes (http://itunes.com/apps/yeastgenome). The YeastGenome app is provided freely as a service to our community, as part of SGD's mission to provide free and open access to all its data and annotations.
pGenN, a Gene Normalization Tool for Plant Genes and Proteins in Scientific Literature
Ding, Ruoyao; Arighi, Cecilia N.; Lee, Jung-Youn; Wu, Cathy H.; Vijay-Shanker, K.
2015-01-01
Background Automatically detecting gene/protein names in the literature and connecting them to database records, also known as gene normalization, provides a means to structure the information buried in free-text literature. Gene normalization is critical for improving the coverage of annotation in the databases, and is an essential component of many text mining systems and database curation pipelines. Methods In this manuscript, we describe a gene normalization system specifically tailored for plant species, called pGenN (pivot-based Gene Normalization). The system consists of three steps: dictionary-based gene mention detection, species assignment, and intra-species normalization. We have developed new heuristics to improve each of these phases. Results We evaluated the performance of pGenN on an in-house expertly annotated corpus consisting of 104 plant-relevant abstracts. Our system achieved an F-value of 88.9% (Precision 90.9% and Recall 87.2%) on this corpus, outperforming state-of-the-art systems presented in BioCreative III. We have processed over 440,000 plant-related Medline abstracts using pGenN. The gene normalization results are stored in a local database for direct query from the pGenN web interface (proteininformationresource.org/pgenn/). The annotated literature corpus is also publicly available through the PIR text mining portal (proteininformationresource.org/iprolink/). PMID:26258475
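The three-step pipeline above (dictionary-based mention detection, species assignment, intra-species normalization) can be caricatured in a few lines. The dictionary entries, species names, and gene IDs below are invented for illustration and are not pGenN's actual lexicon or heuristics:

```python
# Toy lexicon: one gene symbol mapped to per-species database identifiers.
gene_dict = {"pin1": {"Arabidopsis thaliana": "AT1G73590",
                      "Oryza sativa": "Os02g0743400"}}

def normalize(text, species_hint):
    """Detect dictionary mentions, assign a species, return DB records."""
    mentions = []
    for token in text.lower().replace(".", " ").split():
        if token in gene_dict:
            ids = gene_dict[token]
            # Species assignment: prefer the species suggested by context,
            # fall back to an arbitrary known species otherwise.
            species = species_hint if species_hint in ids else next(iter(ids))
            mentions.append((token, species, ids[species]))
    return mentions

hits = normalize("PIN1 regulates auxin transport.", "Arabidopsis thaliana")
```

Real systems replace each stub with substantial machinery (fuzzy matching and filtering for detection, contextual cues for species assignment, disambiguation for normalization), but the data flow is the same.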
Wang, Lizhu; Riseng, Catherine M.; Mason, Lacey; Werhrly, Kevin; Rutherford, Edward; McKenna, James E.; Castiglione, Chris; Johnson, Lucinda B.; Infante, Dana M.; Sowa, Scott P.; Robertson, Mike; Schaeffer, Jeff; Khoury, Mary; Gaiot, John; Hollenhurst, Tom; Brooks, Colin N.; Coscarelli, Mark
2015-01-01
Managing the world's largest and most complex freshwater ecosystem, the Laurentian Great Lakes, requires a spatially hierarchical basin-wide database of ecological and socioeconomic information that is comparable across the region. To meet such a need, we developed a spatial classification framework and database — Great Lakes Aquatic Habitat Framework (GLAHF). GLAHF consists of catchments, coastal terrestrial, coastal margin, nearshore, and offshore zones that encompass the entire Great Lakes Basin. The catchments captured in the database as river pour points or coastline segments are attributed with data known to influence physicochemical and biological characteristics of the lakes from the catchments. The coastal terrestrial zone consists of 30-m grid cells attributed with data from the terrestrial region that has direct connection with the lakes. The coastal margin and nearshore zones consist of 30-m grid cells attributed with data describing the coastline conditions, coastal human disturbances, and moderately to highly variable physicochemical and biological characteristics. The offshore zone consists of 1.8-km grid cells attributed with data that are spatially less variable compared with the other aquatic zones. These spatial classification zones and their associated data are nested within lake sub-basins and political boundaries and allow the synthesis of information from grid cells to classification zones, within and among political boundaries, lake sub-basins, Great Lakes, or within the entire Great Lakes Basin. This spatially structured database could help the development of basin-wide management plans, prioritize locations for funding and specific management actions, track protection and restoration progress, and conduct research for science-based decision making.
TransAtlasDB: an integrated database connecting expression data, metadata and variants
Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J
2018-01-01
Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The constraints on accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses, thus an innovative storage solution that addresses these limitations (hard disk storage requirements, efficiency and reproducibility) is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, and also methods of interacting with the database, either via the command-line data management workflows, written in Perl, with useful functionalities that simplify the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
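The hybrid idea, relational tables for queryable metadata plus document storage for bulky per-sample results, can be sketched with sqlite3 holding JSON documents in place of TransAtlasDB's actual NoSQL store. All table names and expression values here are illustrative:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
# Relational side: small, structured, filterable metadata.
con.execute("CREATE TABLE samples (sample_id TEXT PRIMARY KEY, tissue TEXT)")
# Document side: one opaque blob per sample for the bulky results.
con.execute("CREATE TABLE docs (sample_id TEXT PRIMARY KEY, body TEXT)")

expression = {"GAPDH": 1523.4, "ACTB": 2210.7}   # illustrative values
con.execute("INSERT INTO samples VALUES (?, ?)", ("S1", "liver"))
con.execute("INSERT INTO docs VALUES (?, ?)", ("S1", json.dumps(expression)))

# Query pattern: relational filter first, then open the matching document.
row = con.execute("""SELECT d.body FROM samples s
                     JOIN docs d ON d.sample_id = s.sample_id
                     WHERE s.tissue = 'liver'""").fetchone()
gapdh = json.loads(row[0])["GAPDH"]
```

Keeping the wide, per-gene payload out of the relational schema is what lets the metadata tables stay small and fast while the result documents grow with every analysis.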
Andrejasic, Miha; Praaenikar, Jure; Turk, Dusan
2008-11-01
The number and variety of macromolecular structures in complex with 'hetero' ligands is growing. The need for rapid delivery of correct geometric parameters for their refinement, which is often crucial for understanding the biological relevance of the structure, is growing correspondingly. The current standard for describing protein structures is the Engh-Huber parameter set. It is an expert data set resulting from selection and analysis of the crystal structures gathered in the Cambridge Structural Database (CSD). Clearly, such a manual approach cannot be applied to the vast and ever-growing number of chemical compounds. Therefore, a database, named PURY, of geometric parameters of chemical compounds has been developed, together with a server that accesses it. PURY is a compilation of the whole CSD. It contains lists of atom classes and bonds connecting them, as well as angle, chirality, planarity and conformation parameters. The current compilation is based on CSD 5.28 and contains 1978 atom classes and 32,702 bonding, 237,068 angle, 201,860 dihedral and 64,193 improper geometric restraints. Analysis has confirmed that the restraints from the PURY database are suitable for use in macromolecular crystal structure refinement and should be of value to the crystallographic community. The database can be accessed through the web server http://pury.ijs.si/, which creates topology and parameter files from deposited coordinates in suitable forms for the refinement programs MAIN, CNS and REFMAC. In the near future, the server will move to the CSD website http://pury.ccdc.cam.ac.uk/.
Statistical organelle dissection of Arabidopsis guard cells using image database LIPS.
Higaki, Takumi; Kutsuna, Natsumaro; Hosokawa, Yoichiroh; Akita, Kae; Ebine, Kazuo; Ueda, Takashi; Kondo, Noriaki; Hasezawa, Seiichiro
2012-01-01
To comprehensively grasp cell biological events in plant stomatal movement, we have captured microscopic images of guard cells with various organelles markers. The 28,530 serial optical sections of 930 pairs of Arabidopsis guard cells have been released as a new image database, named Live Images of Plant Stomata (LIPS). We visualized the average organellar distributions in guard cells using probabilistic mapping and image clustering techniques. The results indicated that actin microfilaments and endoplasmic reticulum (ER) are mainly localized to the dorsal side and connection regions of guard cells. Subtractive images of open and closed stomata showed distribution changes in intracellular structures, including the ER, during stomatal movement. Time-lapse imaging showed that similar ER distribution changes occurred during stomatal opening induced by light irradiation or femtosecond laser shots on neighboring epidermal cells, indicating that our image analysis approach has identified a novel ER relocation in stomatal opening.
Molecular Interaction Map of the Mammalian Cell Cycle Control and DNA Repair Systems
Kohn, Kurt W.
1999-01-01
Eventually to understand the integrated function of the cell cycle regulatory network, we must organize the known interactions in the form of a diagram, map, and/or database. A diagram convention was designed capable of unambiguous representation of networks containing multiprotein complexes, protein modifications, and enzymes that are substrates of other enzymes. To facilitate linkage to a database, each molecular species is symbolically represented only once in each diagram. Molecular species can be located on the map by means of indexed grid coordinates. Each interaction is referenced to an annotation list where pertinent information and references can be found. Parts of the network are grouped into functional subsystems. The map shows how multiprotein complexes could assemble and function at gene promoter sites and at sites of DNA damage. It also portrays the richness of connections between the p53-Mdm2 subsystem and other parts of the network. PMID:10436023
A Mobile Food Record For Integrated Dietary Assessment*
Ahmad, Ziad; Kerr, Deborah A.; Bosch, Marc; Boushey, Carol J.; Delp, Edward J.; Khanna, Nitin; Zhu, Fengqing
2017-01-01
This paper presents an integrated dietary assessment system based on food image analysis that uses mobile devices or smartphones. We describe two components of our integrated system: a mobile application and an image-based food nutrient database that is connected to the mobile application. An easy-to-use mobile application user interface is described that was designed based on user preferences as well as the requirements of the image analysis methods. The user interface is validated by user feedback collected from several studies. Food nutrient and image databases are also described which facilitate image-based dietary assessment and enable dietitians and other healthcare professionals to monitor patients' dietary intake in real time. The system has been tested and validated in several user studies involving more than 500 users who took more than 60,000 food images under controlled and community-dwelling conditions. PMID:28691119
ENFIN--A European network for integrative systems biology.
Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan
2009-11-01
Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, enabling both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on questions dedicated to systems biology in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods and a novel platform for protein function analysis, FuncNet.
Linkage disequilibrium matches forensic genetic records to disjoint genomic marker sets.
Edge, Michael D; Algee-Hewitt, Bridget F B; Pemberton, Trevor J; Li, Jun Z; Rosenberg, Noah A
2017-05-30
Combining genotypes across datasets is central in facilitating advances in genetics. Data aggregation efforts often face the challenge of record matching: the identification of dataset entries that represent the same individual. We show that records can be matched across genotype datasets that have no shared markers, based on linkage disequilibrium between loci appearing in the different datasets. Using two datasets for the same 872 people, one with 642,563 genome-wide SNPs and the other with 13 short tandem repeats (STRs) used in forensic applications, we find that 90-98% of forensic STR records can be connected to corresponding SNP records and vice versa. Accuracy increases to 99-100% when ∼30 STRs are used. Our method expands the potential of data aggregation, but it also suggests privacy risks intrinsic in the maintenance of databases containing even small numbers of markers, including databases of forensic significance.
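The matching principle, that correlated loci across the two datasets let the true pairing outscore false ones, can be shown with a toy scorer. The genotype vectors and the simple agreement score below are invented; the published method models linkage disequilibrium between many SNPs and each STR statistically:

```python
# Two tiny datasets for the same two (hypothetical) individuals, with no
# shared markers: the STR-side vectors act as LD-correlated proxies for
# the SNP-side genotypes at linked loci.
snp_records = {"p1": [0, 2, 1, 0], "p2": [2, 0, 0, 1]}
str_records = {"a": [0, 2, 1, 0], "b": [2, 0, 0, 1]}

def best_match(snp_vec, candidates):
    """Pick the candidate record whose genotypes agree most with snp_vec."""
    def score(vec):
        return sum(1 for x, y in zip(snp_vec, vec) if x == y)
    return max(candidates, key=lambda name: score(candidates[name]))

matches = {p: best_match(v, str_records) for p, v in snp_records.items()}
```

With strongly correlated loci, even this crude agreement count recovers the correct pairing, which is exactly why retaining "only" 13 markers in a database does not by itself guarantee unlinkability.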
LabPatch, an acquisition and analysis program for patch-clamp electrophysiology.
Robinson, T; Thomsen, L; Huizinga, J D
2000-05-01
An acquisition and analysis program, "LabPatch," has been developed for use in patch-clamp research. LabPatch controls any patch-clamp amplifier, acquires and records data, runs voltage protocols, plots and analyzes data, and connects to spreadsheet and database programs. Controls within LabPatch are grouped by function on one screen, much like an oscilloscope front panel. The software is mouse driven, so that the user need only point and click. Finally, the ability to copy data to other programs running in Windows 95/98, and the ability to keep track of experiments using a database, make LabPatch extremely versatile. The system requirements include Windows 95/98, at least a 100-MHz processor and 16 MB RAM, a data acquisition card, digital-to-analog converter, and a patch-clamp amplifier. LabPatch is available free of charge at http://www.fhs.mcmaster.ca/huizinga/.
NASA Astrophysics Data System (ADS)
Michel, L.; Motch, C.; Pineau, F. X.
2009-05-01
As members of the Survey Science Consortium of the XMM-Newton mission, the Strasbourg Observatory is in charge of the real-time cross-correlation of X-ray data with archival catalogs. We are also committed to providing specific tools to handle these cross-correlations and propose identifications at other wavelengths. To this end, we developed a database generator (Saada) managing persistent links and supporting heterogeneous input datasets. This system makes it easy to build an archive containing numerous and complex links between individual items [1]. It also offers a powerful query engine able to select sources on the basis of the properties (existence, distance, colours) of the X-ray-archival associations. We present such a database in operation for the 2XMMi catalogue. This system is flexible enough to provide both a public data interface and a servicing interface that could be used in the framework of the Simbol-X ground segment.
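The core of any such cross-correlation is positional matching within a search radius. The brute-force sketch below pairs X-ray sources with archival counterparts by angular separation; the coordinates and radius are invented, and the real engine additionally weighs properties such as colours and association likelihoods.

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (haversine formula)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a)))

def cross_match(xray, archival, radius_deg):
    """For each X-ray source, list archival counterparts within radius."""
    links = {}
    for name, (ra, dec) in xray.items():
        links[name] = [n for n, (r, d) in archival.items()
                       if angular_sep(ra, dec, r, d) <= radius_deg]
    return links

xray = {"2XMM J0001": (10.000, -5.000)}          # illustrative positions
archival = {"cand A": (10.001, -5.001), "cand B": (11.0, -5.0)}
print(cross_match(xray, archival, 0.01))  # only cand A is within 36 arcsec
```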
Sperrin, Matthew; Rushton, Helen; Dixon, William G; Normand, Alexis; Villard, Joffrey; Chieh, Angela; Buchan, Iain
2016-01-21
Digital self-monitoring, particularly of weight, is increasingly prevalent. The associated data could be reused for clinical and research purposes. The aim was to compare participants who use connected smart scale technologies with the general population and explore how use of smart scale technology affects, or is affected by, weight change. This was a retrospective study comparing 2 databases: (1) the longitudinal height and weight measurement database of smart scale users and (2) the Health Survey for England, a cross-sectional survey of the general population in England. Baseline comparison was of body mass index (BMI) in the 2 databases via a regression model. For exploring engagement with the technology, two analyses were performed: (1) a regression model of BMI change predicted by measures of engagement and (2) a recurrent event survival analysis with instantaneous probability of a subsequent self-weighing predicted by previous BMI change. Among women, users of self-weighing technology had a mean BMI of 1.62 kg/m² (95% CI 1.03-2.22) lower than the general population (of the same age and height) (P<.001). Among men, users had a mean BMI of 1.26 kg/m² (95% CI 0.84-1.69) greater than the general population (of the same age and height) (P<.001). Reduction in BMI was independently associated with greater engagement with self-weighing. Self-weighing events were more likely when users had recently reduced their BMI. Users of self-weighing technology are a selected sample of the general population and this must be accounted for in studies that employ these data. Engagement with self-weighing is associated with recent weight change; more research is needed to understand the extent to which weight change encourages closer monitoring versus closer monitoring driving the weight change. The concept of isolated measures needs to give way to one of connected health metrics.
Speleogenesis, geometry, and topology of caves: A quantitative study of 3D karst conduits
NASA Astrophysics Data System (ADS)
Jouves, Johan; Viseur, Sophie; Arfib, Bruno; Baudement, Cécile; Camus, Hubert; Collon, Pauline; Guglielmi, Yves
2017-12-01
Karst systems are hierarchically spatially organized three-dimensional (3D) networks of conduits behaving as drains for groundwater flow. Recently, geostatistical approaches proposed to generate karst networks from data and parameters stemming from analogous observed karst features. Other studies have qualitatively highlighted relationships between speleogenetic processes and cave patterns. However, few studies have been performed to quantitatively define these relationships. This paper reports a quantitative study of cave geometries and topologies that takes the underlying speleogenetic processes into account. In order to study the spatial organization of caves, a 3D numerical database was built from 26 caves, corresponding to 621 km of cumulative cave passages representative of the variety of karst network patterns. The database includes 3D speleological surveys for which the speleogenetic context is known, allowing the polygenic karst networks to be divided into 48 monogenic cave samples and classified into four cave patterns: vadose branchwork (VB), water-table cave (WTC), looping cave (LC), and angular maze (AM). Eight morphometric cave descriptors were calculated: four geometrical parameters (width-height ratio, tortuosity, curvature, and vertical index) and four topological ones (degree of node connectivity, α and γ graph indices, and ramification index). The results were validated by statistical analyses (Kruskal-Wallis test and PCA). The VB patterns are clearly distinct from AM ones and from a third group including WTC and LC. A quantitative database of cave morphology characteristics is provided, depending on their speleogenetic processes. These characteristics can be used to constrain and/or validate 3D geostatistical simulations. This study shows how important it is to relate the geometry and connectivity of cave networks to recharge and flow processes.
Conversely, the approach developed here provides proxies to estimate the evolution of the vadose zone to epiphreatic and phreatic zones in limestones from the quantitative analysis of existing cave patterns.
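The topological descriptors can be computed directly from node and edge counts. A minimal sketch, using the planar-graph α and γ index formulas common in karst network analysis (the authors' exact definitions may differ, and the toy network below is invented):

```python
def graph_indices(nodes, edges):
    """Connectivity indices for a planar conduit network:
    mean node degree, alpha (cycle index), gamma (edge saturation)."""
    n, e = len(nodes), len(edges)
    mean_degree = 2 * e / n
    alpha = (e - n + 1) / (2 * n - 5)   # observed cycles / max possible
    gamma = e / (3 * (n - 2))           # observed edges / max possible
    return mean_degree, alpha, gamma

# A small branchwork-like network: 6 junctions, 5 passages, no loops.
nodes = list("ABCDEF")
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("D", "F")]
md, a, g = graph_indices(nodes, edges)
print(round(md, 2), round(a, 2), round(g, 2))  # 1.67 0.0 0.42
```

A branchwork (tree-like) sample yields α = 0, as here; maze patterns with many closed loops push α and γ upward.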
Ethnopedology in the Study of Toponyms Connected to the Indigenous Knowledge on Soil Resource
Capra, Gian Franco; Ganga, Antonio; Buondonno, Andrea; Grilli, Eleonora; Gaviano, Carla; Vacca, Sergio
2015-01-01
In taking an integrated ethnopedological approach, this study aims to investigate the meaning of the distribution of the toponyms used in traditional and recent cartography of Sardinia (southern Italy). It is particularly, but not only, focused on those related to soil resources. Sardinia is particularly interesting in this respect, as its unique history, geography, and linguistic position makes it one of the Italian and Mediterranean regions with the greatest number of toponyms. This research investigated the toponyms belonging to an important sub-region of Sardinia, called Ogliastra (central-eastern Sardinia). The research was conducted through the following integrated approach: i) toponymy research and collection from different sources; ii) database creation and translation of toponyms from the Sardinian language (SL); iii) categorization of toponyms; and iv) graphical, statistical, and cartographic data processing. Distribution and diversity of toponyms were assessed using the compiled database, coupled with a geographical information system (GIS). Of around 7700 toponyms collected, 79% had already been reported in SL, while just 21% were in Italian. Of the toponyms in SL, 77% are of known meaning, and 54% of these toponyms were characterized by a meaning directly and/or indirectly connected to specific environmental features. On the whole, morphology would appear to be the primary environmental factor able to explain the complex, articulated presence, distribution, and typology of the investigated toponyms. A least squares regression analysis of pedodiversity vs. topodiversity shows a very close relationship, with an impressively high coefficient of determination (R2 = 0.824). The principal factor analysis (PFA) shows that such a connection may be morphologically based, thereby confirming that pedodiversity and topodiversity are strongly linked to each other.
Overall, the research shows that an integrated ethnopedological approach, combining indigenous and scientific knowledge, may be of great value in mitigating the striking loss of indigenous knowledge. PMID:25789985
Integration of the EventIndex with other ATLAS systems
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Gallas, E. J.; Prokoshin, F.
2015-12-01
The ATLAS EventIndex System, developed for use in LHC Run 2, is designed to index every processed event in ATLAS, replacing the TAG System used in Run 1. Its storage infrastructure, based on the Hadoop open-source software framework, necessitates revamping how information in this system relates to other ATLAS systems. It will store more indexes, since the fundamental mechanisms for retrieving them will be better integrated into all stages of data processing, allowing more events from later stages of processing to be indexed than was possible with the previous system. Connections with other systems (conditions database, monitoring) are fundamentally critical to assessing dataset completeness, identifying data duplication, and checking data integrity, and also enhance access to information in the EventIndex through user and system interfaces. This paper gives an overview of the ATLAS systems involved and the relevant metadata, and describes the technologies we are deploying to complete these connections.
Cradle-to-Gate Impact Assessment of a High-Pressure Die-Casting Safety-Relevant Automotive Component
NASA Astrophysics Data System (ADS)
Cecchel, Silvia; Cornacchia, Giovanna; Panvini, Andrea
2016-09-01
The mass of automotive components has a direct influence on several aspects of vehicle performance, including both fuel consumption and tailpipe emissions, but the real environmental benefit has to be evaluated over the entire life of the product with a proper life cycle assessment. In this context, the present paper analyzes the environmental burden connected to the production of a safety-relevant aluminum high-pressure die-cast component for commercial vehicles (a suspension cross-beam), considering all the phases connected to its manufacture. The focus on aluminum high-pressure die casting reflects the current trend of the industry and its high energy consumption. This work presents a new method that analyzes every single step of the component's production in depth, through a wide database of primary data collected in collaboration with automotive supplier companies. This energy analysis shows significant environmental benefits of aluminum recycling.
Clustering of 770,000 genomes reveals post-colonial population structure of North America
NASA Astrophysics Data System (ADS)
Han, Eunjung; Carbonetto, Peter; Curtis, Ross E.; Wang, Yong; Granka, Julie M.; Byrnes, Jake; Noto, Keith; Kermany, Amir R.; Myres, Natalie M.; Barber, Mathew J.; Rand, Kristin A.; Song, Shiya; Roman, Theodore; Battat, Erin; Elyashiv, Eyal; Guturu, Harendra; Hong, Eurie L.; Chahine, Kenneth G.; Ball, Catherine A.
2017-02-01
Despite strides in characterizing human history from genetic polymorphism data, progress in identifying genetic signatures of recent demography has been limited. Here we identify very recent fine-scale population structure in North America from a network of over 500 million genetic (identity-by-descent, IBD) connections among 770,000 genotyped individuals of US origin. We detect densely connected clusters within the network and annotate these clusters using a database of over 20 million genealogical records. Recent population patterns captured by IBD clustering include immigrants such as Scandinavians and French Canadians; groups with continental admixture such as Puerto Ricans; settlers such as the Amish and Appalachians who experienced geographic or cultural isolation; and broad historical trends, including reduced north-south gene flow. Our results yield a detailed historical portrait of North America after European settlement and support substantial genetic heterogeneity in the United States beyond that uncovered by previous studies.
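Clustering a network of pairwise IBD links can be caricatured as grouping individuals who are reachable through shared connections. The sketch below uses simple connected components; the study's detection of *densely* connected clusters relies on more sophisticated community-detection methods on the weighted graph, and the pairs shown are invented.

```python
from collections import defaultdict, deque

def connected_clusters(pairs):
    """Group individuals into clusters via their IBD links
    (plain connected components, breadth-first search)."""
    adj = defaultdict(set)
    for a, b in pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            v = queue.popleft()
            if v in comp:
                continue
            comp.add(v)
            queue.extend(adj[v] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

ibd = [("p1", "p2"), ("p2", "p3"), ("p4", "p5")]  # toy IBD detections
print(sorted(len(c) for c in connected_clusters(ibd)))  # [2, 3]
```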
Development of a forestry government agency enterprise GIS system: a disconnected editing approach
NASA Astrophysics Data System (ADS)
Zhu, Jin; Barber, Brad L.
2008-10-01
The Texas Forest Service (TFS) has developed a geographic information system (GIS) for use by agency personnel in central Texas for managing oak wilt suppression and other landowner assistance programs. This Enterprise GIS system was designed to support multiple concurrent users accessing shared information resources. The disconnected editing approach was adopted in this system to avoid the overhead of maintaining an active connection between TFS central Texas field offices and headquarters, since most field offices operate with commercially provided Internet service. The GIS system entails maintaining a personal geodatabase on each local field office computer. Spatial data from the field is periodically uploaded into a central master geodatabase stored in a Microsoft SQL Server at the TFS headquarters in College Station through the ESRI Spatial Database Engine (SDE). This GIS allows users to work off-line when editing data and requires connecting to the central geodatabase only when needed.
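The disconnected-editing cycle can be sketched as a local edit log that is flushed to the central store only when a connection is available. The class and field names below are illustrative stand-ins, not the ESRI geodatabase API, and the merge policy (last writer wins) is an assumption.

```python
import time

class FieldGeodatabase:
    """Minimal sketch of disconnected editing: edits accumulate
    locally and are pushed to the central store on demand."""
    def __init__(self):
        self.pending = []  # (feature_id, attributes, edit_timestamp)

    def edit(self, feature_id, attrs):
        self.pending.append((feature_id, attrs, time.time()))

    def synchronize(self, central):
        """Push all pending edits to the central store, then clear."""
        for fid, attrs, ts in self.pending:
            central[fid] = attrs          # last-writer-wins merge
        count, self.pending = len(self.pending), []
        return count

central = {}                              # stands in for the master geodatabase
office = FieldGeodatabase()
office.edit("oakwilt-017", {"status": "treated"})
office.edit("oakwilt-018", {"status": "suspect"})
print(office.synchronize(central), len(central))  # 2 2
```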
Conflict in the Currents: The Cross-boundary Consequences of Larval Dispersal
NASA Astrophysics Data System (ADS)
Rising, J. A.; Ramesh, N.; Dookie, D.
2016-02-01
As commercial fish populations decline in many regions, the increasing demand for ocean resources can create conflicts along international boundaries. Because fish stock ranges do not respect political boundaries, neighboring countries can impact each other through the management of the stocks within their exclusive economic zones. By combining spawning and larvae information from the FishBase database with current velocities from ocean reanalyses using a particle tracking scheme, we construct a measure of the cross-boundary diffusion of fish larvae for 40 major exploited species. These flows represent important connections both for fish populations and for fisheries and the people who depend on them, but these connections rely on fisheries management in the 'source' countries. We then use socioeconomic data on the national importance of these fish to identify hotspots for potential conflict. Finally, we consider how ranges will shift under climate change, and the social impacts of these shifts.
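In its simplest form, a particle tracking scheme advances each larva through a current field with forward-Euler steps and checks which boundary it ends up crossing. The velocity field, time step, and boundary below are invented for illustration and do not reflect the reanalysis data used in the study.

```python
def track_larva(start, velocity, days, dt=1.0):
    """Forward-Euler advection of one larva through a (u, v)
    current field given as a function of position (deg/day)."""
    x, y = start
    t = 0.0
    while t < days:
        u, v = velocity(x, y)
        x, y = x + u * dt, y + v * dt
        t += dt
    return x, y

# Uniform eastward current of 0.1 degrees/day (illustrative).
current = lambda x, y: (0.1, 0.0)
x, y = track_larva((0.0, 0.0), current, days=30)

# After 30 days the larva has drifted about 3 degrees east,
# past a hypothetical EEZ boundary at 2 degrees longitude.
print(round(x, 1), x > 2.0)  # 3.0 True
```

Aggregating such trajectories over spawning sites and seasons yields the cross-boundary dispersal measure described above.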
PedAM: a database for Pediatric Disease Annotation and Medicine.
Jia, Jinmeng; An, Zhongxin; Ming, Yue; Guo, Yongli; Li, Wei; Li, Xin; Liang, Yunxiang; Guo, Dongming; Tai, Jun; Chen, Geng; Jin, Yaqiong; Liu, Zhimei; Ni, Xin; Shi, Tieliu
2018-01-04
There is a significant number of children around the world suffering from the consequences of misdiagnosis and ineffective treatment of various diseases. To facilitate precision medicine in pediatrics, a database named the Pediatric Disease Annotations & Medicines (PedAM) has been built to standardize and classify pediatric diseases. The PedAM integrates both biomedical resources and clinical data from Electronic Medical Records to support the development of computational tools that enable robust data analysis and integration. It also uses disease-manifestation (D-M) pairs integrated from existing biomedical ontologies as prior knowledge to automatically recognize text-mined, D-M-specific syntactic patterns from 774 514 full-text articles and 8 848 796 abstracts in MEDLINE. Additionally, disease connections based on phenotypes or genes can be visualized on the web pages of PedAM. Currently, the PedAM contains 8528 standardized pediatric disease terms (4542 unique disease concepts and 3986 synonyms), each with eight annotation fields, including definition, synonyms, gene, symptom, cross-reference (Xref), human phenotypes and their corresponding phenotypes in the mouse. The database PedAM is freely accessible at http://www.unimd.org/pedam/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Event Driven Messaging with Role-Based Subscriptions
NASA Technical Reports Server (NTRS)
Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Zendejas, Silvino; Sadaqathulla, Syed
2009-01-01
Event Driven Messaging with Role-Based Subscriptions (EDM-RBS) is a framework integrated into the Service Management Database (SMDB) to allow for role-based and subscription-based delivery of synchronous and asynchronous messages over JMS (Java Messaging Service), SMTP (Simple Mail Transfer Protocol), or SMS (Short Messaging Service). This allows for 24/7 operation with users in all parts of the world. The software classifies messages by triggering data type, application source, owner of data triggering event (mission), classification, sub-classification, and various other secondary classifying tags. Messages are routed to applications or users based on subscription rules using a combination of the above message attributes. This program provides a framework for identifying connected users and their applications for targeted delivery of messages over JMS to the client applications the user is logged into. EDM-RBS provides the ability to send notifications over e-mail or pager rather than having to rely on a live human to do it. It is implemented as an Oracle application that uses Oracle relational database management system intrinsic functions. It is configurable to use the Oracle AQ JMS API or an external JMS provider for messaging. It fully integrates into the event-logging framework of SMDB (Subnet Management Database).
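The routing step described above amounts to attribute filtering: a message is delivered to every subscriber whose rule matches all of its classifying tags. The sketch below uses attribute names taken from the abstract, but the rule format and subscriber names are assumptions for illustration.

```python
def route(message, subscriptions):
    """Deliver a message to every subscriber whose rule matches all
    of the message's attributes (simplified EDM-RBS-style filter)."""
    matched = []
    for user, rule in subscriptions.items():
        if all(message.get(k) == v for k, v in rule.items()):
            matched.append(user)
    return matched

msg = {"type": "alarm", "mission": "MRO", "classification": "ops"}
subs = {
    "ops_engineer": {"type": "alarm", "mission": "MRO"},
    "scientist":    {"mission": "Cassini"},
    "manager":      {"classification": "ops"},
}
print(route(msg, subs))  # ['ops_engineer', 'manager']
```

In the real system, the matched set would then be fanned out over the subscriber's chosen channel (JMS, SMTP, or SMS).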
NASA Astrophysics Data System (ADS)
Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro
2018-02-01
Management of water resources based on a Geographic Information System can provide substantial benefits for water availability planning. Monitoring the potential water level is needed in the development, agriculture, energy, and other sectors. In this research, a water resource information system is developed using a real-time Geographic Information System concept for web-based monitoring of the potential water level of an area, by applying a rule-based system method. The GIS consists of hardware, software, and a database. Following the web-based GIS architecture, this study uses a set of computers connected to a network, running on the Apache web server with the PHP programming language and a MySQL database. The Ultrasound Wireless Sensor System is used as the water level data input and also provides time and geographic location information. The GIS maps the five sensor locations. Readings are processed through a rule-based system to determine the potential water level of the area. Water level monitoring results can be displayed on thematic maps by overlaying more than one layer, in tables generated from the database, and in graphs based on the timing of events and the water level values.
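A rule-based determination of the potential water level might look like the following sketch, where each sensor reading is matched against ordered threshold rules. The thresholds and labels are illustrative assumptions, not values from the paper.

```python
def classify_level(level_cm, rules=None):
    """Return the first rule label whose threshold the reading meets;
    thresholds are checked from highest to lowest."""
    rules = rules or [(150, "danger"), (100, "alert"), (50, "watch")]
    for threshold, label in rules:
        if level_cm >= threshold:
            return label
    return "normal"

# Hypothetical readings from three of the ultrasonic sensors (cm).
readings = {"sensor-1": 42, "sensor-2": 120, "sensor-3": 160}
print({s: classify_level(v) for s, v in readings.items()})
# {'sensor-1': 'normal', 'sensor-2': 'alert', 'sensor-3': 'danger'}
```

In the described system, each classified reading would then be joined with its sensor's coordinates and rendered as a thematic map layer.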
Kerman Photovoltaic Power Plant R&D data collection computer system operations and maintenance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosen, P.B.
1994-06-01
The Supervisory Control and Data Acquisition (SCADA) system at the Kerman PV Plant monitors 52 analog, 44 status, 13 control, and 4 accumulator data points in real-time. A Remote Terminal Unit (RTU) polls 7 peripheral data acquisition units that are distributed throughout the plant once every second, and stores all analog, status, and accumulator points that have changed since the last scan. The R&D Computer, which is connected to the SCADA RTU via a RS-232 serial link, polls the RTU once every 5-7 seconds and records any values that have changed since the last scan. A SCADA software package called RealFlex runs on the R&D computer and stores all updated data values taken from the RTU, along with a time-stamp for each, in a historical real-time database. From this database, averages of all analog data points and snapshots of all status points are generated every 10 minutes and appended to a daily file. These files are downloaded via modem by PVUSA/Davis staff every day, and the data is placed into the PVUSA database.
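The 10-minute averaging step is, in essence, time-bucketed aggregation of time-stamped samples. A minimal sketch (sample values are invented; the real system also snapshots status points):

```python
from collections import defaultdict

def ten_minute_averages(samples):
    """Average (timestamp_seconds, value) analog samples into
    10-minute bins, keyed by the bin's start time."""
    bins = defaultdict(list)
    for ts, value in samples:
        bins[int(ts // 600)].append(value)   # 600 s = 10 min
    return {b * 600: sum(v) / len(v) for b, v in sorted(bins.items())}

# Two samples in the first 10-minute window, one in the second.
samples = [(10, 100.0), (300, 110.0), (610, 200.0)]
print(ten_minute_averages(samples))  # {0: 105.0, 600: 200.0}
```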
Updated regulation curation model at the Saccharomyces Genome Database
Engel, Stacia R; Skrzypek, Marek S; Hellerstedt, Sage T; Wong, Edith D; Nash, Robert S; Weng, Shuai; Binkley, Gail; Sheppard, Travis K; Karra, Kalpana; Cherry, J Michael
2018-01-01
Abstract The Saccharomyces Genome Database (SGD) provides comprehensive, integrated biological information for the budding yeast Saccharomyces cerevisiae, along with search and analysis tools to explore these data, enabling the discovery of functional relationships between sequence and gene products in fungi and higher organisms. We have recently expanded our data model for regulation curation to address regulation at the protein level in addition to transcription, and are presenting the expanded data on the ‘Regulation’ pages at SGD. These pages include a summary describing the context under which the regulator acts, manually curated and high-throughput annotations showing the regulatory relationships for that gene and a graphical visualization of its regulatory network and connected networks. For genes whose products regulate other genes or proteins, the Regulation page includes Gene Ontology enrichment analysis of the biological processes in which those targets participate. For DNA-binding transcription factors, we also provide other information relevant to their regulatory function, such as DNA binding site motifs and protein domains. As with other data types at SGD, all regulatory relationships and accompanying data are available through YeastMine, SGD’s data warehouse based on InterMine. Database URL: http://www.yeastgenome.org PMID:29688362
Designing a data portal for synthesis modeling
NASA Astrophysics Data System (ADS)
Holmes, M. A.
2006-12-01
Processing of field and model data in multi-disciplinary integrated science studies is a vital part of synthesis modeling. Collection and storage techniques for field data vary greatly between the participating scientific disciplines due to the nature of the data being collected, whether it be in situ, remotely sensed, or recorded by automated data logging equipment. Spreadsheets, personal databases, text files and binary files are used in the initial storage and processing of the raw data. In order to be useful to scientists, engineers and modelers the data need to be stored in a format that is easily identifiable, accessible and transparent to a variety of computing environments. The Model Operations and Synthesis (MOAS) database and associated web portal were created to provide such capabilities. The industry standard relational database is comprised of spatial and temporal data tables, shape files and supporting metadata accessible over the network, through a menu driven web-based portal or spatially accessible through ArcSDE connections from the user's local GIS desktop software. A separate server provides public access to spatial data and model output in the form of attributed shape files through an ArcIMS web-based graphical user interface.
Vector and Raster Data Storage Based on Morton Code
NASA Astrophysics Data System (ADS)
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is well developed nowadays, the integration of spatial data in vector and raster formats is still a tricky problem in geographic information system environments, and there is still no satisfactory way to solve it. This article proposes a method to interpret vector and raster data together. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database. We then used the ADO interface to connect the database to Visual C++ and converted the row and column numbers of the raster data and the X, Y coordinates of the vector data to Morton codes in the Visual C++ environment. This method stores vector and raster data in an Oracle database and uses the Morton code, instead of row/column numbers and X, Y coordinates, to mark the position information of vector and raster data. Using the Morton code to mark geographic information makes full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
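A Morton (Z-order) code interleaves the bits of the two coordinates into a single integer key, so that cells close in 2D space tend to get close keys, which is what allows row/column and X, Y positions to share one indexed column. A minimal sketch (the paper's implementation is in Visual C++, so Python is used here only for illustration):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of x (column) and y (row) into one
    Morton (Z-order) key: bit i of x -> position 2i, of y -> 2i+1."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def morton_decode(code, bits=16):
    """Recover (x, y) by de-interleaving the key's bits."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y

print(morton_encode(3, 5))   # 39  (binary 100111 from 011 and 101)
print(morton_decode(39))     # (3, 5)
```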
Dhanasekaran, A Ranjitha; Pearson, Jon L; Ganesan, Balasubramanian; Weimer, Bart C
2015-02-25
Mass spectrometric analysis of microbial metabolism provides a long list of possible compounds. Restricting the identification of the possible compounds to those produced by the specific organism would benefit the identification process. Currently, identification of mass spectrometry (MS) data is commonly done using empirically derived compound databases. Unfortunately, most databases contain relatively few compounds, leaving long lists of unidentified molecules. Incorporating genome-encoded metabolism enables identification of MS output that may not be included in databases. Using an organism's genome as a database restricts metabolite identification to only those compounds that the organism can produce. To address the challenge of metabolomic analysis from MS data, a web-based application to directly search genome-constructed metabolic databases was developed. The user query returns a genome-restricted list of possible compound identifications along with the putative metabolic pathways, based on the name, formula, SMILES structure, and the compound mass as defined by the user. Multiple queries can be done simultaneously by submitting a text file created by the user or obtained from the MS analysis software. The user can also provide parameters specific to the experiment's MS analysis conditions, such as mass deviation, adducts, and detection mode during the query, so as to provide additional levels of evidence for the tentative identification. The query results are provided as an HTML page and a downloadable text file of possible compounds that are restricted to a specific genome. Hyperlinks provided in the HTML file connect the user to the curated metabolic databases housed in ProCyc, a Pathway Tools platform, as well as the KEGG Pathway database for visualization and metabolic pathway analysis. Metabolome Searcher, a web-based tool, facilitates putative compound identification of MS output based on genome-restricted metabolic capability.
This enables researchers to rapidly extend the possible identifications of large data sets for metabolites that are not in compound databases. Putative compound names with their associated metabolic pathways from metabolomics data sets are returned to the user for additional biological interpretation and visualization. This novel approach enables compound identification by restricting the possible masses to those encoded in the genome.
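The mass-based lookup at the heart of such a search is a tolerance match of an observed m/z against a genome-restricted compound list, after correcting for the adduct. The sketch below assumes a protonated [M+H]+ adduct and a ppm tolerance; compound names and masses are illustrative, not entries from the actual databases.

```python
def match_mass(observed_mz, compounds, tol_ppm=10.0, adduct_mass=1.00728):
    """Return compounds whose monoisotopic mass matches the observed
    m/z within tol_ppm, assuming an [M+H]+ adduct (proton mass in Da)."""
    neutral = observed_mz - adduct_mass
    hits = []
    for name, mass in compounds.items():
        if abs(mass - neutral) / mass * 1e6 <= tol_ppm:
            hits.append(name)
    return hits

# Toy genome-restricted compound list (monoisotopic masses, Da).
compounds = {"lactate": 90.03169, "alanine": 89.04768, "pyruvate": 88.01604}
print(match_mass(91.03897, compounds))  # ['lactate']
```

A full implementation would iterate this over every adduct and detection mode supplied with the query.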
Graphic-based musculoskeletal model for biomechanical analyses and animation.
Chao, Edmund Y S
2003-04-01
The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.
NASA Astrophysics Data System (ADS)
Gao, Jian; Chu, Geng; He, Meng; Zhang, Shu; Xiao, RuiJuan; Li, Hong; Chen, LiQuan
2014-08-01
Inorganic solid electrolytes have distinct advantages in terms of safety and stability, and are promising substitutes for conventional organic liquid electrolytes. However, the low ionic conductivity of typical candidates is the key problem. As a connected diffusion path is a prerequisite for high performance, we screen for possible solid electrolytes in the 2004 International Centre for Diffraction Data (ICDD) database by calculating conduction pathways with the Bond Valence (BV) method. There are 109846 inorganic crystals in the 2004 ICDD database, and 5295 of them contain lithium. Excluding those with toxic, radioactive, rare, or variable-valence elements, 1380 materials are candidates for solid electrolytes. The validity of the BV method is confirmed by comparing the conduction pathways we calculated for existing solid electrolytes with those from experiments or first-principles calculations. The implications for doping and substitution, two important ways to improve conductivity, are also discussed. Among the candidates, Li2CO3 is selected for a detailed comparison, and its pathway agrees well with that obtained from density functional studies. To reveal the correlation between pathway connectivity and conductivity, α/γ-LiAlO2 and Li2CO3 are investigated by impedance spectroscopy as examples, and many experimental and theoretical studies are in progress to elucidate the relationship between property and structure. The BV method can screen one material within a few minutes, providing an efficient way to lock onto targets in abundant data and to investigate the structure-property relationship systematically.
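The BV screening described above rests on the bond-valence sum, V_i = Σ_j exp((R0 − R_ij)/b), whose closeness to the mobile ion's formal valence marks plausible sites and pathway regions. A minimal sketch of that sum, with an illustrative Li–O parameter (the paper's actual parameter set is not given here):

```python
import math

# b = 0.37 angstrom is the conventional universal constant; R0 values are
# tabulated per ion pair. The Li-O value below is illustrative only.
B = 0.37
R0 = {("Li", "O"): 1.466}

def bond_valence_sum(center, neighbors):
    """Sum exp((R0 - R)/b) over all neighbor distances R (in angstroms)."""
    return sum(math.exp((R0[(center, elem)] - r) / B) for elem, r in neighbors)

# A hypothetical Li site with four O neighbors at ~1.97 angstroms: the valence
# sum should land near the formal Li valence of +1.
site = [("O", 1.97), ("O", 1.97), ("O", 1.97), ("O", 1.97)]
v = bond_valence_sum("Li", site)
print(round(v, 2))
```

Evaluating such sums on a grid over the unit cell and thresholding near the formal valence traces the connected pathway regions the screening looks for.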
Cattle movement patterns in Australia: an analysis of the NLIS database 2008-2012.
Iglesias, R M; East, I J
2015-11-01
To identify patterns of cattle movement that could influence disease spread in the Australian cattle population. Records from the National Livestock Identification System database for the period January 2008 to December 2012 were accessed and analysed. Postcodes were used to allocate each individual property to one of 12 livestock production regions. National movement patterns and the characteristics of each livestock production region were quantified in terms of the number of consignments and animals moved, and seasonality of movements. The majority of cattle movements remained within a single livestock production region, while those that did not, usually remained within the same state or territory. Producers were the most common source of cattle, and abattoirs and other producers the most common destinations, with approximately 40% of animals moving via a saleyard. The northern regions generally moved larger consignments than the southern regions and were less connected to other regions. The eastern and south-eastern regions were very well connected by cattle movements. Seasonal patterns were seen for some regions, particularly the northern regions where weather patterns strongly influence the ability of producers to muster and transport stock. The movement patterns observed provide quantitative support for previous information based on surveys and expert opinion, and capture more of the variability in Australian cattle production. This information may assist with management of animal disease risks, in particular exotic diseases, and in planning surveillance programs. © 2015 Commonwealth of Australia. Australian Veterinary Journal © 2015 Australian Veterinary Association.
Social Network Analysis of Elders' Health Literacy and their Use of Online Health Information.
Jang, Haeran; An, Ji-Young
2014-07-01
Utilizing social network analysis, this study aimed to analyze the main keywords in the literature regarding the health literacy of and the use of online health information by aged persons over 65. Medical Subject Heading keywords were extracted from articles on the PubMed database of the National Library of Medicine. For health literacy, 110 articles out of 361 were initially extracted. Seventy-one keywords out of 1,021 were finally selected after removing repeated keywords and applying pruning. Regarding the use of online health information, 19 articles out of 26 were selected. One hundred forty-four keywords were initially extracted. After removing the repeated keywords, 74 keywords were finally selected. Health literacy was found to be strongly connected with 'Health knowledge, attitudes, practices' and 'Patient education as topic.' 'Computer literacy' had strong connections with 'Internet' and 'Attitude towards computers.' 'Computer literacy' was connected to 'Health literacy,' and was studied according to the parameters 'Attitude towards health' and 'Patient education as topic.' The use of online health information was strongly connected with 'Health knowledge, attitudes, practices,' 'Consumer health information,' 'Patient education as topic,' etc. In the network, 'Computer literacy' was connected with 'Health education,' 'Patient satisfaction,' 'Self-efficacy,' 'Attitude to computer,' etc. Research on older citizens' health literacy and their use of online health information was conducted together with study of computer literacy, patient education, attitude towards health, health education, patient satisfaction, etc. In particular, self-efficacy was noted as an important keyword. Further research should be conducted to identify the effective outcomes of self-efficacy in the area of interest.
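The keyword network described above is, at heart, a co-occurrence graph over per-article MeSH terms, with weighted degree identifying the central keywords. A minimal stdlib sketch with invented keyword lists (not the study's data):

```python
from collections import Counter
from itertools import combinations

# Illustrative MeSH-style keyword lists, one per article (invented examples).
articles = [
    ["Health literacy", "Patient education as topic", "Internet"],
    ["Health literacy", "Health knowledge, attitudes, practices", "Internet"],
    ["Computer literacy", "Internet", "Attitude to computers"],
    ["Computer literacy", "Health literacy", "Self-efficacy"],
]

# Each unordered keyword pair co-occurring within one article is an edge.
edges = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        edges[(a, b)] += 1

# Weighted degree: how strongly each keyword is connected in the network.
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w

print(degree.most_common(3))
```

On real data the edge weights would be thresholded ("pruning", as in the abstract) before computing network measures.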
Koch, Saskia B J; van Zuiden, Mirjam; Nawijn, Laura; Frijling, Jessie L; Veltman, Dick J; Olff, Miranda
2016-07-01
About 10% of trauma-exposed individuals develop PTSD. Although a growing number of studies have investigated resting-state abnormalities in PTSD, inconsistent results suggest a need for a meta-analysis and a systematic review. We conducted a systematic literature search in four online databases using keywords for PTSD, functional neuroimaging, and resting-state. In total, 23 studies matched our eligibility criteria. For the meta-analysis, we included 14 whole-brain resting-state studies, reporting data on 663 participants (298 PTSD patients and 365 controls). We used the activation likelihood estimation approach to identify concurrence of whole-brain hypo- and hyperactivations in PTSD patients during rest. Seed-based studies could not be included in the quantitative meta-analysis. Therefore, a separate qualitative systematic review was conducted on nine seed-based functional connectivity studies. The meta-analysis showed consistent hyperactivity in the ventral anterior cingulate cortex and the parahippocampus/amygdala, but hypoactivity in the (posterior) insula, cerebellar pyramis and middle frontal gyrus in PTSD patients, compared to healthy controls. Partly concordant with these findings, the systematic review on seed-based functional connectivity studies showed enhanced salience network (SN) connectivity, but decreased default mode network (DMN) connectivity in PTSD. Combined, these altered resting-state connectivity and activity patterns could represent neurobiological correlates of increased salience processing and hypervigilance (SN), at the cost of awareness of internal thoughts and autobiographical memory (DMN) in PTSD. However, several discrepancies between findings of the meta-analysis and systematic review were observed, stressing the need for future studies on resting-state abnormalities in PTSD patients. © 2016 Wiley Periodicals, Inc.
Categorizing Cortical Dysplasia Lesions for Surgical Outcome Using Network Functional Connectivity
NASA Astrophysics Data System (ADS)
Bdaiwi, Abdullah Sarray
Lesion-symptom mapping is a powerful and broadly applicable approach that is used for linking neurological symptoms to specific brain regions. Traditionally, it involves identifying overlap in lesion location across patients with similar symptoms. This approach has limitations when symptoms do not localize to a single region or when lesions do not tend to overlap. In this thesis, we show that we can expand the traditional approach of lesion mapping to incorporate network effects into symptom localization without the need for specialized neuroimaging of patients. Our approach involves assessing the functional connectivity of each lesion volume with the rest of the typical healthy brain using a database of healthy pediatric brain imaging data (C-MIND), available at CCHMC. Our study included 24 subjects who had cortical dysplasia lesions and underwent surgery for seizures that did not respond to drug therapy. We tested our approach using healthy brain imaging data across all ages (2-18 years old) and using age- and gender-specific groupings of data. The analysis sought categorization of lesion connectivity based on five subject characteristics: gender, cortical dysplasia pathology, epilepsy syndrome, scalp EEG pattern, and surgical outcome. Our primary analysis focused on surgical outcome, and revealed substantial connectivity differences between outcome groups. Lesions with stronger connectivity to default mode and attention/motor networks tended to result in poorer surgical outcomes. This result could be expanded with a larger set of data with the ultimate goal of allowing examination of lesions of cortical dysplasia patients and predicting their seizure outcomes.
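The core computation in this approach — correlating the mean signal of a lesion volume against every location in a normative resting-state dataset — can be sketched with synthetic data standing in for the C-MIND scans (array shapes and the lesion mask below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a normative resting-state dataset: 100 "voxels" x 120
# timepoints. In the thesis, healthy pediatric scans play this role.
data = rng.standard_normal((100, 120))
lesion_mask = np.zeros(100, dtype=bool)
lesion_mask[10:15] = True  # hypothetical lesion volume

# Seed timeseries: mean signal inside the lesion volume.
seed = data[lesion_mask].mean(axis=0)

# Pearson correlation of the seed with every voxel: the lesion's
# functional-connectivity map in the normative brain.
dz = data - data.mean(axis=1, keepdims=True)
sz = seed - seed.mean()
r = (dz @ sz) / (np.linalg.norm(dz, axis=1) * np.linalg.norm(sz))

# Sanity check: voxels inside the lesion correlate with their own mean signal.
print(r[lesion_mask].mean() > r[~lesion_mask].mean())
```

Averaging such maps within networks of interest (e.g. default mode, attention/motor) would yield the per-network connectivity strengths compared across outcome groups.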
Designing an algorithm to preserve privacy for medical record linkage with error-prone data.
Pal, Doyel; Chen, Tingting; Zhong, Sheng; Khethavath, Praveen
2014-01-20
Linking medical records across different medical service providers is important to the enhancement of health care quality and public health surveillance. In record linkage, protecting the patients' privacy is a primary requirement. In real-world health care databases, records may well contain errors due to various reasons such as typos. Linking error-prone data while preserving data privacy is very difficult. Existing privacy-preserving solutions for this problem are restricted to textual data. To enable different medical service providers to link their error-prone data in a private way, our aim was to provide a holistic solution by designing and developing a medical record linkage system for medical service providers. To initiate a record linkage, one provider selects one of its collaborators in the Connection Management Module, chooses some attributes of the database to be matched, and establishes the connection with the collaborator after the negotiation. In the Data Matching Module, for error-free data, our solution offered two different choices for cryptographic schemes. For error-prone numerical data, we proposed a newly designed privacy-preserving linking algorithm, named the Error-Tolerant Linking Algorithm, which allows error-prone data to be correctly matched if the distance between the two records is below a threshold. We designed and developed a comprehensive and user-friendly software system that provides privacy-preserving record linkage functions for medical service providers and meets the regulations of the Health Insurance Portability and Accountability Act. It does not require a third party, and it is secure in that neither entity can learn the records in the other's database. Moreover, our novel Error-Tolerant Linking Algorithm implemented in this software can work well with error-prone numerical data. We theoretically proved the correctness and security of our Error-Tolerant Linking Algorithm.
We have also fully implemented the software. The experimental results showed that it is reliable and efficient. The design of our software is open so that the existing textual matching methods can be easily integrated into the system. Designing algorithms to enable medical records linkage for error-prone numerical data and protect data privacy at the same time is difficult. Our proposed solution does not need a trusted third party and is secure in that in the linking process, neither entity can learn the records in the other's database.
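Stripped of its cryptographic layer (which is the paper's actual contribution), the matching rule the Error-Tolerant Linking Algorithm computes is a distance threshold over numeric attributes. An illustrative plaintext sketch, with invented records and threshold; the real system performs this comparison without either party revealing its records:

```python
import math

# Invented numeric records from two providers (e.g. age, weight, lab value).
db_a = {"p1": (34, 70.5, 5.2), "p2": (51, 82.0, 6.1)}
db_b = {"q1": (34, 70.9, 5.3), "q2": (29, 60.0, 4.8)}

THRESHOLD = 1.0  # records closer than this are declared the same patient

def distance(x, y):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Declare a match whenever the distance falls below the threshold, so
# typo-sized errors (p1 vs q1) still link correctly.
matches = [
    (ka, kb)
    for ka, ra in db_a.items()
    for kb, rb in db_b.items()
    if distance(ra, rb) < THRESHOLD
]
print(matches)
```

In practice the attributes would be normalized first, since an unscaled Euclidean distance lets large-magnitude fields dominate the threshold.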
Nikolić, Miloš; Papantonis, Argyris
2017-01-01
Abstract Genome-wide association studies (GWAS) have emerged as a powerful tool to uncover the genetic basis of human common diseases, which often show a complex, polygenic and multi-factorial aetiology. These studies have revealed that 70–90% of all single nucleotide polymorphisms (SNPs) associated with common complex diseases do not occur within genes (i.e. they are non-coding), making the discovery of disease-causative genetic variants and the elucidation of the underlying pathological mechanisms far from straightforward. Based on emerging evidence suggesting that disease-associated SNPs are frequently found within cell type-specific regulatory sequences, here we present GARLIC (GWAS-based Prediction Toolkit for Connecting Diseases and Cell Types), a user-friendly, multi-purpose software with an associated database and online viewer that, using global maps of cis-regulatory elements, can aetiologically connect human diseases with relevant cell types. Additionally, GARLIC can be used to retrieve potential disease-causative genetic variants overlapping regulatory sequences of interest. Overall, GARLIC can satisfy several important needs within the field of medical genetics, thus potentially assisting in the ultimate goal of uncovering the elusive and complex genetic basis of common human disorders. PMID:28007912
NASA Astrophysics Data System (ADS)
Seo, Yongwon; Hwang, Junsik; Choi, Hyun Il
2017-04-01
The concept of directly connected impervious area (DCIA), or effective impervious area (EIA), refers to the subset of impervious cover that is directly connected to a drainage system or a water body via continuous impervious surfaces. The concept of DCIA is important in that it is regarded as a better predictor of stream ecosystem health than total impervious area (TIA). DCIA is a key concept for a better assessment of green infrastructure introduced in urban catchments. Green infrastructure can help restore the water cycle: it improves water quality, manages stormwater, and provides a recreational environment, even at lower cost than conventional alternatives. In this study, we evaluated several methods to obtain the DCIA based on a GIS database and showed the importance of accurate measurement of DCIA in terms of the resulting hydrographs. We also evaluated several potential green infrastructure scenarios and showed how the spatial planning of green infrastructure affects the shape of hydrographs and the reduction of peak flows. These results imply that well-planned green infrastructure can be introduced to urban catchments for flood risk management, and that quantitative assessment of the spatial distribution of DCIA is crucial for sustainable development in urban environments.
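Measuring DCIA from a GIS layer reduces to a connectivity question: which impervious cells reach a drain through impervious surface alone? A toy sketch using breadth-first search on an invented grid (a real analysis would run on raster land-cover and drainage layers):

```python
from collections import deque

# Toy land-cover grid: 'D' = storm drain, 'I' = impervious, '.' = pervious.
# DCIA counts impervious cells reachable from a drain via impervious cells
# only; TIA counts all impervious cells. (Invented grid, not GIS data.)
grid = [
    "D I I . I",
    ". I . . I",
    ". . . I I",
]
cells = [row.split() for row in grid]
rows, cols = len(cells), len(cells[0])

# BFS outward from every drain across 4-connected impervious neighbors.
seen = set()
queue = deque((r, c) for r in range(rows) for c in range(cols)
              if cells[r][c] == "D")
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < rows and 0 <= nc < cols
                and (nr, nc) not in seen and cells[nr][nc] == "I"):
            seen.add((nr, nc))
            queue.append((nr, nc))

tia = sum(row.count("I") for row in cells)
print(len(seen), tia)  # DCIA cells vs. TIA cells
```

In this grid the isolated impervious patches on the right never reach the drain, so DCIA (3 cells) is well below TIA (7 cells), which is exactly why DCIA and TIA can tell different hydrologic stories.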
Costs and benefits of connecting community physicians to a hospital WAN.
Kouroubali, A.; Starren, J. B.; Clayton, P. D.
1998-01-01
The Washington Heights-Inwood Community Health Management Information System (WHICHIS) at the Columbia-Presbyterian Medical Center (CPMC) provides 15 community physician practices with seamless networking to the CPMC Wide-Area Network. The costs and benefits of the project were evaluated. Installation costs, including hardware, office management software, cabling, network routers, ISDN connection and personnel time, averaged $22,902 per office. Maintenance and support costs averaged $6,293 per office per year. These costs represent a "best-case" scenario after a several year learning curve. Participating physicians were interviewed to assess the impact of the project. Access to the CPMC Clinical Information System (CIS) was used by 87%. Other resource usage was: non-CPMC Web-based resources, 80%; computer billing, 73%; Medline and drug information databases, 67%; and, electronic mail, 60%. The most valued feature of the system was access to the CPMC CIS. The second most important was the automatic connection provided by routed ISDN. Frequency of access to the CIS averaged 6.67 days/month. Physicians reported that the system had significantly improved their practice of medicine. We are currently exploring less expensive options to provide this functionality. PMID:9929211
Gbadamosi, Semiu Olatunde; Eze, Chuka; Olawepo, John Olajide; Iwelunmor, Juliet; Sarpong, Daniel F; Ogidi, Amaka Grace; Patel, Dina; Oko, John Okpanachi; Onoka, Chima; Ezeanolue, Echezona Edozie
2018-01-15
Community-based strategies to test for HIV, hepatitis B virus (HBV), and sickle cell disease (SCD) have expanded opportunities to increase the proportion of pregnant women who are aware of their diagnosis. In order to use this information to implement evidence-based interventions, these results have to be available to skilled health providers at the point of delivery. Most electronic health platforms are dependent on the availability of reliable Internet connectivity and, thus, have limited use in many rural and resource-limited settings. Here we describe our work on the development and deployment of an integrated mHealth platform that is able to capture medical information, including test results, and encrypt it into a patient-held smartcard that can be read at the point of delivery without the need for an Internet connection. We engaged a team of implementation scientists, public health experts, and information technology specialists in a requirement-gathering process to inform the design of a prototype for a platform that uses smartcard technology, database deployment, and mobile phone app development. Key design decisions focused on usability, scalability, and security. We successfully designed an integrated mHealth platform and deployed it in 4 health facilities across Benue State, Nigeria. We developed the Vitira Health platform to store test results of HIV, HBV, and SCD in a database, and securely encrypt the results on a Quick Response code embedded on a smartcard. We used a mobile app to read the contents on the smartcard without the need for Internet connectivity. Our findings indicate that it is possible to develop a patient-held smartcard and an mHealth platform that contains vital health information that can be read at the point of delivery using a mobile phone-based app without an Internet connection. ClinicalTrials.gov NCT03027258; https://clinicaltrials.gov/ct2/show/NCT03027258 (Archived by WebCite at http://www.webcitation.org/6owR2D0kE). 
©Semiu Olatunde Gbadamosi, Chuka Eze, John Olajide Olawepo, Juliet Iwelunmor, Daniel F Sarpong, Amaka Grace Ogidi, Dina Patel, John Okpanachi Oko, Chima Onoka, Echezona Edozie Ezeanolue. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.01.2018.
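The offline-readable smartcard record described above could, in principle, be packed as a compact tamper-evident payload. The sketch below uses a stdlib HMAC purely for illustration; the Vitira platform's actual encryption scheme is not described here, and the key, field names, and payload format are all invented:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-key"  # illustrative shared key, not the platform's scheme

def pack(record: dict) -> str:
    """Serialize a test-result record, append an HMAC tag, and base64-encode
    it so it fits in a QR payload readable with no network connection."""
    body = json.dumps(record, separators=(",", ":"), sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.b64encode(body + tag).decode()

def unpack(payload: str) -> dict:
    """Verify the tag and recover the record; reject tampered payloads."""
    raw = base64.b64decode(payload)
    body, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("record has been tampered with")
    return json.loads(body)

card = pack({"hiv": "negative", "hbv": "positive", "scd": "AA"})
print(unpack(card)["hbv"])
```

An HMAC gives tamper evidence but not confidentiality; a deployment handling real results would encrypt the body as well, as the abstract indicates the platform does.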
NASA Astrophysics Data System (ADS)
Ribeiro, André Santos; Lacerda, Luís Miguel; Silva, Nuno André da; Ferreira, Hugo Alexandre
2015-06-01
The Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox is a fully automated all-in-one connectivity analysis toolbox that offers both pre-processing, connectivity, and graph theory analysis of multimodal images such as anatomical, diffusion, and functional MRI, and PET. In this work, the MIBCA functionalities were used to study Alzheimer's Disease (AD) in a multimodal MR/PET approach. Materials and Methods: Data from 12 healthy controls, and 36 patients with EMCI, LMCI and AD (12 patients for each group) were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), including T1-weighted (T1-w), Diffusion Tensor Imaging (DTI) data, and 18F-AV-45 (florbetapir) dynamic PET data from 40-60 min post injection (4x5 min). Both MR and PET data were automatically pre-processed for all subjects using MIBCA. T1-w data was parcellated into cortical and subcortical regions-of-interest (ROIs), and the corresponding thicknesses and volumes were calculated. DTI data was used to compute structural connectivity matrices based on fibers connecting pairs of ROIs. Lastly, dynamic PET images were summed, and the relative Standard Uptake Values calculated for each ROI. Results: An overall higher uptake of 18F-AV-45, consistent with an increased deposition of beta-amyloid, was observed for the AD group. Additionally, patients showed significant cortical atrophy (thickness and volume) especially in the entorhinal cortex and temporal areas, and a significant increase in Mean Diffusivity (MD) in the hippocampus, amygdala and temporal areas. Furthermore, patients showed a reduction of fiber connectivity with the progression of the disease, especially for intra-hemispherical connections. Conclusion: This work shows the potential of the MIBCA toolbox for the study of AD, as findings were shown to be in agreement with the literature. Here, only structural changes and beta-amyloid accumulation were considered. 
Yet, MIBCA is further able to process fMRI and different radiotracers, thus leading to integration of functional information, and supporting the research for new multimodal biomarkers for AD and other neurodegenerative diseases.
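The structural connectivity matrices MIBCA derives from tractography are, in essence, fiber counts between ROI pairs, on which graph measures are then computed. An illustrative sketch with an invented 4-ROI matrix (not ADNI data):

```python
import numpy as np

# Invented fiber-count matrix among 4 ROIs (symmetric, zero diagonal), the
# form a DTI tractography step produces for parcellated regions.
fibers = np.array([
    [0, 120, 30, 0],
    [120, 0, 45, 10],
    [30, 45, 0, 60],
    [0, 10, 60, 0],
])

# Two basic graph-theory measures per ROI:
strength = fibers.sum(axis=1)             # weighted degree (total fibers)
binary_degree = (fibers > 0).sum(axis=1)  # number of connected ROIs

print(strength.tolist(), binary_degree.tolist())
```

Comparing such per-ROI measures between patient groups is how a reduction of fiber connectivity with disease progression, as reported above, would be quantified.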
Tomer, Mark D; James, David E; Sandoval-Green, Claudette M J
2017-05-01
Conservation planning information is important for identifying options for watershed water quality improvement and can be developed for use at field, farm, and watershed scales. Translation across scales is a key issue impeding progress at watershed scales because watershed improvement goals must be connected with implementation of farm- and field-level conservation practices to demonstrate success. This is particularly true when examining alternatives for "trap and treat" practices implemented at agricultural-field edges to control (or influence) water flows through fields, landscapes, and riparian corridors within agricultural watersheds. We propose that database structures used in developing conservation planning information can achieve translation across conservation-planning scales, and we developed the Agricultural Conservation Planning Framework (ACPF) to enable practical planning applications. The ACPF comprises a planning concept, a database to facilitate field-level and watershed-scale analyses, and an ArcGIS toolbox with Python scripts to identify specific options for placement of conservation practices. This paper appends two prior publications and describes the structure of the ACPF database, which contains land use, crop history, and soils information and is available for download for 6091 HUC12 watersheds located across Iowa, Illinois, Minnesota, and parts of Kansas, Missouri, Nebraska, and Wisconsin and comprises information on 2.74 × 10 agricultural fields (available through /). Sample results examining land use trends across Iowa and Illinois are presented here to demonstrate potential uses of the database. While designed for use with the ACPF toolbox, users are welcome to use the ACPF watershed data in a variety of planning and modeling approaches. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Squires, R. Burke; Noronha, Jyothi; Hunt, Victoria; García‐Sastre, Adolfo; Macken, Catherine; Baumgarth, Nicole; Suarez, David; Pickett, Brett E.; Zhang, Yun; Larsen, Christopher N.; Ramsey, Alvin; Zhou, Liwei; Zaremba, Sam; Kumar, Sanjeev; Deitrich, Jon; Klem, Edward; Scheuermann, Richard H.
2012-01-01
Please cite this paper as: Squires et al. (2012) Influenza research database: an integrated bioinformatics resource for influenza research and surveillance. Influenza and Other Respiratory Viruses 6(6), 404–416. Background The recent emergence of the 2009 pandemic influenza A/H1N1 virus has highlighted the value of free and open access to influenza virus genome sequence data integrated with information about other important virus characteristics. Design The Influenza Research Database (IRD, http://www.fludb.org) is a free, open, publicly‐accessible resource funded by the U.S. National Institute of Allergy and Infectious Diseases through the Bioinformatics Resource Centers program. IRD provides a comprehensive, integrated database and analysis resource for influenza sequence, surveillance, and research data, including user‐friendly interfaces for data retrieval, visualization and comparative genomics analysis, together with personal log in‐protected ‘workbench’ spaces for saving data sets and analysis results. IRD integrates genomic, proteomic, immune epitope, and surveillance data from a variety of sources, including public databases, computational algorithms, external research groups, and the scientific literature. Results To demonstrate the utility of the data and analysis tools available in IRD, two scientific use cases are presented. A comparison of hemagglutinin sequence conservation and epitope coverage information revealed highly conserved protein regions that can be recognized by the human adaptive immune system as possible targets for inducing cross‐protective immunity. Phylogenetic and geospatial analysis of sequences from wild bird surveillance samples revealed a possible evolutionary connection between influenza virus from Delaware Bay shorebirds and Alberta ducks. 
Conclusions The IRD provides a wealth of integrated data and information about influenza virus to support research of the genetic determinants dictating virus pathogenicity, host range restriction and transmission, and to facilitate development of vaccines, diagnostics, and therapeutics. PMID:22260278
Information specialist for a coming age (7)
NASA Astrophysics Data System (ADS)
Kishimoto, Tamotsu
The present status and effective use of in-house data are described, taking the case of Kokuyo as an example. The company's Integrated Distribution Information System, which integrates information on production, sales, and distribution, and the databases loaded on it are introduced. The paper outlines "KOPS" and "KROS", external systems connected with the above system, and explains how Kokuyo makes use of the information obtained from them. Recently, Kokuyo has focused its efforts on selling goods directly to users, among its diversified distribution channels. The Customer Information System that supports these sales activities is also introduced.
A virtual reality browser for Space Station models
NASA Technical Reports Server (NTRS)
Goldsby, Michael; Pandya, Abhilash; Aldridge, Ann; Maida, James
1993-01-01
The Graphics Analysis Facility at NASA/JSC has created a visualization and learning tool by merging its database of detailed geometric models with a virtual reality system. The system allows an interactive walk-through of models of the Space Station and other structures, providing detailed realistic stereo images. The user can activate audio messages describing the function and connectivity of selected components within his field of view. This paper presents the issues and trade-offs involved in the implementation of the VR system and discusses its suitability for its intended purposes.
[Development of Hospital Equipment Maintenance Information System].
Zhou, Zhixin
2015-11-01
A hospital equipment maintenance information system plays an important role in improving medical treatment quality and efficiency. Based on a requirement analysis of hospital equipment maintenance, the system function diagram is drawn. From an analysis of the input and output data, tables, and reports connected with the equipment maintenance process, the relationships between entities and attributes are identified, an E-R diagram is drawn, and the relational database tables are established. The software is developed to meet the actual maintenance process requirements and to provide a friendly user interface and flexible operation. It can analyze failure causes through statistical analysis.
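The E-R design described above maps naturally onto a small relational schema. A minimal sqlite3 sketch, with invented table and column names (the paper's actual schema is not given):

```python
import sqlite3

# Equipment and its maintenance records, linked by a foreign key: the core
# of the E-R diagram described in the abstract.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE equipment (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department TEXT
);
CREATE TABLE maintenance (
    id INTEGER PRIMARY KEY,
    equipment_id INTEGER NOT NULL REFERENCES equipment(id),
    failure_cause TEXT,
    repaired_on TEXT
);
""")
con.execute("INSERT INTO equipment VALUES (1, 'Infusion pump', 'ICU')")
con.execute("INSERT INTO maintenance VALUES (1, 1, 'battery fault', '2015-06-01')")
con.execute("INSERT INTO maintenance VALUES (2, 1, 'battery fault', '2015-09-12')")

# The statistical failure-cause analysis the abstract mentions reduces to a
# GROUP BY over the maintenance log.
rows = con.execute(
    "SELECT failure_cause, COUNT(*) FROM maintenance GROUP BY failure_cause"
).fetchall()
print(rows)
```

Recurring causes surfacing in this query (here, two battery faults on one pump) are what would flag equipment for preventive attention.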
2008-03-01
tractable fungal model system, Cryptococcus neoformans, and identified two kelch repeat homologs that are involved in mating (Kem1 and Kem2). To...find kelch-repeat proteins involved in G protein signaling, Cryptococcus homologues of Gpb1/2, which interacts with and negatively regulates the G...protein alpha subunit, Gpa2, in S. cerevisiae, were searched by BLAST (tblastn) in Cryptococcus genome database of serotype A (Duke University Medical