NASA Technical Reports Server (NTRS)
Beeson, Harold D.; Davis, Dennis D.; Ross, William L., Sr.; Tapphorn, Ralph M.
2002-01-01
This document represents efforts accomplished at the NASA Johnson Space Center White Sands Test Facility (WSTF) in support of the Enhanced Technology for Composite Overwrapped Pressure Vessels (COPV) Program, a joint research and technology effort among the U.S. Air Force, NASA, and the Aerospace Corporation. WSTF performed testing for several facets of the program. Testing that contributed to the Task 3.0 COPV database extension objective included baseline structural strength, failure mode and safe-life, impact damage tolerance, sustained load/impact effect, and materials compatibility. WSTF was also responsible for establishing impact protection and control requirements under Task 8.0 of the program. This included developing a methodology for establishing an impact control plan. Seven test reports detail the work done at WSTF. As such, this document contributes to the database of information regarding COPV behavior that will ensure performance benefits and safety are maintained throughout vessel service life.
Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report
2007-02-05
- Created new SQL Server database for "PC Configuration" web application. Added roles for security, closed 4235, and posted application to production.
- Wrote...and ran SQL Server scripts to migrate production databases to new server.
- Created backup jobs for new SQL Server databases.
- Continued...second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about
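A minimal sketch of how the backup-job task above might be scripted, assuming Python with pyodbc and an installed ODBC driver; the server name, database name, and backup path are hypothetical:

```python
# Minimal sketch of a scripted SQL Server backup job, assuming pyodbc and
# an ODBC driver; server, database, and backup path are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=new-db-server;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP DATABASE cannot run inside a transaction
)
cursor = conn.cursor()
cursor.execute(
    "BACKUP DATABASE [PCConfiguration] "
    "TO DISK = N'D:\\Backups\\PCConfiguration.bak' "
    "WITH INIT, NAME = N'PCConfiguration full backup';"
)
# The driver streams backup progress as informational messages; consume
# result sets until the command finishes before closing the connection.
while cursor.nextset():
    pass
conn.close()
```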
32 CFR 1900.21 - Processing of requests for records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Information Act Amendments of 1996. (b) Database of “officially released information.” As an alternative to extensive tasking and as an accommodation to many requesters, the Agency maintains a database of “officially released information” which contains copies of documents released by this Agency. Searches of this database...
New tools and methods for direct programmatic access to the dbSNP relational database.
Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
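To illustrate the kind of task-related query such a local installation supports, here is a minimal sketch assuming Python with mysql-connector-python; the table and column names are hypothetical stand-ins, not the actual dbSNP schema:

```python
# Minimal sketch of a task-related query against a local MySQL mirror of
# dbSNP; the schema shown (snp_position: rs_id, chrom, pos) is hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="dbsnp", password="secret", database="dbsnp_human"
)
cursor = conn.cursor()
# Fetch all SNPs in a genomic window, a common task when annotating
# genome-wide association results with a biological reference.
cursor.execute(
    "SELECT rs_id, chrom, pos FROM snp_position "
    "WHERE chrom = %s AND pos BETWEEN %s AND %s",
    ("11", 5225000, 5230000),
)
for rs_id, chrom, pos in cursor:
    print(f"rs{rs_id}\tchr{chrom}:{pos}")
conn.close()
```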
Behavioral Measurement of Remembering Phenomenologies: So Simple a Child Can Do It
ERIC Educational Resources Information Center
Brainerd, C. J.; Holliday, R. E.; Reyna, V. F.
2004-01-01
Two remembering phenomenologies, vivid recollection and vague familiarity, have been extensively studied in adults using introspective self-report tasks, such as remember-know. Because such tasks are beyond the capabilities of young children, there is no database on how these phenomenologies first develop and what factors affect them. In…
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Bell, David G. (Inventor); Gurram, Mohana M. (Inventor); Gawdiak, Yuri O. (Inventor)
2009-01-01
A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as a monthly report, a task plan report, a budget report and a risk management report, are generated and made available for display or further analysis. An extensible database allows searching for information based upon context and upon content.
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Ozyurt, I. Burak; Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure included extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed Final Task Reports are available on each individual task.
Neuroimaging Data Sharing on the Neuroinformatics Database Platform
Book, Gregory A; Stevens, Michael; Assaf, Michal; Glahn, David; Pearlson, Godfrey D
2015-01-01
We describe the Neuroinformatics Database (NiDB), an open-source database platform for archiving, analysis, and sharing of neuroimaging data. Data from the multi-site projects Autism Brain Imaging Data Exchange (ABIDE), Bipolar-Schizophrenia Network on Intermediate Phenotypes parts one and two (B-SNIP1, B-SNIP2), and Monetary Incentive Delay task (MID) are available for download from the public instance of NiDB, with more projects sharing data as it becomes available. As demonstrated by making several large datasets available, NiDB is an extensible platform appropriately suited to archive and distribute shared neuroimaging data. PMID:25888923
Modeling Real-Time Applications with Reusable Design Patterns
NASA Astrophysics Data System (ADS)
Rekhis, Saoussen; Bouassida, Nadia; Bouaziz, Rafik
Real-Time (RT) applications, which manipulate important volumes of data, need to be managed with RT databases that deal with time-constrained data and time-constrained transactions. In spite of their numerous advantages, RT database development remains a complex task, since developers must study many design issues related to the RT domain. In this paper, we tackle this problem by proposing RT design patterns that allow the modeling of structural and behavioral aspects of RT databases. We show how RT design patterns can provide design assistance through architecture reuse for recurring design problems. In addition, we present a UML profile that represents the patterns and further facilitates their reuse. This profile proposes, on the one hand, UML extensions for modeling the variability of patterns in the RT context and, on the other hand, extensions inspired by the MARTE (Modeling and Analysis of Real-Time Embedded systems) profile.
Schaefer, Sabine; Krampe, Ralf Th; Lindenberger, Ulman; Baltes, Paul B
2008-05-01
Task prioritization can lead to trade-off patterns in dual-task situations. The authors compared dual-task performances in 9- and 11-year-old children and young adults performing a cognitive task and a motor task concurrently. The motor task required balancing on an ankle-disc board. Two cognitive tasks measured working memory and episodic memory at difficulty levels individually adjusted during the course of extensive training. Adults showed performance decrements in both task domains under dual-task conditions. In contrast, children showed decrements only in the cognitive tasks but actually swayed less under dual-task than under single-task conditions and continued to reduce their body sway even when instructed to focus on the cognitive task. The authors argue that children perform closer to their stability boundaries in the balance task and therefore prioritize protection of their balance under dual-task conditions.
PathCase-SB architecture and database design
2011-01-01
Background Integration of metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and more efficient systems biology research on understanding the regulation of metabolic networks. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.
Zappia, Luke; Phipson, Belinda; Oshlack, Alicia
2018-06-25
As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
Bréant, C; Borst, F; Campi, D; Griesser, V; Momjian, S
1999-01-01
The use of a controlled vocabulary set in a hospital-wide clinical information system is of crucial importance for many departmental database systems to communicate and exchange information. In the absence of an internationally recognized clinical controlled vocabulary set, a new extension of the International statistical Classification of Diseases (ICD) is proposed. It expands the scope of the standard ICD beyond diagnosis and procedures to clinical terminology. In addition, the common Clinical Findings Dictionary (CFD) further records the definition of clinical entities. The construction of the vocabulary set and the CFD is incremental and manual. Tools have been implemented to facilitate the tasks of defining/maintaining/publishing dictionary versions. The design of database applications in the integrated clinical information system is driven by the CFD which is part of the Medical Questionnaire Designer tool. Several integrated clinical database applications in the field of diabetes and neuro-surgery have been developed at the HUG.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson Khosah
2007-07-31
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.
A contingency model of conflict and team effectiveness.
Shaw, Jason D; Zhu, Jing; Duffy, Michelle K; Scott, Kristin L; Shih, Hsi-An; Susanto, Ely
2011-03-01
The authors develop and test theoretical extensions of the relationships of task conflict, relationship conflict, and 2 dimensions of team effectiveness (performance and team-member satisfaction) among 2 samples of work teams in Taiwan and Indonesia. Findings show that relationship conflict moderates the task conflict-team performance relationship. Specifically, the relationship is curvilinear in the shape of an inverted U when relationship conflict is low, but the relationship is linear and negative when relationship conflict is high. The results for team-member satisfaction are more equivocal, but the findings provide some evidence that relationship conflict exacerbates the negative relationship between task conflict and team-member satisfaction.
Ambiguity and variability of database and software names in bioinformatics.
Duck, Geraint; Kovacevic, Aleksandar; Robertson, David L; Stevens, Robert; Nenadic, Goran
2015-01-01
There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.
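For context, the dictionary look-up baseline can be sketched in a few lines of Python; the dictionary below is illustrative, and the example shows why spelling variants and newly introduced resources depress recall:

```python
# Minimal sketch of a dictionary look-up baseline for database/software
# mentions; the dictionary entries are illustrative.
import re

DICTIONARY = {"BLAST", "UniProt", "dbSNP", "Cytoscape", "GenBank"}

def find_mentions(text: str) -> list[tuple[int, int, str]]:
    """Return (start, end, name) for exact, case-sensitive dictionary hits."""
    hits = []
    for name in DICTIONARY:
        for m in re.finditer(r"\b" + re.escape(name) + r"\b", text):
            hits.append((m.start(), m.end(), name))
    return sorted(hits)

# Variant spellings ("Blast") and newly introduced tools ("Blast2GO") are
# missed entirely, which is one reason the reported baseline F-score is low.
print(find_mentions("We ran BLAST against GenBank, then Blast2GO."))
```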
Brian J. Clough; Matthew B. Russell; Grant M. Domke; Christopher W. Woodall; Philip J. Radtke
2016-01-01
Estimation of live tree biomass is an important task for both forest carbon accounting and studies of nutrient dynamics in forest ecosystems. In this study, we took advantage of an extensive felled-tree database (with 2885 foliage biomass observations) to compare different models and grouping schemes based on phylogenetic and geographic variation for predicting foliage...
AphasiaBank: a resource for clinicians.
Forbes, Margaret M; Fromm, Davida; Macwhinney, Brian
2012-08-01
AphasiaBank is a shared, multimedia database containing videos and transcriptions of ~180 aphasic individuals and 140 nonaphasic controls performing a uniform set of discourse tasks. The language in the videos is transcribed in Codes for the Human Analysis of Transcripts (CHAT) format and coded for analysis with Computerized Language ANalysis (CLAN) programs, which can perform a wide variety of language analyses. The database and the CLAN programs are freely available to aphasia researchers and clinicians for educational, clinical, and scholarly uses. This article describes the database, suggests some ways in which clinicians and clinician researchers might find these materials useful, and introduces a new language analysis program, EVAL, designed to streamline the transcription and coding processes, while still producing an extensive and useful language profile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Frank T. Alex
2007-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
2006-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-second month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its eleventh month of Phase 1 development activities.
High Performance Databases For Scientific Applications
NASA Technical Reports Server (NTRS)
French, James C.; Grimshaw, Andrew S.
1997-01-01
The goal of this task is to develop an Extensible File System (ELFS). ELFS addresses the following problems: (1) providing high-bandwidth performance architectures; (2) reducing the cognitive burden faced by applications programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.
Stahl, Olivier; Duvergey, Hugo; Guille, Arnaud; Blondin, Fanny; Vecchio, Alexandre Del; Finetti, Pascal; Granjeaud, Samuel; Vigy, Oana; Bidaut, Ghislain
2013-06-06
With the advance of post-genomic technologies, the need for tools to manage large scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. We developed Djeen (Database for Joomla!'s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing of heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material.
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy-to-use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Radio Frequency Identification for Space Habitat Inventory and Stowage Allocation Management
NASA Technical Reports Server (NTRS)
Wagner, Carole Y.
2015-01-01
To date, the most extensive space-based inventory management operation has been the International Space Station (ISS). Approximately 20,000 items are tracked with the Inventory Management System (IMS) software application that requires both flight and ground crews to update the database daily. This audit process is manually intensive and laborious, requiring the crew to open cargo transfer bags (CTBs), then Ziplock bags therein, to retrieve individual items. This inventory process contributes greatly to the time allocated for general crew tasks.
Accelerating Pathology Image Data Cross-Comparison on CPU-GPU Hybrid Systems
Wang, Kaibo; Huai, Yin; Lee, Rubao; Wang, Fusheng; Zhang, Xiaodong; Saltz, Joel H.
2012-01-01
As an important application of spatial databases in pathology imaging analysis, cross-comparing the spatial boundaries of a huge amount of segmented micro-anatomic objects demands extremely data- and compute-intensive operations, requiring high throughput at an affordable cost. However, the performance of spatial database systems has not been satisfactory since their implementations of spatial operations cannot fully utilize the power of modern parallel hardware. In this paper, we provide a customized software solution that exploits GPUs and multi-core CPUs to accelerate spatial cross-comparison in a cost-effective way. Our solution consists of an efficient GPU algorithm and a pipelined system framework with task migration support. Extensive experiments with real-world data sets demonstrate the effectiveness of our solution, which improves the performance of spatial cross-comparison by over 18 times compared with a parallelized spatial database approach. PMID:23355955
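The core operation being accelerated is boundary cross-comparison; below is a minimal CPU-side sketch of one such measure (Jaccard similarity of two segmented boundaries) assuming shapely, with the caveat that the paper's system replaces this inner loop with a custom GPU algorithm and a pipelined task framework:

```python
# CPU-side reference for spatial cross-comparison: Jaccard similarity of
# two segmented object boundaries, assuming shapely. The paper's system
# replaces this inner loop with a custom GPU algorithm.
from shapely.geometry import Polygon

def jaccard(a: Polygon, b: Polygon) -> float:
    """Area of overlap divided by area of union of two boundaries."""
    union = a.union(b).area
    return a.intersection(b).area / union if union > 0 else 0.0

# Two overlapping rectangles as stand-ins for segmented nuclei.
nucleus_a = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
nucleus_b = Polygon([(2, 1), (6, 1), (6, 4), (2, 4)])
print(f"Jaccard = {jaccard(nucleus_a, nucleus_b):.3f}")  # 0.200
```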
Exploring molecular networks using MONET ontology.
Silva, João Paulo Müller da; Lemke, Ney; Mombach, José Carlos; Souza, José Guilherme Camargo de; Sinigaglia, Marialva; Vieira, Renata
2006-03-31
The description of the complex molecular network responsible for cell behavior requires new tools to integrate large quantities of experimental data in the design of biological information systems. These tools could be used in the characterization of these networks and in the formulation of relevant biological hypotheses. The building of an ontology is a crucial step because it integrates in a coherent framework the concepts necessary to accomplish such a task. We present MONET (molecular network), an extensible ontology and an architecture designed to facilitate the integration of data originating from different public databases in a single, well-documented relational database that is compatible with the MONET formal definition. We also present an example of an application that can easily be implemented using these tools.
Tools don't-and won't-make the man: A cognitive look at the future.
Osiurak, François; Navarro, Jordan; Reynaud, Emanuelle; Thomas, Gauthier
2018-05-01
The question of whether tools erase cognitive and physical interindividual differences has been surprisingly overlooked in the literature. Yet if technology is profusely available in a near or far future, will we be equal in our capacity to use it? We sought to address this unexplored, fundamental issue, asking 200 participants to perform 3 physical (e.g., fine manipulation) and 3 cognitive tasks (e.g., calculation) in both non-tool-use and tool-use conditions. Here we show that tools do not erase but rather extend our intrinsic physical and cognitive skills. Moreover, this phenomenon of extension is task specific because we found no evidence for superusers, benefitting from the use of a tool irrespective of the task concerned. These results challenge the possibility that technical solutions could always be found to make people equal. Rather, technical innovation might be systematically limited by the user's initial degree of knowledge or skills for a given task.
Enhanced project management tool
NASA Technical Reports Server (NTRS)
Hsu, Chen-Jung (Inventor); Patel, Hemil N. (Inventor); Maluf, David A. (Inventor); Moh Hashim, Jairon C. (Inventor); Tran, Khai Peter B. (Inventor)
2012-01-01
A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as one or more of a monthly report, a task plan report, a schedule report, a budget report and a risk management report, are generated and made available for display or further analysis or collection into a customized report template. An extensible database allows searching for information based upon context and upon content. Seven different types of project risks are addressed, including non-availability of required skill mix of workers. The system can be configured to exchange data and results with corresponding portions of similar project analyses, and to provide user-specific access to specified information.
Biological data integration: wrapping data and tools.
Lacroix, Zoé
2002-06-01
Nowadays scientific data are inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to access seamlessly data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform Object Protocol Model (OPM) interfaces.
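A minimal sketch of the two wrapper tasks described above, assuming a hypothetical tab-separated flat-file source; the column names are illustrative, and the output is built with the standard library XML module:

```python
# Minimal sketch of a wrapper's two tasks: retrieve records from a source
# (here a hypothetical tab-separated flat file) and rebuild them in the
# expected virtual structure as XML. Column names are illustrative.
import csv
import xml.etree.ElementTree as ET

def wrap_flat_file(path: str, gene_filter: str) -> ET.Element:
    root = ET.Element("records")
    with open(path, newline="") as f:
        # Task 1: retrieval, mapping the source's limited capability
        # (a full scan plus a filter) onto the queried attribute.
        for row in csv.DictReader(f, delimiter="\t"):
            if row["gene"] != gene_filter:
                continue
            # Task 2: build the expected output structure.
            rec = ET.SubElement(root, "record", id=row["id"])
            ET.SubElement(rec, "gene").text = row["gene"]
            ET.SubElement(rec, "organism").text = row["organism"]
    return root

records = wrap_flat_file("sequences.tsv", gene_filter="TP53")
print(ET.tostring(records, encoding="unicode"))
```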
ARCTOS: a relational database relating specimens, specimen-based science, and archival documentation
Jarrell, Gordon H.; Ramotnik, Cindy A.; McDonald, D.L.
2010-01-01
Data are preserved when they are perpetually discoverable, but even in the Information Age, discovery of legacy data appropriate to particular investigations is uncertain. Secure Internet storage is necessary but insufficient. Data can be discovered only when they are adequately described, and visibility increases markedly if the data are related to other data that are receiving usage. Such relationships can be built (1) within the framework of a relational database, or (2) among separate resources, within the framework of the Internet. Evolving primarily around biological collections, Arctos is a database that does both of these tasks. It includes data structures for a diversity of specimen attributes, essentially all collection-management tasks, plus literature citations, project descriptions, etc. As a centralized collaboration of several university museums, Arctos is an ideal environment for capitalizing on the many relationships that often exist between items in separate collections. Arctos is related to NIH's DNA-sequence repository (GenBank) with record-to-record reciprocal linkages, and it serves data to several discipline-specific web portals, including the Global Biodiversity Information Facility (GBIF). The University of Alaska Museum's paleontological collection is Arctos's recent extension beyond the constraints of neontology. With about 1.3 million cataloged items, additional collections are being added each year.
Neural architecture underlying classification of face perception paradigms.
Laird, Angela R; Riedel, Michael C; Sutherland, Matthew T; Eickhoff, Simon B; Ray, Kimberly L; Uecker, Angela M; Fox, P Mickle; Turner, Jessica A; Fox, Peter T
2015-10-01
We present a novel strategy for deriving a classification system of functional neuroimaging paradigms that relies on hierarchical clustering of experiments archived in the BrainMap database. The goal of our proof-of-concept application was to examine the underlying neural architecture of the face perception literature from a meta-analytic perspective, as these studies include a wide range of tasks. Task-based results exhibiting similar activation patterns were grouped as similar, while tasks activating different brain networks were classified as functionally distinct. We identified four sub-classes of face tasks: (1) Visuospatial Attention and Visuomotor Coordination to Faces, (2) Perception and Recognition of Faces, (3) Social Processing and Episodic Recall of Faces, and (4) Face Naming and Lexical Retrieval. Interpretation of these sub-classes supports an extension of a well-known model of face perception to include a core system for visual analysis and extended systems for personal information, emotion, and salience processing. Overall, these results demonstrate that a large-scale data mining approach can inform the evolution of theoretical cognitive models by probing the range of behavioral manipulations across experimental tasks.
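The clustering strategy can be sketched as follows, assuming a binary experiment-by-region activation matrix; the data here are random stand-ins, not BrainMap records:

```python
# Minimal sketch of hierarchical clustering of experiments by activation
# pattern; random 0/1 data stand in for BrainMap's modeled activations.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
activation = rng.integers(0, 2, size=(60, 200))  # 60 experiments x 200 regions

# Experiments with similar activation patterns get small distances.
distances = pdist(activation, metric="jaccard")
tree = linkage(distances, method="average")

# Cut the dendrogram into four groups, mirroring the four face-task
# sub-classes identified in the paper.
labels = fcluster(tree, t=4, criterion="maxclust")
print(np.bincount(labels)[1:])  # experiments per sub-class
```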
Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.
Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin
2017-08-10
Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them do not explicitly consider attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of the public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to the state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
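The shared-plus-specific structure of DMTL can be sketched as follows, assuming PyTorch; the layer sizes and attribute grouping are illustrative and do not reproduce the paper's architecture:

```python
# Minimal sketch of shared feature learning plus category-specific heads
# for heterogeneous attributes; sizes and grouping are illustrative.
import torch
import torch.nn as nn

class DMTLSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature learning for all attributes.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Category-specific heads: ordinal vs. nominal attributes.
        self.age_head = nn.Linear(32, 1)     # ordinal -> regression
        self.gender_head = nn.Linear(32, 2)  # nominal -> classification
        self.race_head = nn.Linear(32, 3)    # nominal -> classification

    def forward(self, x):
        z = self.shared(x)
        return self.age_head(z), self.gender_head(z), self.race_head(z)

age, gender, race = DMTLSketch()(torch.randn(4, 3, 64, 64))
print(age.shape, gender.shape, race.shape)
```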
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis
Rampp, Markus; Soddemann, Thomas; Lederer, Hermann
2006-01-01
We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society and is available via the MIGenAS web portal (click ‘Start Toolkit’). PMID:16844980
Semantic memory: a feature-based analysis and new norms for Italian.
Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola
2013-06-01
Semantic norms for properties produced by native speakers are valuable tools for researchers interested in the structure of semantic memory and in category-specific semantic deficits in individuals following brain damage. The aims of this study were threefold. First, we sought to extend existing semantic norms by adopting an empirical approach to category (Exp. 1) and concept (Exp. 2) selection, in order to obtain a more representative set of semantic memory features. Second, we extensively outlined a new set of semantic production norms collected from Italian native speakers for 120 artifactual and natural basic-level concepts, using numerous measures and statistics following a feature-listing task (Exp. 3b). Finally, we aimed to create a new publicly accessible database, since only a few existing databases are publicly available online.
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masashi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
The Fabric for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Michael
2014-01-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy-to-use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task-space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide task-space database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser radar based range imaging. Through the fusion of task-space database information and image sensor data, a verifiable task-space model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems for handwritten and printed text. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, that many sub-words have one or more associated diacritics not connected to the sub-word's body, and that there can be multiple instances of overlapping sub-words. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlap between words and sub-words. We also investigate two approaches for pre-processing tasks that estimate sub-word baselines and determine parameters for appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density part. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that can benefit from analyses developed for printed text.
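As a rough illustration of the baseline-estimation step described above, the following Python sketch fits a least-squares line through the ink pixels of a binarized sub-word image. The array convention and function names are assumptions for illustration, not the authors' code, and the one-shot angle stands in for the paper's incremental rotation procedure.

```python
import numpy as np

def estimate_baseline(subword):
    """Fit y = m*x + b through the ink pixels (nonzero entries) of a
    binarized sub-word image via least-squares linear regression."""
    ys, xs = np.nonzero(subword)        # row/column coordinates of ink pixels
    m, b = np.polyfit(xs, ys, deg=1)    # slope and intercept, highest degree first
    return m, b

def realignment_angle(subword):
    """Rotation angle (degrees) that would make the fitted baseline horizontal."""
    m, _ = estimate_baseline(subword)
    return float(np.degrees(np.arctan(m)))

# Synthetic slanted stroke of ink to exercise the functions.
img = np.zeros((20, 60), dtype=np.uint8)
cols = np.arange(60)
img[(0.1 * cols + 5).astype(int), cols] = 1
print(round(realignment_angle(img), 1))   # ~5.7 degrees of skew
```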
Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E
2015-01-01
Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km²). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.
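The module linkage described above amounts to joining site-level water-quality records to geospatial predictors on a common lake identifier. The pandas sketch below illustrates the idea; the column names are hypothetical and do not reflect LAGOS's actual schema.

```python
import pandas as pd

# Hypothetical extracts of the two modules, keyed by a common lake id.
limno = pd.DataFrame({"lagos_lake_id": [1, 2], "tp_ugL": [30.5, 12.1]})
geo = pd.DataFrame({"lagos_lake_id": [1, 2, 3],
                    "lake_area_ha": [5.2, 410.0, 88.3],
                    "pct_agriculture": [62.0, 8.5, 23.0]})

# Left join keeps every water-quality record and attaches its predictors.
merged = limno.merge(geo, on="lagos_lake_id", how="left")
print(merged)
```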
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... Extension of Approval; Comment Request--Publicly Available Consumer Product Safety Information Database... Publicly Available Consumer Product Safety Information Database. The Commission will consider all comments... intention to seek extension of approval of a collection of information for a database on the safety of...
Columba: an integrated database of proteins, structures, and annotations.
Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf
2005-03-31
Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data-source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. The results of queries are provided in both machine-readable extensible markup language (XML) and human-readable format. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows users to combine queries on a number of structure-related databases not covered by other projects at present. Thus, information on both many and few protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.
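The kind of multi-source selection described above can be pictured as a single join across the integrated annotation tables. The sketch below runs such a query through Python's sqlite3 against a tiny in-memory schema; the table and column names are hypothetical and do not reflect COLUMBA's actual layout.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pdb_entry     (pdb_id TEXT, resolution_angstrom REAL);
CREATE TABLE cath_domain   (pdb_id TEXT, architecture TEXT);
CREATE TABLE go_annotation (pdb_id TEXT, molecular_function TEXT);
INSERT INTO pdb_entry     VALUES ('1ABC', 1.8), ('2XYZ', 2.6);
INSERT INTO cath_domain   VALUES ('1ABC', 'Beta Barrel'), ('2XYZ', 'Beta Barrel');
INSERT INTO go_annotation VALUES ('1ABC', 'transporter activity'),
                                 ('2XYZ', 'transporter activity');
""")

# A COLUMBA-style selection: CATH architecture + GO molecular function
# + crystallographic resolution threshold, joined on the PDB identifier.
query = """
SELECT p.pdb_id FROM pdb_entry p
JOIN cath_domain   c ON c.pdb_id = p.pdb_id
JOIN go_annotation g ON g.pdb_id = p.pdb_id
WHERE c.architecture = 'Beta Barrel'
  AND g.molecular_function = 'transporter activity'
  AND p.resolution_angstrom < 2.0;
"""
print([row[0] for row in conn.execute(query)])   # -> ['1ABC']
```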
HLLV avionics requirements study and electronic filing system database development
NASA Technical Reports Server (NTRS)
1994-01-01
This final report provides a summary of achievements and activities performed under Contract NAS8-39215. The contract's objective was to explore a new way of delivering, storing, accessing, and archiving study products and information and to define top level system requirements for Heavy Lift Launch Vehicle (HLLV) avionics that incorporate Vehicle Health Management (VHM). This report includes technical objectives, methods, assumptions, recommendations, sample data, and issues as specified by DPD No. 772, DR-3. The report is organized into two major subsections, one specific to each of the two tasks defined in the Statement of Work: the Index Database Task and the HLLV Avionics Requirements Task. The Index Database Task resulted in the selection and modification of a commercial database software tool to contain the data developed during the HLLV Avionics Requirements Task. All summary information is addressed within each task's section.
He, Ying; Yang, Lei; Zhou, Jing; Yao, Liqing; Pang, Marco Yiu Chung
2018-02-01
This systematic review aimed to examine the effects of dual-task balance and mobility training in people with stroke. An extensive literature search of electronic databases was conducted using MEDLINE, PubMed, EBSCO, The Cochrane Library, Web of Science, SCOPUS, and Wiley Online Library. Randomized controlled studies that assessed the effects of dual-task training in stroke patients were included in the review (last search in December 2017). The methodological quality was evaluated using the Cochrane Collaboration recommendation, and level of evidence was determined according to the criteria described by the Oxford Centre for Evidence-Based Medicine. Thirteen articles involving 457 participants were included in this systematic review. All had substantial risk of bias and thus provided level IIb evidence only. Dual-task mobility training was found to induce more improvement in single-task walking function (standardized effect size = 0.14-2.24) than single-task mobility training. Its effect on dual-task walking function was not consistent. Cognitive-motor balance training was effective in improving single-task balance function (standardized effect size = 0.27-1.82), but its effect on dual-task balance ability was not studied. The beneficial effect of dual-task training on cognitive function was reported by one study only and is thus inconclusive. There is some evidence that dual-task training can improve single-task walking and balance function in individuals with stroke. However, no firm recommendation can be made due to the weak methodology of the studies reviewed.
Information access in a dual-task context: testing a model of optimal strategy selection.
Wickens, C D; Seidler, K S
1997-09-01
Pilots were required to access information from a hierarchical aviation database by navigating under single-task conditions (Experiment 1) and when this task was time-shared with an altitude-monitoring task of varying bandwidth and priority (Experiment 2). In dual-task conditions, pilots had 2 viewports available, 1 always used for the information task and the other to be allocated to either task. Dual-task strategy, inferred from the decision of which task to allocate to the 2nd viewport, revealed that allocation was generally biased in favor of the monitoring task and was only partly sensitive to the difficulty of the 2 tasks and their relative priorities. Some dominant sources of navigational difficulties failed to adaptively influence selection strategy. The implications of the results are to provide tools for jumping to the top of the database, to provide 2 viewports into the common database, and to provide training as to the optimum viewport management strategy in a multitask environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saar, Martin O.; Seyfried, Jr., William E.; Longmire, Ellen K.
2016-06-24
A total of 12 publications and 23 abstracts were produced as a result of this study. In particular, the compilation of a thermodynamic database utilizing consistent, current thermodynamic data is a major step toward accurately modeling multi-phase fluid interactions with solids. Existing databases designed for aqueous fluids did not mesh well with existing solid phase databases. Addition of a second liquid phase (CO2) magnifies the inconsistencies between aqueous and solid thermodynamic databases. Overall, the combination of high temperature and pressure lab studies (task 1), using a purpose-built apparatus, and solid characterization (task 2), using XRCT and more developed technologies, allowed observation of dissolution and precipitation processes under CO2 reservoir conditions. These observations were combined with results from PIV experiments on multi-phase fluids (task 3) in typical flow path geometries. The results of tasks 1, 2, and 3 were compiled and integrated into numerical models utilizing Lattice-Boltzmann simulations (task 4) to realistically model the physical processes and were ultimately folded into TOUGH2 code for reservoir scale modeling (task 5). Compilation of the thermodynamic database assisted comparisons to PIV experiments (Task 3) and greatly improved Lattice Boltzmann (Task 4) and TOUGH2 simulations (Task 5). PIV (Task 3) and the experimental apparatus (Task 1) identified problem areas in the TOUGHREACT code. Additional lab experiments and coding work have been integrated into an improved numerical modeling code.
The Human Communication Research Centre dialogue database.
Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J
1992-10-01
The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.
Implementing model-based system engineering for the whole lifecycle of a spacecraft
NASA Astrophysics Data System (ADS)
Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.
2017-09-01
Design information of a spacecraft is collected over all phases in the lifecycle of a project. Much of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and keep it consistent for everyone. Nevertheless, none of the existing databases has yet covered the whole lifecycle. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently being extended for use across the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database will need to support in all these different projects, the underlying conceptual data model (CDM) has to provide tailoring and extension mechanisms. This paper explains these mechanisms as implemented in Virtual Satellite, which enable extending the CDM over the course of a project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as the MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software to collect the first design information from concurrent engineering studies, then makes use of the extension mechanisms of the CDM to introduce further information artefacts such as the functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.
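One common way to realize such tailoring without invalidating stored data is a generic category/property pattern, where new engineering concepts are attached to existing model elements instead of changing the schema. The Python sketch below illustrates that pattern only; it is not Virtual Satellite's actual CDM implementation, and all names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Category:
    """A user-definable concept, e.g. a mass budget or an electrical interface."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class StructuralElement:
    """A node in the spacecraft product tree. Extending the model means
    attaching new categories, so data stored earlier stays valid."""
    name: str
    categories: list = field(default_factory=list)

    def assign(self, category):
        self.categories.append(category)

bus = StructuralElement("S2TEP-Bus")
bus.assign(Category("MassBudget", {"mass_kg": 145.0}))
# A later lifecycle phase introduces a new discipline without a schema change.
bus.assign(Category("FunctionalElectricalArchitecture", {"bus_voltage_V": 28}))
print([c.name for c in bus.categories])
```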
Development of educational image databases and e-books for medical physics training.
Tabakov, S; Roberts, V C; Jonsson, B-A; Ljungberg, M; Lewis, C A; Wirestam, R; Strand, S-E; Lamm, I-L; Milano, F; Simmons, A; Deane, C; Goss, D; Aitken, V; Noel, A; Giraud, J-Y; Sherriff, S; Smith, P; Clarke, G; Almqvist, M; Jansson, T
2005-09-01
Medical physics education and training requires the use of extensive imaging material and specific explanations. These requirements provide an excellent background for the application of e-Learning. The EU project consortia EMERALD and EMIT developed five volumes of such materials, now used in 65 countries. EMERALD developed e-Learning materials in three areas of medical physics (X-ray diagnostic radiology, nuclear medicine and radiotherapy). EMIT developed e-Learning materials in two further areas: ultrasound and magnetic resonance imaging. This paper describes the development of these e-Learning materials (consisting of e-books and educational image databases). The e-books include tasks that help in studying various equipment and methods. The text of these PDF e-books is hyperlinked with the respective images. The e-books are used through the reader's own Internet browser. Each Image Database (IDB) includes a browser, which displays hundreds of images of equipment, block diagrams and graphs, image quality examples, artefacts, etc. Both the e-books and the IDBs are recorded on five separate CD-ROMs. A demo of these materials can be obtained from www.emerald2.net.
Global Ground Motion Prediction Equations Program
Task 2: Compile and Critically Review GMPEs. Task 3: Select or Derive a Global Set of GMPEs. Task 5: Build a Database of ... Task 6: Design the Specifications to Compile a Global Database of Soil Classification.
Executive functions, information sampling, and decision making in narcolepsy with cataplexy.
Delazer, Margarete; Högl, Birgit; Zamarian, Laura; Wenter, Johanna; Gschliesser, Viola; Ehrmann, Laura; Brandauer, Elisabeth; Cevikkol, Zehra; Frauscher, Birgit
2011-07-01
Narcolepsy with cataplexy (NC) affects neurotransmitter systems regulating emotions and cognitive functions. This study aimed to assess executive functions, information sampling, reward processing, and decision making in NC. Twenty-one NC patients and 58 healthy participants performed an extensive neuropsychological test battery. NC patients scored comparably to controls in executive function tasks assessing set shifting, reversal learning, working memory, and planning. Group differences appeared in a task measuring information sampling and reward sensitivity. NC patients gathered less information, tolerated a higher level of uncertainty, and were less influenced by reward contingencies than controls. NC patients also showed reduced learning in decision making and had significantly lower scores than controls in the fifth block of the Iowa Gambling Task. No correlations were found with measures of sleepiness. NC patients may achieve high performance in several neuropsychological domains, including executive functions. Specific differences between NC patients and controls highlight the importance of the hypocretin system in reward processing and decision making and are in line with previous neuroimaging and neurophysiological studies. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. A robust target detection and recognition algorithm requires an extensive image database: ATR algorithms use the image database in the training and testing steps, so recognition performance is directly driven by the quality of the database, and the performance of an ATA algorithm can likewise be measured effectively against an image database. There are two main ways to design an ATA / ATR database. The first, easier way is to use a scene generator, which can model objects by considering their material information, the atmospheric conditions, the detector type, and the terrain. Designing an image database with a scene generator is inexpensive and allows many different scenarios to be created quickly and easily; however, its major drawback is low fidelity, since the images are created virtually. The second, more difficult way is to build the database from real-world images. Doing so is far more costly and time-consuming; however, it offers the high fidelity that is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed, each from the perspectives of ATA and ATR separately. For the implementation stage, possible solutions and trade-offs for creating the database are proposed, and the proposed approaches are compared with regard to their pros and cons.
Rummel, Jan; Wesslein, Ann-Katrin; Meiser, Thorsten
2017-05-01
Event-based prospective memory (PM) is the ability to remember to perform an intention in response to an environmental cue. Recent microstructure models postulate four distinguishable stages of successful event-based PM fulfillment. That is, (a) the event must be noticed, (b) the intention must be retrieved, (c) the context must be verified, and (d) the intended action must be coordinated with the demands of any currently ongoing task (e.g., Marsh, Hicks, & Watson, 2002b). Whereas the cognitive processes of Stages 1, 2, and 3 have been studied more or less extensively, little is known about the processes of Stage 4 so far. To fill this gap, the authors manipulated the magnitude of response overlap between the ongoing task and the PM task to isolate Stage-4 processes. Results demonstrate that PM performance improves in the presence versus absence of a response overlap, independent of cue saliency (Experiment 1) and of demands from currently ongoing tasks (Experiment 2). Furthermore, working-memory capacity is associated with PM performance, especially when there is little response overlap (Experiments 2 and 3). Finally, PM performance benefits only from strong response overlap, that is, only when the appropriate ongoing-task and PM response keys were identical (Experiment 4). They conclude that coordinating ongoing-task and PM actions puts cognitive demands on the individual which are distinguishable from the demands imposed by cue-detection and intention-retrieval processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
SING: Subgraph search In Non-homogeneous Graphs
2010-01-01
Background: Finding the subgraphs of a graph database that are isomorphic to a given query graph has practical applications in several fields, from cheminformatics to image understanding. Since subgraph isomorphism is a computationally hard problem, indexing techniques have been intensively exploited to speed up the process. Such systems filter out those graphs which cannot contain the query, and apply a subgraph isomorphism algorithm to each residual candidate graph. The applicability of such systems is limited to databases of small graphs, because their filtering power degrades on large graphs. Results: In this paper, SING (Subgraph search In Non-homogeneous Graphs), a novel indexing system able to cope with large graphs, is presented. The method uses the notion of feature, which can be a small subgraph, subtree or path. Each graph in the database is annotated with the set of all its features. The key point is to make use of feature locality information. This idea is used to both improve the filtering performance and speed up the subgraph isomorphism task. Conclusions: Extensive tests on chemical compounds, biological networks and synthetic graphs show that the proposed system outperforms the most popular systems in query time over databases of medium and large graphs. Other specific tests show that the proposed system is effective for single large graphs. PMID:20170516
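The filter-and-verify strategy described above can be sketched in a few lines: index every database graph by a set of cheap features, discard graphs missing any query feature, and run the expensive subgraph-isomorphism test only on the survivors. This toy version, using networkx and simple label-based features, merely illustrates the idea; SING's actual features and index are more sophisticated.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def features(g):
    """Toy features: node labels plus sorted label pairs of adjacent nodes."""
    f = {("node", g.nodes[n]["label"]) for n in g}
    f |= {("edge", *sorted((g.nodes[u]["label"], g.nodes[v]["label"])))
          for u, v in g.edges}
    return f

def subgraph_search(database, query):
    qf = features(query)
    # Filter: a graph can contain the query only if it has every query feature.
    candidates = [g for g in database if qf <= features(g)]
    # Verify: exact (and expensive) subgraph-isomorphism test on the survivors.
    match = isomorphism.categorical_node_match("label", None)
    return [g for g in candidates
            if isomorphism.GraphMatcher(g, query, node_match=match)
                          .subgraph_is_isomorphic()]

q = nx.Graph(); q.add_edge(0, 1); nx.set_node_attributes(q, {0: "C", 1: "O"}, "label")
g = nx.Graph(); g.add_edges_from([(0, 1), (1, 2)])
nx.set_node_attributes(g, {0: "C", 1: "O", 2: "N"}, "label")
print(len(subgraph_search([g], q)))   # -> 1
```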
Supplier's Status for Critical Solid Propellants, Explosive, and Pyrotechnic Ingredients
NASA Technical Reports Server (NTRS)
Sims, B. L.; Painter, C. R.; Nauflett, G. W.; Cramer, R. J.; Mulder, E. J.
2000-01-01
In the early 1970's a program was initiated at the Naval Surface Warfare Center/Indian Head Division (NSWC/IHDIV) to address the well-known problems associated with availability and suppliers of critical ingredients. These critical ingredients are necessary for preparation of solid propellants and explosives manufactured by the Navy. The objective of the program was to identify primary and secondary (or back-up) vendor information for these critical ingredients, and to develop suitable alternative materials if an ingredient is unavailable. In 1992 NSWC/IHDIV funded the Chemical Propulsion Information Agency (CPIA) under a Technical Area Task (TAT) to expedite the task of creating a database listing critical ingredients used to manufacture Navy propellants and explosives based on known formulation quantities. Under this task CPIA provided employees who were 100 percent dedicated to the tasks of obtaining critical ingredient supplier information, selecting the software, and designing the interface between the computer program and the database users. TAT objectives included creating the Explosive Ingredients Source Database (EISD) for Propellant, Explosive and Pyrotechnic (PEP) critical ingredients. The goal was to create a readily accessible database, to provide users a quick-view summary of critical ingredient supplier information, and to create a centralized archive that CPIA would update and distribute. EISD funding ended in 1996. At that time, the database entries included 53 formulations and 108 critical ingredients used to manufacture Navy propellants and explosives. CPIA turned the database tasking back over to NSWC/IHDIV to maintain and distribute at their discretion. Due to significant interest in propellant/explosives critical ingredient suppliers' status, the Propellant Development and Characterization Subcommittee (PDCS) approached the JANNAF Executive Committee (EC) for authorization to continue the critical ingredient database work. In 1999, the JANNAF EC approved the PDCS panel task. This paper is designed to emphasize the necessity of maintaining a JANNAF community supported database, which monitors PEP critical ingredient suppliers' status. The final product of this task is a user-friendly, searchable database that provides a quick-view summary of critical ingredient supplier information. This database must be designed to serve the needs of JANNAF and the propellant and energetic commercial manufacturing community as well. This paper provides a summary of the type of information archived for each critical ingredient.
BIRCH: a user-oriented, locally-customizable, bioinformatics system.
Fristensky, Brian
2007-02-09
Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Agustini, Bruna Carla; Silva, Luciano Paulino; Bloch, Carlos; Bonfim, Tania M B; da Silva, Gildo Almeida
2014-06-01
Yeast identification using traditional methods that employ morphological, physiological, and biochemical characteristics can be a hard task, as it requires experienced microbiologists and rigorous control of culture conditions, where variations can lead to different outcomes. For both clinical and industrial applications, there is a growing demand for fast and accurate identification of microorganisms. Hence, molecular biology approaches have been extensively used and, more recently, protein profiling using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) has proved to be an even more efficient tool for taxonomic purposes. Nonetheless, with respect to mass spectrometry, the data available for differentiating yeast species for industrial purposes are limited, and commercially available reference databases comprise almost exclusively clinical microorganisms. In this context, studies focusing on environmental isolates are required to extend the existing databases. The aims of this study are the development of a supplementary database and the assessment of a commercial database for taxonomic identification of environmental yeasts. We used MALDI-TOF MS to create protein profiles for 845 yeast strains isolated from grape must, and 67.7% of the strains were successfully identified according to the previously available manufacturer database. The remaining 32.3% of strains were not identified due to the absence of a reference spectrum. After determining the correct taxon for these strains using molecular biology approaches, the spectra for the missing species were added to a supplementary database. This new library accurately identified, at first instance by MALDI-TOF MS, species that had previously gone unidentified, proving it to be a powerful tool for the identification of environmental yeasts.
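At its core, library-based identification matches an unknown protein profile against reference spectra. A toy version using binned intensity vectors and cosine similarity, which is an illustrative simplification and not the commercial system's actual scoring scheme, looks like this:

```python
import numpy as np

def cosine_score(spectrum_a, spectrum_b):
    """Cosine similarity between two binned intensity vectors."""
    a, b = np.asarray(spectrum_a, float), np.asarray(spectrum_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(unknown, library):
    """Return the best-matching species in a {name: spectrum} library."""
    return max(library, key=lambda name: cosine_score(unknown, library[name]))

library = {
    "Saccharomyces cerevisiae": [0.9, 0.1, 0.4, 0.0],
    "Hanseniaspora uvarum":     [0.1, 0.8, 0.0, 0.5],
}
print(identify([0.85, 0.15, 0.35, 0.05], library))  # -> Saccharomyces cerevisiae
```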
The immune epitope database: a historical retrospective of the first decade.
Salimi, Nima; Fleri, Ward; Peters, Bjoern; Sette, Alessandro
2012-10-01
As the amount of biomedical information available in the literature continues to increase, databases that aggregate this information continue to grow in importance and scope. The population of databases can occur either through fully automated text mining approaches or through manual curation by human subject experts. We here report our experiences in populating the National Institute of Allergy and Infectious Diseases sponsored Immune Epitope Database and Analysis Resource (IEDB, http://iedb.org), which was created in 2003, and as of 2012 captures the epitope information from approximately 99% of all papers published to date that describe immune epitopes (with the exception of cancer and HIV data). This was achieved using a hybrid model based on automated document categorization and extensive human expert involvement. This task required automated scanning of over 22 million PubMed abstracts followed by classification and curation of over 13 000 references, including over 7000 infectious disease-related manuscripts, over 1000 allergy-related manuscripts, roughly 4000 related to autoimmunity, and 1000 transplant/alloantigen-related manuscripts. The IEDB curation involves an unprecedented level of detail, capturing for each paper the actual experiments performed for each different epitope structure. Key to enabling this process was the extensive use of ontologies to ensure rigorous and consistent data representation as well as interoperability with other bioinformatics resources, including the Protein Data Bank, Chemical Entities of Biological Interest, and the NIAID Bioinformatics Resource Centers. A growing fraction of the IEDB data derives from direct submissions by research groups engaged in epitope discovery, and is being facilitated by the implementation of novel data submission tools. The present explosion of information contained in biological databases demands effective query and display capabilities to optimize the user experience. Accordingly, the development of original ways to query the database, on the basis of ontologically driven hierarchical trees, and display of epitope data in aggregate in a biologically intuitive yet rigorous fashion is now at the forefront of the IEDB efforts. We also highlight advances made in the realm of epitope analysis and predictive tools available in the IEDB. © 2012 The Authors. Immunology © 2012 Blackwell Publishing Ltd.
Head repositioning accuracy in patients with whiplash-associated disorders.
Feipel, Veronique; Salvia, Patrick; Klein, Helene; Rooze, Marcel
2006-01-15
Controlled study, measuring head repositioning error (HRE) using an electrogoniometric device. To compare HRE in neutral position, axial rotation, and complex postures of patients with whiplash-associated disorders (WAD) to that of control subjects. The presence of kinesthetic alterations in patients with WAD is controversial. In 26 control subjects and 29 patients with WAD (aged 22-74 years), head kinematics was sampled using a 3-dimensional electrogoniometer mounted using a harness and a helmet. All tasks were performed in a seated position. The repositioning tasks included neutral repositioning after maximal flexion-extension, eyes open and blindfolded, repositioning at 50 degrees of axial rotation, and repositioning at 50 degrees of axial rotation combined with 20 degrees of ipsilateral bending. The flexion-extension, ipsilateral bending, and axial rotation components of HRE were considered. A multiple-way repeated-measures analysis of variance was used to compare tasks and groups. The WAD group displayed a reduced flexion-extension range (P = 1.9 x 10^-4) and larger HRE during flexion-extension and repositioning tasks (P = 0.009) than controls. Neither group nor task affected maximal motion velocity. Neutral HRE of the flexion-extension component was larger in the blindfolded condition (P = 0.03). Ipsilateral bending and axial rotation HRE components were smaller than the flexion-extension component (P = 7.1 x 10^-23). For pure rotation repositioning, axial rotation HRE was significantly larger than flexion-extension and ipsilateral bending repositioning error (P = 3.0 x 10^-23). The ipsilateral bending component of HRE was significantly larger for combined tasks than for pure rotation tasks (P = 0.004). In patients with WAD, range of motion and head repositioning accuracy were reduced. However, the differences were small. Vision suppression and task type influenced HRE.
Cost effective nuclear commercial grade dedication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maletz, J.J.; Marston, M.J.
1991-01-01
This paper describes a new computerized database method to create/edit/view specification technical data sheets (mini-specifications) for procurement of spare parts for nuclear facility maintenance and to develop information that could support possible future facility life extension efforts. This method may reduce cost when compared with current manual methods. The use of standardized technical data sheets (mini-specifications) for items of the same category improves efficiency. This method can be used for a variety of tasks, including: Nuclear safety-related procurement; Non-safety related procurement; Commercial grade item procurement/dedication; Evaluation of replacement items. This program will assist the nuclear facility in upgrading its procurement activities consistent with the recent NUMARC Procurement Initiative. Proper utilization of the program will assist the user in assuring that the procured items are correct for the applications, provide data to assist in detecting fraudulent materials, minimize human error in withdrawing database information, improve data retrievability, improve traceability, and reduce long-term procurement costs.
Knowns and unknowns in metabolomics identified by multidimensional NMR and hybrid MS/NMR methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bingol, Kerem; Brüschweiler, Rafael
Metabolomics continues to make rapid progress through the development of new and better methods and their applications to gain insight into the metabolism of a wide range of different biological systems from a systems biology perspective. Customization of NMR databases and search tools allows the faster and more accurate identification of known metabolites, whereas the identification of unknowns, without a need for extensive purification, requires new strategies to integrate NMR with mass spectrometry, cheminformatics, and computational methods. For some applications, the use of covalent and non-covalent attachments in the form of labeled tags or nanoparticles can significantly reduce the complexity of these tasks.
PDS: A Performance Database Server
Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...
1994-01-01
The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.
Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming
2016-07-08
The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. The server also makes it possible to use transmembrane protein (TMP) reference databases, allowing even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological predictions for TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
Bag-of-visual-ngrams for histopathology image classification
NASA Astrophysics Data System (ADS)
López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.
2013-11-01
This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
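A minimal sketch of the representation: local descriptors are quantized into visual words with a learned codebook, and the image is then described by counts of n consecutive words along some fixed scan order (n = 1 recovers plain BoVW). The scan order, the random stand-in descriptors, and the codebook training below are assumptions for illustration; the paper's n-gram codebook construction is more involved.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def quantize(descriptors, codebook):
    """Map each local descriptor to its nearest visual word (cluster id)."""
    return codebook.predict(descriptors).tolist()

def bag_of_ngrams(words, n):
    """Counts of visual n-grams over the word sequence; n=1 is plain BoVW."""
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Train a visual vocabulary on descriptors pooled from training images.
train_descriptors = np.random.rand(5000, 128)   # stand-in for SIFT-like features
codebook = KMeans(n_clusters=100, n_init=10).fit(train_descriptors)

image_descriptors = np.random.rand(300, 128)    # one image's local features
words = quantize(image_descriptors, codebook)
representation = bag_of_ngrams(words, 1) + bag_of_ngrams(words, 2)  # words + bigrams
```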
A Communication Framework for Collaborative Defense
2009-02-28
We have been able to provide sufficient automation to build up the most extensive application signature database in the world with a fraction of... techniques that are well understood in the context of databases. These techniques allow users to quickly scan for the existence of a key in a database.
NASA Astrophysics Data System (ADS)
Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.
2016-12-01
Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.
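As a sketch of the machine-based consumption such an API enables, the snippet below queries a records endpoint and walks the JSON response; the URL, query parameters, and response fields are hypothetical placeholders, not ODR's actual interface.

```python
import requests

# Hypothetical endpoint and fields, for illustration only.
resp = requests.get(
    "https://example.org/odr/api/v1/records",
    params={"template": "mineral_spectra", "q": "olivine"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("records", []):
    print(record.get("id"), record.get("metadata", {}).get("sample_name"))
```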
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive effort for low-cost sequencing in the past few years has led to the growth of complete genome databases. In parallel with this effort, and in response to a strong need, fast and cost-effective methods and applications have been developed to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database creates opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis. PMID:26884678
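The pipeline's core operation, genome-wide k-mer frequency counting, follows the classic MapReduce word-count pattern. Below is a single-machine Python equivalent for illustration (the real HTSFinder distributes this step over Hadoop); the signature test at the end is a simplification of the target/non-target comparison that, among other things, ignores reverse complements.

```python
from collections import Counter

def kmers(sequence, k):
    """Emit every k-mer of a sequence (the 'map' step)."""
    seq = sequence.upper()
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def count_kmers(sequences, k):
    """Aggregate k-mer frequencies across all sequences (the 'reduce' step)."""
    counts = Counter()
    for seq in sequences:
        counts.update(kmers(seq, k))
    return counts

target = count_kmers(["ACGTACGT", "ACGTTTGA"], k=4)
nontarget = count_kmers(["TTTTACGA"], k=4)
# Candidate unique signatures: k-mers present in the target genomes but
# never observed in the non-target genomes.
signatures = set(target) - set(nontarget)
print(sorted(signatures))
```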
Predicting human protein function with multi-task deep neural networks.
Fa, Rui; Cozzetto, Domenico; Wan, Cen; Jones, David T
2018-01-01
Machine learning methods for protein function prediction are urgently needed, especially now that a substantial fraction of known sequences remains unannotated despite the extensive use of functional assignments based on sequence similarity. One major bottleneck supervised learning faces in protein function prediction is the structured, multi-label nature of the problem, because biological roles are represented by lists of terms from hierarchically organised controlled vocabularies such as the Gene Ontology. In this work, we build on recent developments in the area of deep learning and investigate the usefulness of multi-task deep neural networks (MTDNN), which consist of upstream shared layers upon which are stacked, in parallel, as many independent modules (additional hidden layers with their own output units) as there are output GO terms (the tasks). MTDNN learns individual tasks partially from shared representations and partially from task-specific characteristics. When no close homologues with experimentally validated functions can be identified, MTDNN gives more accurate predictions than baseline methods based on annotation frequencies in public databases or homology transfers. More importantly, the results show that MTDNN binary classification accuracy is higher than that of alternative machine learning methods that do not exploit commonalities and differences among prediction tasks. Interestingly, compared with a single-task predictor, the performance improvement is not linearly correlated with the number of tasks in MTDNN; medium-sized models provided more improvement in our case. One advantage of MTDNN is that, given a set of features, it does not require the bootstrap feature-selection procedure that traditional machine learning algorithms typically employ. Overall, the results indicate that the proposed MTDNN algorithm improves the performance of protein function prediction. On the other hand, there is still considerable room for deep learning techniques to further enhance prediction ability.
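A minimal PyTorch sketch of the architecture as described: a shared trunk feeding one small independent module per GO term. The layer sizes, the choice of two shared layers, and the input dimensionality are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MTDNN(nn.Module):
    """Shared layers learn a common representation; one head per GO term (task)."""
    def __init__(self, n_features, n_tasks, hidden=256):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One independent module (hidden layer + output unit) per task.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))
            for _ in range(n_tasks)
        )

    def forward(self, x):
        z = self.shared(x)
        # One probability per task: binary classification for each GO term.
        return torch.sigmoid(torch.cat([head(z) for head in self.heads], dim=1))

model = MTDNN(n_features=1024, n_tasks=50)
probs = model(torch.randn(8, 1024))   # shape (8, 50): per-protein, per-term scores
```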
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
...-AM08 Federal Acquisition Regulation; Extension of Sunset Date for Protests of Task and Delivery Orders... against the award of task or delivery orders by DoD, NASA, and the Coast Guard from May 27, 2011, to... protests against the award of task and delivery orders from May 27, 2011, to September 30, 2016, but only...
The design and implementation of EPL: An event pattern language for active databases
NASA Technical Reports Server (NTRS)
Giuffrida, G.; Zaniolo, C.
1994-01-01
The growing demand for intelligent information systems requires closer coupling of rule-based reasoning engines, such as CLIPS, with advanced data base management systems (DBMS). For instance, several commercial DBMS now support the notion of triggers that monitor events and transactions occurring in the database and fire induced actions, which perform a variety of critical functions, including safeguarding the integrity of data, monitoring access, and recording volatile information needed by administrators, analysts, and expert systems to perform assorted tasks; examples of these tasks include security enforcement, market studies, knowledge discovery, and link analysis. At UCLA, we designed and implemented the event pattern language (EPL) which is capable of detecting and acting upon complex patterns of events which are temporally related to each other. For instance, a plant manager should be notified when a certain pattern of overheating repeats itself over time in a chemical process; likewise, proper notification is required when a suspicious sequence of bank transactions is executed within a certain time limit. The EPL prototype is built in CLIPS to operate on top of Sybase, a commercial relational DBMS, where actions can be triggered by events such as simple database updates, insertions, and deletions. The rule-based syntax of EPL allows the sequences of goals in rules to be interpreted as sequences of temporal events; each goal can correspond to either (1) a simple event, or (2) a (possibly negated) event/condition predicate, or (3) a complex event defined as the disjunction and repetition of other events. Various extensions have been added to CLIPS in order to tailor the interface with Sybase and its open client/server architecture.
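EPL itself is implemented in CLIPS on top of Sybase; as a language-neutral illustration of the kind of temporal pattern it detects, the following Python sketch fires an action when an event repeats a given number of times within a time window. All names and thresholds are invented for the example, and this is not EPL syntax.

```python
from collections import deque

def make_pattern_monitor(event_type, repeats, window_seconds, action):
    """Return a callback that fires `action` when `event_type` occurs
    `repeats` times within `window_seconds`."""
    timestamps = deque()

    def on_event(etype, t):
        if etype != event_type:
            return
        timestamps.append(t)
        # Drop occurrences that fell out of the sliding window.
        while timestamps and t - timestamps[0] > window_seconds:
            timestamps.popleft()
        if len(timestamps) >= repeats:
            action(timestamps)
            timestamps.clear()
    return on_event

notify = lambda ts: print(f"ALERT: overheating repeated at {list(ts)}")
monitor = make_pattern_monitor("overheat", repeats=3, window_seconds=600,
                               action=notify)
for etype, t in [("overheat", 0), ("update", 90),
                 ("overheat", 200), ("overheat", 450)]:
    monitor(etype, t)   # fires on the third overheat within 600 s
```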
Le, Hoang-Quynh; Tran, Mai-Vu; Dang, Thanh Hai; Ha, Quang-Thuy; Collier, Nigel
2016-07-01
The BioCreative V chemical-disease relation (CDR) track was proposed to accelerate the progress of text mining in facilitating integrative understanding of chemicals, diseases and their relations. In this article, we describe an extension of our system (namely UET-CAM) that participated in the BioCreative V CDR. The original UET-CAM system's performance was ranked fourth among 18 participating systems by the BioCreative CDR track committee. In the Disease Named Entity Recognition and Normalization (DNER) phase, our system employed joint inference (decoding) with a perceptron-based named entity recognizer (NER) and a back-off model with Semantic Supervised Indexing and Skip-gram for named entity normalization. In the chemical-induced disease (CID) relation extraction phase, we proposed a pipeline that includes a coreference resolution module and a Support Vector Machine relation extraction model. The former module utilized a multi-pass sieve to extend entity recall. In this article, the UET-CAM system was improved by adding a 'silver' CID corpus to train the prediction model. This silver-standard corpus of more than 50 thousand sentences was automatically built based on the Comparative Toxicogenomics Database (CTD). We evaluated our method on the CDR test set. Results showed that our system could reach state-of-the-art performance, with F1 scores of 82.44 for the DNER task and 58.90 for the CID task. Analysis demonstrated substantial benefits of both the multi-pass sieve coreference resolution method (F1 +4.13%) and the silver CID corpus (F1 +7.3%). Database URL: SilverCID, the silver-standard corpus for CID relation extraction, is freely available online at: https://zenodo.org/record/34530 (doi:10.5281/zenodo.34530). © The Author(s) 2016. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.
2015-12-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
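As an illustration of the SPARQL access the abstract mentions, the following Python sketch queries a VIVO endpoint for datasets and the people connected to them. The endpoint URL is a placeholder and the exact VIVO-ISF property names used here are assumptions.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://vivo.example.org/api/sparqlQuery")  # hypothetical endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX vivo: <http://vivoweb.org/ontology/core#>
    # Datasets and the people related to them (property names illustrative)
    SELECT ?dataset ?label ?person
    WHERE {
        ?dataset a vivo:Dataset ;
                 rdfs:label ?label ;
                 vivo:relatedBy ?rel .
        ?rel vivo:relates ?person .
    }
    LIMIT 20
""")
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"], "->", row["person"]["value"])
```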
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research presents a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different DBMS: relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate for maintaining standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results for improved relational systems such as archetype relational mapping (ARM) on the same data. However, the interpolation of the doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or editing and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
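For illustration, a document-oriented query of the kind timed in this protocol might be issued against MongoDB as below; the collection name and the EHR extract fields are assumptions, not the study's actual schema.

```python
from pymongo import MongoClient
import time

client = MongoClient("mongodb://localhost:27017")
extracts = client["ehr_benchmark"]["extracts"]

# Hypothetical query over standardized EHR extract documents.
query = {
    "composition.archetype_id": "openEHR-EHR-COMPOSITION.report.v1",
    "composition.context.start_time": {"$gte": "2017-01-01"},
}

t0 = time.perf_counter()
n = extracts.count_documents(query)
elapsed = time.perf_counter() - t0
print(f"{n} matching extracts in {elapsed:.3f}s")
```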
Preschoolers’ Novel Noun Extensions: Shape in Spite of Knowing Better
Saalbach, Henrik; Schalk, Lennart
2011-01-01
We examined the puzzling research findings that when extending novel nouns, preschoolers rely on shape similarity (rather than categorical relations) while in other task contexts (e.g., property induction) they rely on categorical relations. Taking into account research on children’s word learning, categorization, and inductive inference, we assume that preschoolers have both a shape-based and a category-based word extension strategy available and can switch between these two depending on which information is easily available. To this end, we tested preschoolers on two versions of a novel-noun label extension task. First, we paralleled the standard extension task commonly used by previous research. In this case, as expected, preschoolers predominantly selected same-shape items. Second, we supported preschoolers’ retrieval of item-related information from memory by asking them simple questions about each item prior to the label extension task. Here, they switched to a category-based strategy, thus predominantly selecting same-category items. Finally, we revealed that this shape-to-category shift is specific to the word learning context, as we did not find it in a non-lexical classification task. These findings support our assumption that preschoolers’ decisions about word extension change in accordance with the availability of information (from task context or by memory retrieval). We conclude by suggesting that preschoolers’ noun extensions can be conceptualized within the framework of heuristic decision-making. This provides an ecologically plausible processing account with respect to which information is selected and how this information is integrated to act as a guideline for decision-making when novel words have to be generalized. PMID:22073036
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
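The paper does not publish the schema, but a minimal sketch of the kind of structure it implies (tasks linked to requirements, teams, review schedules and external product repositories) could look like this, here using SQLite for self-containment; all names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE task (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        team TEXT,
        review_date TEXT,            -- ISO-8601 review schedule entry
        requirement_id TEXT,         -- link to the requirement being analyzed
        product_url TEXT             -- external repository holding the product
    );
""")
con.execute(
    "INSERT INTO task (title, team, review_date, requirement_id, product_url) "
    "VALUES (?, ?, ?, ?, ?)",
    ("Fill TBD in loads requirement", "Structures", "2007-03-15",
     "REQ-1234", "https://repo.example.org/products/42"),
)
# A filtered report of upcoming reviews, exportable to a spreadsheet.
for row in con.execute("SELECT title, team, review_date FROM task "
                       "WHERE review_date >= '2007-03-01' ORDER BY review_date"):
    print(row)
```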
Advanced instrumentation: Technology database enhancement, volume 4, appendix G
NASA Technical Reports Server (NTRS)
1991-01-01
The purpose of this task was to expand the McDonnell Douglas Space Systems Company (MDSSC) Sensors Database by providing additional information on instruments and sensors applicable to physical/chemical Environmental Control and Life Support Systems (P/C ECLSS) or Closed Ecological Life Support Systems (CELSS) that were not previously included. The Sensors Database was reviewed in order to determine the types of data required, define the data categories, and develop an understanding of the data record structure. An assessment of the MDSSC Sensors Database identified limitations and problems in the database. Guidelines and solutions were developed to address these limitations and problems so that the requirements of the task could be fulfilled.
ERIC Educational Resources Information Center
Northeast Regional Center for Rural Development, University Park, PA.
This publication reports accomplishments of the National Extension Task Force for Community Leadership and makes recommendations for strengthening educational programming in community leadership in the Cooperative Extension System. The recommendations are presented in the context of a "white paper" rather than formal policy. High…
WebBee: A Platform for Secure Coordination and Communication in Crisis Scenarios
2008-04-16
implemented through database triggers. The Webbee Database Server contains an Information Server, which is a Postgres database with PostGIS [5] extension...sends it to the target user. The heavy lifting for this mechanism is done through an extension of Postgres triggers (Figures 6.1 and 6.2), resulting...in fewer queries and better performance. Trigger support in Postgres is table-based and comparatively primitive: with n table triggers, an update
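As a hedged illustration of the trigger-based notification mechanism this report describes, the following sketch installs a Postgres trigger from Python; the table, function and channel names are invented, and this is not WebBee's actual code.

```python
import psycopg2

ddl = """
CREATE OR REPLACE FUNCTION notify_update() RETURNS trigger AS $$
BEGIN
    -- Push a lightweight notification instead of re-querying the table.
    PERFORM pg_notify('crisis_updates', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS report_update ON reports;
CREATE TRIGGER report_update
    AFTER INSERT OR UPDATE ON reports
    FOR EACH ROW EXECUTE PROCEDURE notify_update();
"""

with psycopg2.connect("dbname=webbee") as conn:   # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(ddl)
```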
Fast 3D shape screening of large chemical databases through alignment-recycling
Fontaine, Fabien; Bolton, Evan; Borodina, Yulia; Bryant, Stephen H
2007-01-01
Background Large chemical databases require fast, efficient, and simple ways of looking for similar structures. Although such tasks are now fairly well resolved for graph-based similarity queries, they remain an issue for 3D approaches, particularly for those based on 3D shape overlays. Inspired by a recent technique developed to compare molecular shapes, we designed a hybrid methodology, alignment-recycling, that enables efficient retrieval and alignment of structures with similar 3D shapes. Results Using a dataset of more than one million PubChem compounds of limited size (< 28 heavy atoms) and flexibility (< 6 rotatable bonds), we obtained a set of a few thousand diverse structures covering entirely the 3D shape space of the conformers of the dataset. Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit list overlap on average. Conclusion Overlay-based 3D methods are computationally demanding when searching large databases. Alignment-recycling reduces the CPU time to perform shape similarity searches by breaking the alignment problem into three steps: selection of diverse shapes to describe the database shape-space; overlay of the database conformers to the diverse shapes; and non-optimized overlay of query and database conformers using common reference shapes. The precomputation, required by the first two steps, is a significant cost of the method; however, once performed, querying is two orders of magnitude faster. Extensions and variations of this methodology, for example, to handle more flexible and larger small-molecules are discussed. PMID:17880744
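The core of alignment-recycling is composing precomputed transformation matrices instead of re-optimizing each overlay: if Q aligns the query to a reference shape and D aligns a database conformer to the same reference, then inv(D) @ Q aligns the query directly to the conformer. A small numpy sketch, with placeholder matrices:

```python
import numpy as np

def compose(query_to_ref: np.ndarray, db_to_ref: np.ndarray) -> np.ndarray:
    """Recycle two precomputed overlays into a query-to-database alignment."""
    return np.linalg.inv(db_to_ref) @ query_to_ref

def apply_transform(T: np.ndarray, coords: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) coordinate array."""
    homo = np.hstack([coords, np.ones((len(coords), 1))])
    return (homo @ T.T)[:, :3]

# Placeholder transforms of the kind gathered during precomputation.
Q = np.eye(4); Q[:3, 3] = [1.0, 0.0, 0.0]      # query -> reference shape
D = np.eye(4); D[:3, 3] = [0.0, 2.0, 0.0]      # db conformer -> reference shape
query_coords = np.random.rand(25, 3)            # 25 heavy atoms
aligned = apply_transform(compose(Q, D), query_coords)
```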
Schwartz, Yannick; Barbot, Alexis; Thyreau, Benjamin; Frouin, Vincent; Varoquaux, Gaël; Siram, Aditya; Marcus, Daniel S.; Poline, Jean-Baptiste
2012-01-01
As neuroimaging databases grow in size and complexity, the time researchers spend investigating and managing the data increases to the expense of data analysis. As a result, investigators rely more and more heavily on scripting using high-level languages to automate data management and processing tasks. For this, a structured and programmatic access to the data store is necessary. Web services are a first step toward this goal. They however lack in functionality and ease of use because they provide only low-level interfaces to databases. We introduce here PyXNAT, a Python module that interacts with The Extensible Neuroimaging Archive Toolkit (XNAT) through native Python calls across multiple operating systems. The choice of Python enables PyXNAT to expose the XNAT Web Services and unify their features with a higher level and more expressive language. PyXNAT provides XNAT users direct access to all the scientific packages in Python. Finally PyXNAT aims to be efficient and easy to use, both as a back-end library to build XNAT clients and as an alternative front-end from the command line. PMID:22654752
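A short session with pyxnat might look as follows; the server URL, credentials and identifiers are placeholders, and only documented parts of the pyxnat interface are used.

```python
from pyxnat import Interface

central = Interface(server="https://central.xnat.org",
                    user="demo_user", password="demo_pass")

# Browse projects through native Python calls.
for project_id in central.select.projects().get():
    print(project_id)

# Drill down to a subject (IDs are placeholders).
subject = central.select.project("MyProject").subject("SUBJ001")
if subject.exists():
    print(subject.label())
```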
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each one involves a combination of several elementary or intermediate kernels, and results into a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semisupervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gain, compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
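As a rough illustration of the idea (a sketch under stated assumptions, not the authors' formulation), a two-layer deep kernel can be built by combining elementary kernels with nonnegative weights and applying an activation that preserves positive semi-definiteness, such as elementwise exponentiation:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(X, Y):
    return X @ Y.T

def deep_kernel(X, Y, w1=(0.7, 0.3), w2=1.0):
    # Layer 1: nonnegative combination of elementary kernels.
    K1 = w1[0] * rbf(X, Y) + w1[1] * linear(X, Y)
    # Layer 2: elementwise exponential of the weighted intermediate kernel;
    # exp of a PSD kernel (with nonnegative scale) remains a valid kernel.
    return np.exp(w2 * K1)

X = np.random.rand(10, 5)
K = deep_kernel(X, X)   # 10x10 Gram matrix, usable in an SVM
```

In the paper the weights are learned (supervised, unsupervised or semi-supervised); here they are fixed for brevity.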
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Airport take-off noise assessment aimed at identifying responsible aircraft classes.
Sanchez-Perez, Luis A; Sanchez-Fernandez, Luis P; Shaout, Adnan; Suarez-Guerra, Sergio
2016-01-15
Assessment of aircraft noise is an important task for today's airports in the fight against environmental noise pollution, given recent findings on the negative effects of noise exposure on human health. Noise monitoring and estimation around airports mostly use aircraft noise signals only for computing statistical indicators and depend on additional data sources to determine required inputs such as the aircraft class responsible for noise pollution. Consequently, efforts to improve noise monitoring and estimation systems have focused on methods for obtaining more information from aircraft noise signals, especially real-time aircraft class recognition. This paper therefore proposes a multilayer neural-fuzzy model for aircraft class recognition based on take-off noise signal segmentation. It uses a fuzzy inference system to build a final response for each class p based on the aggregation of K parallel neural network outputs Op(k) with respect to Linear Predictive Coding (LPC) features extracted from K adjacent signal segments. Based on extensive experiments over two databases with real-time take-off noise measurements, the proposed model performs better than other methods in the literature, particularly when aircraft classes are strongly correlated to each other. A new strictly cross-checked database is introduced, including more complex classes and real-time take-off noise measurements from modern aircraft. The new model is at least 5% more accurate with respect to the previous database and successfully classifies 87% of measurements in the new database. Copyright © 2015 Elsevier B.V. All rights reserved.
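A simplified sketch of the aggregation stage: K per-segment classifier outputs Op(k) are combined into a single response per class p. A weighted average stands in here for the paper's fuzzy inference system, whose exact rules are not given in the abstract.

```python
import numpy as np

def aggregate(segment_outputs: np.ndarray, weights=None) -> int:
    """segment_outputs: (K, P) array, one softmax row per signal segment."""
    K, P = segment_outputs.shape
    w = np.ones(K) / K if weights is None else np.asarray(weights)
    class_scores = w @ segment_outputs          # (P,) aggregated response
    return int(np.argmax(class_scores))         # predicted aircraft class

# Three adjacent take-off segments, four aircraft classes (toy numbers).
outputs = np.array([[0.1, 0.6, 0.2, 0.1],
                    [0.2, 0.5, 0.2, 0.1],
                    [0.1, 0.7, 0.1, 0.1]])
print("predicted class:", aggregate(outputs))
```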
REFGEN and TREENAMER: Automated Sequence Data Handling for Phylogenetic Analysis in the Genomic Era
Leonard, Guy; Stevens, Jamie R.; Richards, Thomas A.
2009-01-01
The phylogenetic analysis of nucleotide sequences and increasingly that of amino acid sequences is used to address a number of biological questions. Access to extensive datasets, including numerous genome projects, means that standard phylogenetic analyses can include many hundreds of sequences. Unfortunately, most phylogenetic analysis programs do not tolerate the sequence naming conventions of genome databases. Managing large numbers of sequences and standardizing sequence labels for use in phylogenetic analysis programs can be a time consuming and laborious task. Here we report the availability of an online resource for the management of gene sequences recovered from public access genome databases such as GenBank. These web utilities include the facility for renaming every sequence in a FASTA alignment file, with each sequence label derived from a user-defined combination of the species name and/or database accession number. This facility enables the user to keep track of the branching order of the sequences/taxa during multiple tree calculations and re-optimisations. Post phylogenetic analysis, these webpages can then be used to rename every label in the subsequent tree files (with a user-defined combination of species name and/or database accession number). Together these programs drastically reduce the time required for managing sequence alignments and labelling phylogenetic figures. Additional features of our platform include the automatic removal of identical accession numbers (recorded in the report file) and generation of species and accession number lists for use in supplementary materials or figure legends. PMID:19812722
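A stand-alone sketch of the relabeling task these utilities automate is shown below; the header convention parsed here (accession first, species in square brackets) is an assumption, not necessarily the exact formats REFGEN supports.

```python
import re

def rename_fasta(in_path, out_path, report_path):
    """Rewrite FASTA labels as species_accession; record old->new mapping."""
    mapping = {}
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            if line.startswith(">"):
                acc = line[1:].split()[0]
                species = re.search(r"\[(.+?)\]", line)
                tag = species.group(1).replace(" ", "_") if species else "unknown"
                new_label = f">{tag}_{acc}"
                mapping[new_label] = line.strip()
                fout.write(new_label + "\n")
            else:
                fout.write(line)
    # Report file keeps track of original labels for figure legends.
    with open(report_path, "w") as rep:
        for new, old in mapping.items():
            rep.write(f"{new}\t{old}\n")
```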
Medial temporal lobe contributions to short-term memory for faces.
Race, Elizabeth; LaRocque, Karen F; Keane, Margaret M; Verfaellie, Mieke
2013-11-01
The role of the medial temporal lobes (MTL) in short-term memory (STM) remains a matter of debate. Whereas imaging studies commonly show hippocampal activation during short-delay memory tasks, evidence from amnesic patients with MTL lesions is mixed. It has been argued that apparent STM impairments in amnesia may reflect long-term memory (LTM) contributions to performance. We challenge this conclusion by demonstrating that MTL amnesic patients show impaired delayed matching-to-sample (DMS) for faces in a task that meets both a traditional delay-based and a recently proposed distractor-based criterion for classification as an STM task. In Experiment 1, we demonstrate that our face DMS task meets the proposed distractor-based criterion for STM classification, in that extensive processing of delay-period distractor stimuli disrupts performance of healthy individuals. In Experiment 2, MTL amnesic patients with lesions extending into anterior subhippocampal cortex, but not patients with lesions limited to the hippocampus, show impaired performance on this task without distraction at delays as short as 8 s, within temporal range of delay-based STM classification, in the context of intact perceptual matching performance. Experiment 3 provides support for the hypothesis that STM for faces relies on configural processing by showing that the extent to which healthy participants' performance is disrupted by interference depends on the configural demands of the distractor task. Together, these findings are consistent with the notion that the amnesic impairment in STM for faces reflects a deficit in configural processing associated with subhippocampal cortices and provide novel evidence that the MTL supports cognition beyond the LTM domain. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants had more error for invalid compared to valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Cascade heterogeneous face sketch-photo synthesis via dual-scale Markov Network
NASA Astrophysics Data System (ADS)
Yao, Saisai; Chen, Zhenxue; Jia, Yunyi; Liu, Chengyun
2018-03-01
Heterogeneous face sketch-photo synthesis is an important and challenging task in computer vision, which has been widely applied in law enforcement and digital entertainment. Motivated by the different synthesis results obtained at different scales, this paper proposes a cascade sketch-photo synthesis method based on a dual-scale Markov Network. First, a larger-scale Markov Network is used to synthesize the initial sketches, and the local vertical and horizontal neighbour search (LVHNS) method is used to find the neighbour patches of test patches in the training set. Then, the initial sketches and test photos are jointly entered into a smaller-scale Markov Network. Finally, the fine sketches are obtained after the cascade synthesis process. Extensive experimental results on various databases demonstrate the superiority of the proposed method compared with several state-of-the-art methods.
Teo, W P; Rodrigues, J P; Mastaglia, F L; Thickbroom, G W
2013-06-01
Repetitive finger tapping is a well-established clinical test for the evaluation of parkinsonian bradykinesia, but few studies have investigated other finger movement modalities. We compared the kinematic changes (movement rate and amplitude) and response to levodopa during a conventional index finger-thumb-tapping task and an unconstrained index finger flexion-extension task performed at maximal voluntary rate (MVR) for 20 s in 11 individuals with levodopa-responsive Parkinson's disease (OFF and ON) and 10 healthy age-matched controls. Between-task comparisons showed that for all conditions, the initial movement rate was greater for the unconstrained flexion-extension task than the tapping task. Movement rate in the OFF state was slower than in controls for both tasks and normalized in the ON state. The movement amplitude was also reduced for both tasks in OFF and increased in the ON state but did not reach control levels. The rate and amplitude of movement declined significantly for both tasks under all conditions (OFF/ON and controls). The time course of rate decline was comparable for both tasks and was similar in OFF/ON and controls, whereas the tapping task was associated with a greater decline in movement amplitude, both in controls and ON, but not OFF. The findings indicate that both finger movement tasks show similar kinematic changes during a 20-s sustained MVR, but that movement amplitude is less well sustained during the tapping task than the unconstrained finger movement task. Both movement rate and amplitude improved with levodopa; however, movement rate was more levodopa responsive than amplitude.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists with those difficult tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified according to nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database as a Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now contains 379 exams, 838 nodules, and 8237 images (4029 CT scans and 4208 manually segmented nodules) and is hosted in a MongoDB instance on a cloud infrastructure.
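For illustration only, one nodule record in such a document-oriented schema might be stored and queried as below; all field names and the host are invented, not the published schema.

```python
from pymongo import MongoClient

nodules = MongoClient("mongodb://cloud.example.org")["lidc"]["nodules"]
nodules.insert_one({
    "exam_id": "LIDC-IDRI-0001",
    "nodule_id": 1,
    "texture_attributes_3d": [0.83, 0.12, 0.45],   # volumetric descriptors
    "subjective_characteristics": {                # radiologist ratings
        "malignancy": 4, "spiculation": 2, "texture": 5,
    },
    "images": ["ct_0001_slice_042.dcm"],
    "segmented": True,
})
# Retrieve likely-malignant nodules for a classification experiment.
hits = nodules.find({"subjective_characteristics.malignancy": {"$gte": 4}})
```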
X-48B Phase 1 Flight Maneuver Database and ICP Airspace Constraint Analysis
NASA Technical Reports Server (NTRS)
Fast, Peter Alan
2010-01-01
This report describes the work performed during the summer of 2010 by Peter Fast. The main tasks assigned were to update and improve the X-48 Flight Maneuver Database and to conduct an airspace constraint analysis for the Remotely Operated Aircraft Area used to flight-test Unmanned Aerial Vehicles. The final task was to develop and demonstrate a working knowledge of flight control theory.
A dedicated database system for handling multi-level data in systems biology.
Pornputtapong, Natapol; Wanichthanarak, Kwanjeera; Nilsson, Avlant; Nookaew, Intawat; Nielsen, Jens
2014-01-01
Advances in high-throughput technologies have enabled extensive generation of multi-level omics data. These data are crucial for systems biology research, though they are complex, heterogeneous, highly dynamic, incomplete and distributed among public databases. This leads to difficulties in data accessibility and often results in errors when data are merged and integrated from varied resources. Therefore, integration and management of systems biological data remain very challenging. To overcome this, we designed and developed a dedicated database system that can serve and solve the vital issues in data management and thereby facilitate data integration, modeling and analysis in systems biology within a sole database. In addition, a yeast data repository was implemented as an integrated database environment which is operated by the database system. Two applications were implemented to demonstrate the extensibility and utilization of the system. Both illustrate how the user can access the database via the web query function and implemented scripts. These scripts are specific for two sample cases: 1) detecting the pheromone pathway in protein interaction networks; and 2) finding metabolic reactions regulated by Snf1 kinase. In this study we present the design of a database system which offers an extensible environment to efficiently capture the majority of biological entities and relations encountered in systems biology. Critical functions and control processes were designed and implemented to ensure consistent, efficient, secure and reliable transactions. The two sample cases on the yeast integrated data clearly demonstrate the value of a sole database environment for systems biology research.
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g., provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g., other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
Wiley, Laura K.; Sivley, R. Michael; Bush, William S.
2013-01-01
Efficient storage and retrieval of genomic annotations based on range intervals is necessary, given the amount of data produced by next-generation sequencing studies. The indexing strategies of relational database systems (such as MySQL) greatly inhibit their use in genomic annotation tasks. This has led to the development of stand-alone applications that are dependent on flat-file libraries. In this work, we introduce MyNCList, an implementation of the NCList data structure within a MySQL database. MyNCList enables the storage, update and rapid retrieval of genomic annotations from the convenience of a relational database system. Range-based annotations of 1 million variants are retrieved in under a minute, making this approach feasible for whole-genome annotation tasks. Database URL: https://github.com/bushlab/mynclist PMID:23894185
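For context, the plain interval-overlap query that relational indexes handle poorly looks like the following; this sketch shows the retrieval task only and does not reproduce the NCList containment ordering that MyNCList adds. Connection parameters and table names are assumptions.

```python
import mysql.connector

conn = mysql.connector.connect(user="annot", database="genome_annotations")
cur = conn.cursor()

chrom, start, end = "chr7", 140_450_000, 140_460_000
cur.execute(
    """
    SELECT name, txStart, txEnd
    FROM annotations
    WHERE chrom = %s AND txStart <= %s AND txEnd >= %s
    """,
    (chrom, end, start),   # classic interval-overlap predicate
)
for name, s, e in cur.fetchall():
    print(name, s, e)
```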
Comparing Top-Down with Bottom-Up Approaches: Teaching Data Modeling
ERIC Educational Resources Information Center
Kung, Hsiang-Jui; Kung, LeeAnn; Gardiner, Adrian
2013-01-01
Conceptual database design is a difficult task for novice database designers, such as students, and is also therefore particularly challenging for database educators to teach. In the teaching of database design, two general approaches are frequently emphasized: top-down and bottom-up. In this paper, we present an empirical comparison of students'…
Teaching Advanced SQL Skills: Text Bulk Loading
ERIC Educational Resources Information Center
Olsen, David; Hauser, Karina
2007-01-01
Studies show that advanced database skills are important for students to be prepared for today's highly competitive job market. A common task for database administrators is to insert a large amount of data into a database. This paper illustrates how an up-to-date, advanced database topic, namely bulk insert, can be incorporated into a database…
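A short example of the topic taught: T-SQL's BULK INSERT statement, executed here from Python via pyodbc. The server, table and file paths are placeholders.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Sales;Trusted_Connection=yes;", autocommit=True)

# Load a CSV file in one server-side operation, skipping the header row.
conn.execute("""
    BULK INSERT dbo.Orders
    FROM 'C:\\data\\orders.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2)
""")
```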
Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets
NASA Astrophysics Data System (ADS)
Juric, Mario
2011-01-01
The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping 'cells' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce 'kernels' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 billion rows of Pan-STARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.
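A schematic Python rendering of the per-cell map/reduce model (not LSD's actual API): a mapper kernel runs over each spatial/temporal cell and a reducer merges the keyed partial results. All data and field names are toy examples.

```python
from collections import defaultdict
from multiprocessing import Pool

def mapper(cell):
    # Example kernel: per-cell star counts binned by magnitude.
    counts = defaultdict(int)
    for row in cell:
        counts[int(row["r_mag"])] += 1
    return counts

def reduce_counts(partials):
    # Merge the keyed partial results from every cell.
    total = defaultdict(int)
    for part in partials:
        for mag_bin, n in part.items():
            total[mag_bin] += n
    return dict(total)

if __name__ == "__main__":
    cells = [
        [{"r_mag": 17.2}, {"r_mag": 17.9}],   # rows of one (lon, lat, t) cell
        [{"r_mag": 17.4}, {"r_mag": 19.1}],   # rows of another cell
    ]
    with Pool(2) as pool:                      # cells processed in parallel
        print(reduce_counts(pool.map(mapper, cells)))
```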
Truck and bus driver task analysis
DOT National Transportation Integrated Search
1973-05-01
This report describes the tasks involved in driving large trucks and buses. The task descriptions are an extension of the task description developed by Human Resources Research Organization (HumRRO) for passenger car drivers and deal with those uniqu...
The IUGS/IAGC Task Group on Global Geochemical Baselines
Smith, David B.; Wang, Xueqiu; Reeder, Shaun; Demetriades, Alecos
2012-01-01
The Task Group on Global Geochemical Baselines, operating under the auspices of both the International Union of Geological Sciences (IUGS) and the International Association of Geochemistry (IAGC), has the long-term goal of establishing a global geochemical database to document the concentration and distribution of chemical elements in the Earth’s surface or near-surface environment. The database and accompanying element distribution maps represent a geochemical baseline against which future human-induced or natural changes to the chemistry of the land surface may be recognized and quantified. In order to accomplish this long-term goal, the activities of the Task Group include: (1) developing partnerships with countries conducting broad-scale geochemical mapping studies; (2) providing consultation and training in the form of workshops and short courses; (3) organizing periodic international symposia to foster communication among the geochemical mapping community; (4) developing criteria for certifying those projects whose data are acceptable in a global geochemical database; (5) acting as a repository for data collected by those projects meeting the criteria for standardization; (6) preparing complete metadata for the certified projects; and (7) preparing, ultimately, a global geochemical database. This paper summarizes the history and accomplishments of the Task Group since its first predecessor project was established in 1988.
DePriest, Adam D; Fiandalo, Michael V; Schlanger, Simon; Heemers, Frederike; Mohler, James L; Liu, Song; Heemers, Hannelore V
2016-01-01
Androgen receptor (AR) is a ligand-activated transcription factor that is the main target for treatment of non-organ-confined prostate cancer (CaP). Failure of life-prolonging AR-targeting androgen deprivation therapy is due to flexibility in steroidogenic pathways that control intracrine androgen levels and variability in the AR transcriptional output. Androgen biosynthesis enzymes, androgen transporters and AR-associated coregulators are attractive novel CaP treatment targets. These proteins, however, are characterized by multiple transcript variants and isoforms, are subject to genomic alterations, and are differentially expressed among CaPs. Determining their therapeutic potential requires evaluation of extensive, diverse datasets that are dispersed over multiple databases, websites and literature reports. Mining and integrating these datasets are cumbersome, time-consuming tasks and provide only snapshots of relevant information. To overcome this impediment to effective, efficient study of AR and potential drug targets, we developed the Regulators of Androgen Action Resource (RAAR), a non-redundant, curated and user-friendly searchable web interface. RAAR centralizes information on gene function, clinical relevance, and resources for 55 genes that encode proteins involved in biosynthesis, metabolism and transport of androgens and for 274 AR-associated coregulator genes. Data in RAAR are organized in two levels: (i) Information pertaining to production of androgens is contained in a 'pre-receptor level' database, and coregulator gene information is provided in a 'post-receptor level' database, and (ii) an 'other resources' database contains links to additional databases that are complementary to and useful to pursue further the information provided in RAAR. For each of its 329 entries, RAAR provides access to more than 20 well-curated publicly available databases, and thus, access to thousands of data points. Hyperlinks provide direct access to gene-specific entries in the respective database(s). RAAR is a novel, freely available resource that provides fast, reliable and easy access to integrated information that is needed to develop alternative CaP therapies. Database URL: http://www.lerner.ccf.org/cancerbio/heemers/RAAR/search/. © The Author(s) 2016. Published by Oxford University Press.
Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie
2017-01-01
Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front-end forms. The Chado Data Display module contains tools to visualize each data type and the metadata, which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are tools to visualize map and species summaries. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
MET network in PubMed: a text-mined network visualization and curation system.
Dai, Hong-Jie; Su, Chu-Hsien; Lai, Po-Ting; Huang, Ming-Siang; Jonnagaddala, Jitendra; Rose Jue, Toni; Rao, Shruti; Chou, Hui-Jou; Milacic, Marija; Singh, Onkar; Syed-Abdul, Shabbir; Hsu, Wen-Lian
2016-01-01
Metastasis is the dissemination of a cancer/tumor from one organ to another, and it is the most dangerous stage during cancer progression, causing more than 90% of cancer deaths. Improving the understanding of the complicated cellular mechanisms underlying metastasis requires investigations of the signaling pathways. To this end, we developed a METastasis (MET) network visualization and curation tool to assist metastasis researchers retrieve network information of interest while browsing through the large volume of studies in PubMed. MET can recognize relations among genes, cancers, tissues and organs of metastasis mentioned in the literature through text-mining techniques, and then produce a visualization of all mined relations in a metastasis network. To facilitate the curation process, MET is developed as a browser extension that allows curators to review and edit concepts and relations related to metastasis directly in PubMed. PubMed users can also view the metastatic networks integrated from the large collection of research papers directly through MET. For the BioCreative 2015 interactive track (IAT), a curation task was proposed to curate metastatic networks among PubMed abstracts. Six curators participated in the proposed task and a post-IAT task, curating 963 unique metastatic relations from 174 PubMed abstracts using MET. Database URL: http://btm.tmu.edu.tw/metastasisway. © The Author(s) 2016. Published by Oxford University Press.
Mayday - integrative analytics for expression data
2010-01-01
Background DNA Microarrays have become the standard method for large-scale analyses of gene expression and epigenomics. The increasing complexity and inherent noisiness of the generated data makes visual data exploration ever more important. Fast deployment of new methods as well as a combination of predefined, easy to apply methods with programmer's access to the data are important requirements for any analysis framework. Mayday is an open source platform with emphasis on visual data exploration and analysis. Many built-in methods for clustering, machine learning and classification are provided for dissecting complex datasets. Plugins can easily be written to extend Mayday's functionality in a large number of ways. As a Java program, Mayday is platform-independent and can be used as Java WebStart application without any installation. Mayday can import data from several file formats, database connectivity is included for efficient data organization. Numerous interactive visualization tools, including box plots, profile plots, principal component plots and a heatmap are available, can be enhanced with metadata and exported as publication quality vector files. Results We have rewritten large parts of Mayday's core to make it more efficient and ready for future developments. Among the large number of new plugins are an automated processing framework, dynamic filtering, new and efficient clustering methods, a machine learning module and database connectivity. Extensive manual data analysis can be done using an inbuilt R terminal and an integrated SQL querying interface. Our visualization framework has become more powerful, new plot types have been added and existing plots improved. Conclusions We present a major extension of Mayday, a very versatile open-source framework for efficient microarray data analysis designed for biologists and bioinformaticians. Most everyday tasks are already covered. The large number of available plugins as well as the extension possibilities using compiled plugins and ad-hoc scripting allow for the rapid adaption of Mayday also to very specialized data exploration. Mayday is available at http://microarray-analysis.org. PMID:20214778
2013-12-01
review Task 22 Preparation of Project Final Report 6 A request for a 12-month no-cost extension for this study was approved on 7 November 2012...extending study activities through December 2013. A modified statement of work, approved as part of the no-cost extension, is presented in Table 6. Table...6: MODIFIED SOW for remaining PROJECT Tasks and STUDY TIMETABLE (Nov 2012) A request for an additional 12-month no-cost extension for this study
NASA Technical Reports Server (NTRS)
2003-01-01
The same software controlling autonomous and crew-assisted operations for the International Space Station (ISS) is enabling commercial enterprises to integrate and automate manual operations, also known as decision logic, in real time across complex and disparate networked applications, databases, servers, and other devices, all with quantifiable business benefits. Auspice Corporation, of Framingham, Massachusetts, developed the Auspice TLX (The Logical Extension) software platform to effectively mimic the human decision-making process. Auspice TLX automates operations across extended enterprise systems, where any given infrastructure can include thousands of computers, servers, switches, and modems that are connected, and therefore, dependent upon each other. The concept behind the Auspice software spawned from a computer program originally developed in 1981 by Cambridge, Massachusetts-based Draper Laboratory for simulating tasks performed by astronauts aboard the Space Shuttle. At the time, the Space Shuttle Program was dependent upon paper-based procedures for its manned space missions, which typically averaged 2 weeks in duration. As the Shuttle Program progressed, NASA began increasing the length of manned missions in preparation for a more permanent space habitat. Acknowledging the need to relinquish paper-based procedures in favor of an electronic processing format to properly monitor and manage the complexities of these longer missions, NASA realized that Draper's task simulation software could be applied to its vision of year-round space occupancy. In 1992, Draper was awarded a NASA contract to build User Interface Language software to enable autonomous operations of a multitude of functions on Space Station Freedom (the station was redesigned in 1993 and converted into the international venture known today as the ISS)
Potential for protein surface shape analysis using spherical harmonics and 3D Zernike descriptors.
Venkatraman, Vishwesh; Sael, Lee; Kihara, Daisuke
2009-01-01
With structure databases expanding at a rapid rate, the task at hand is to provide reliable clues to their molecular function and to be able to do so on a large scale. This, however, requires suitable encodings of the molecular structure which are amenable to fast screening. To this end, moment-based representations provide a compact and nonredundant description of molecular shape and other associated properties. In this article, we present an overview of some commonly used representations with specific focus on two schemes namely spherical harmonics and their extension, the 3D Zernike descriptors. Key features and differences of the two are reviewed and selected applications are highlighted. We further discuss recent advances covering aspects of shape and property-based comparison at both global and local levels and demonstrate their applicability through some of our studies.
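The per-degree expansion power behind such rotation-invariant descriptors can be sketched briefly. Below is a minimal Python illustration, assuming SciPy, of forming invariants f_l = sum_m |c_lm|^2 from the spherical harmonic coefficients of a star-shaped surface r(theta, phi); the surface function and grid resolution are illustrative stand-ins for a real molecular surface, not any published implementation.

```python
import numpy as np
from scipy.special import sph_harm  # Y_l^m(theta, phi); theta = azimuth, phi = polar in SciPy

def sh_invariants(r_func, l_max=8, n_theta=64, n_phi=32):
    """Rotation-invariant descriptors f_l = sum_m |c_lm|^2 of a star-shaped
    surface r(theta, phi), estimated by quadrature on a sphere grid.
    `r_func` is a hypothetical callable standing in for a molecular surface."""
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)   # azimuth
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi               # polar
    T, P = np.meshgrid(theta, phi, indexing="ij")
    R = r_func(T, P)
    # Quadrature weights: dA = sin(phi) dtheta dphi
    w = np.sin(P) * (2 * np.pi / n_theta) * (np.pi / n_phi)
    invariants = []
    for l in range(l_max + 1):
        power = 0.0
        for m in range(-l, l + 1):
            c_lm = np.sum(R * np.conj(sph_harm(m, l, T, P)) * w)
            power += abs(c_lm) ** 2
        invariants.append(power)
    return np.array(invariants)  # compare shapes via e.g. Euclidean distance

# Example: a sphere perturbed by a polar bulge (purely illustrative)
desc = sh_invariants(lambda t, p: 1.0 + 0.2 * np.cos(p) ** 2)
```

Because each f_l collapses the 2l+1 coefficients of one degree into a single number, the resulting vector is unchanged by rotation, which is what makes such descriptors suitable for fast database screening.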
Wesnes, Keith A; McNamara, Cynthia; Annas, Peter
2016-03-01
The Cognitive Drug Research (CDR) System is a set of nine computerized tests of attention, information processing, working memory, executive control and episodic memory which was designed for repeated assessments in research projects. The CDR System has been used extensively in clinical trials involving healthy volunteers for over 30 years, and a database of 7751 individuals aged 18-87 years has been accumulated for pre-treatment data from these studies. This database has been analysed, and the relationships between the various scores with factors, including age, gender and years of full-time education, have been identified. These analyses are reported in this paper, along with tables of norms for the various key measures from the core tasks stratified by age and gender. These norms can be used for a variety of purposes, including the determination of eligibility for participation in clinical trials and the everyday relevance of research findings from the system. In addition, these norms provide valuable information on gender differences and the effects of normal ageing on major aspects of human cognitive function. © The Author(s) 2016.
The STEP database through the end-users eyes--USABILITY STUDY.
Salunke, Smita; Tuleu, Catherine
2015-08-15
The user-designed database of Safety and Toxicity of Excipients for Paediatrics ("STEP") was created to address the drug development community's shared need to access relevant excipient information effortlessly. Usability testing was performed to validate whether the database satisfies the needs of its end-users. An evaluation framework was developed to assess usability. Participants performed scenario-based tasks and provided feedback and post-session usability ratings. Failure Mode Effect Analysis (FMEA) was performed to prioritize the problems and improvements to the STEP database design and functionalities. The study revealed several design vulnerabilities. Tasks such as limiting results, running complex queries, locating data and registering to access the database were challenging. The three critical attributes identified as affecting the usability of the STEP database were (1) content and presentation, (2) navigation and search features, and (3) potential end-users. The evaluation framework proved to be an effective method for evaluating database effectiveness and user satisfaction. This study provides strong initial support for the usability of the STEP database. Recommendations will be incorporated into the refinement of the database to improve its usability and increase user participation towards the advancement of the database. Copyright © 2015 Elsevier B.V. All rights reserved.
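The abstract does not spell out the FMEA computation; a common convention ranks failure modes by Risk Priority Number, the product of severity, occurrence and detection ratings. A minimal sketch with hypothetical issues and 1-10 ratings (not the study's actual data) might look like this:

```python
# Minimal FMEA prioritization sketch: rank usability problems by
# Risk Priority Number (RPN = severity x occurrence x detection).
# The problems and 1-10 ratings below are hypothetical, not from the study.
problems = [
    {"issue": "limiting search results", "S": 7, "O": 8, "D": 4},
    {"issue": "running complex queries", "S": 8, "O": 6, "D": 5},
    {"issue": "registering for database access", "S": 6, "O": 9, "D": 2},
]
for p in problems:
    p["RPN"] = p["S"] * p["O"] * p["D"]
for p in sorted(problems, key=lambda p: p["RPN"], reverse=True):
    print(f'{p["issue"]}: RPN={p["RPN"]}')
```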
Human-Robot Cooperation with Commands Embedded in Actions
NASA Astrophysics Data System (ADS)
Kobayashi, Kazuki; Yamada, Seiji
In this paper, we first propose a novel interaction model, CEA (Commands Embedded in Actions). It explains how some existing systems reduce their users' workload. We then extend the CEA to build the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ an ACS (Action Coding System), which describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits the CEA's strong point: a user can send a command to a robot through his/her natural actions for the task. The instance of the ECEA derived using the ACS is a temporal extension in which the user maintains the final state of his/her previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks the object up. In addition, we measure users' cognitive load under the ECEA and under a traditional method, DCM (Direct Commanding Method), in the sweeping task, and compare them. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. The images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed with the aim of efficiently answering similarity queries by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. At present, the implemented system works with features based on the color distribution of the images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features, based on texture and on the shape of the main objects in the image.
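The central idea, a distance function living inside the relational engine so that similarity queries run as ordinary SQL, can be sketched as follows. This is a simplified stand-in using Python's sqlite3 as the relational manager and JSON-encoded normalized histograms; the table layout and function names are hypothetical, not cbPACS's actual schema.

```python
import json
import sqlite3
import numpy as np

def l1_hist_distance(a_json, b_json):
    """L1 distance between two normalized histograms stored as JSON text."""
    a, b = np.array(json.loads(a_json)), np.array(json.loads(b_json))
    return float(np.abs(a - b).sum())

con = sqlite3.connect(":memory:")
con.create_function("HIST_DIST", 2, l1_hist_distance)  # UDF inside the SQL engine
con.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, hist TEXT)")

def normalized_histogram(pixels, bins=16):
    h, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    return (h / h.sum()).tolist()

rng = np.random.default_rng(0)
for _ in range(100):  # toy stand-in for archived image features
    con.execute("INSERT INTO images (hist) VALUES (?)",
                (json.dumps(normalized_histogram(rng.integers(0, 256, 4096))),))

# k-nearest-neighbor similarity query expressed as plain SQL
query = json.dumps(normalized_histogram(rng.integers(0, 256, 4096)))
rows = con.execute(
    "SELECT id, HIST_DIST(hist, ?) AS d FROM images ORDER BY d LIMIT 5",
    (query,)).fetchall()
print(rows)
```

Registering the distance function in the engine keeps query planning and result ordering on the database side, which is the property that lets any relational server be used once the extended commands are translated.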
Soetens, Oriane; De Bel, Annelies; Echahidi, Fedoua; Vancutsem, Ellen; Vandoorslaer, Kristof; Piérard, Denis
2012-01-01
The performance of matrix-assisted laser desorption–ionization time of flight mass spectrometry (MALDI-TOF MS) for species identification of Prevotella was evaluated and compared with 16S rRNA gene sequencing. Using a Bruker database, 62.7% of the 102 clinical isolates were identified to the species level and 73.5% to the genus level. Extension of the commercial database improved these figures to, respectively, 83.3% and 89.2%. MALDI-TOF MS identification of Prevotella is reliable but needs a more extensive database. PMID:22301022
ERIC Educational Resources Information Center
Cavaleri, Piero
2008-01-01
Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks. Findings:…
NASA Astrophysics Data System (ADS)
1991-03-01
This paper documents a very low frequency/low frequency (VLF/LF) Data Analysis task by the Naval Ocean Systems Center to improve the modeling of the nighttime ionosphere when making propagation predictions with the Long Wave Propagation Capability (LWPC) computer program. The task utilizes an extensive database of VLF measured data recorded during the 1985 to 1986 trips of the merchant ship GTS Callaghan in the North Atlantic area. By constraining the Callaghan data to those periods when both the ship and the distant transmitters were in time zones consistent with all-nighttime propagation, and by eliminating data from trips outside the principal area of interest, an aggregated set of recorded data was assembled for each frequency of concern. Four frequencies were examined: 16.0, 19.0, 21.4 and 24.0 kHz. Recorded data sets were graphed as signal vs. distance plots, computing distance from the transmitter for each ship's location. The LWPC program was then utilized to compute signal vs. distance along a typical path in the same ocean area, and the predicted and recorded data were compared. By changing the LWPC parameters different propagation predictions were compared with the recorded data until a best fit was obtained.
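The aggregation step described above, computing the distance from the transmitter for each ship's location and ordering signal against distance, can be sketched with a great-circle calculation; the transmitter position and ship fixes below are hypothetical placeholders, not Callaghan data.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Haversine great-circle distance in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

# Hypothetical transmitter location and ship fixes (lat, lon, signal in dB)
tx_lat, tx_lon = 44.6, -67.3
fixes = np.array([[48.0, -40.0, 52.1], [50.2, -30.5, 48.7], [52.1, -20.0, 45.3]])
d = great_circle_km(tx_lat, tx_lon, fixes[:, 0], fixes[:, 1])
# Sort into a signal-vs-distance curve for comparison with LWPC predictions
order = np.argsort(d)
for dist, sig in zip(d[order], fixes[order, 2]):
    print(f"{dist:8.1f} km  {sig:5.1f} dB")
```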
Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A
2016-05-01
The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart--the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
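Because the criterion maximizes a scatter difference rather than a Rayleigh quotient, no inversion of the within-class scatter matrix is needed, which is exactly how the small-sample-size singularity is sidestepped. A minimal NumPy sketch of scatter-difference feature extraction follows (the plain criterion, not the full MMSD range/null-space construction; the balance parameter c and the data are illustrative):

```python
import numpy as np

def scatter_difference_projection(X, y, c=1.0, n_components=2):
    """Feature extraction via the scatter-difference criterion: take the
    top eigenvectors of (S_b - c * S_w). A simplified sketch of the MSD
    idea; no matrix inverse is required, so S_w may be singular."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for k in classes:
        Xk = X[y == k]
        mk = Xk.mean(axis=0)
        diff = (mk - mean_all)[:, None]
        S_b += Xk.shape[0] * diff @ diff.T        # between-class scatter
        centered = Xk - mk
        S_w += centered.T @ centered              # within-class scatter
    vals, vecs = np.linalg.eigh(S_b - c * S_w)    # symmetric eigenproblem
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return X @ W, W

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(2, 1, (50, 10))])
y = np.repeat([0, 1], 50)
Z, W = scatter_difference_projection(X, y)
```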
Conjunctive patches subspace learning with side information for collaborative image retrieval.
Zhang, Lining; Wang, Lipo; Lin, Weisi
2012-08-01
Content-Based Image Retrieval (CBIR) has attracted substantial attention during the past few years for its potential practical applications to image management. A variety of Relevance Feedback (RF) schemes have been designed to bridge the semantic gap between the low-level visual features and the high-level semantic concepts for an image retrieval task. Various Collaborative Image Retrieval (CIR) schemes aim to utilize the user historical feedback log data with similar and dissimilar pairwise constraints to improve the performance of a CBIR system. However, existing subspace learning approaches with explicit label information cannot be applied for a CIR task, although the subspace learning techniques play a key role in various computer vision tasks, e.g., face recognition and image classification. In this paper, we propose a novel subspace learning framework, i.e., Conjunctive Patches Subspace Learning (CPSL) with side information, for learning an effective semantic subspace by exploiting the user historical feedback log data for a CIR task. The CPSL can effectively integrate the discriminative information of labeled log images, the geometrical information of labeled log images and the weakly similar information of unlabeled images together to learn a reliable subspace. We formally formulate this problem into a constrained optimization problem and then present a new subspace learning technique to exploit the user historical feedback log data. Extensive experiments on both synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of a CBIR system by exploiting the user historical feedback log data.
Enge, Sören; Behnke, Alexander; Fleischhauer, Monika; Küttler, Lena; Kliegel, Matthias; Strobel, Alexander
2014-07-01
Recent studies reported that training of working memory may improve performance in the trained function and beyond. Other executive functions, however, have been rarely or not yet systematically examined. The aim of this study was to test the effectiveness of inhibitory control (IC) training to produce true training-related function improvements in a sample of 122 healthy adults using a randomized, double-blind pretest/posttest/follow-up design. Two groups performed either adaptive (training group) or nonadaptive (active control) versions of go/no-go and stop-signal tasks for 3 weeks. Training gains as well as near-transfer to an untrained Stroop task and far-transfer to psychometric fluid intelligence were explored. Although the adaptive group could substantially improve overall IC task performance after training, no differences to the active control group occurred, neither at posttest nor at follow-up testing. A large decrease in response latency from pre- to posttest (and from pretest to 4 months' follow-up testing) was found when the training group was compared to the passive control group, which, however, does not sufficiently control for possible confounds. Thus, no conclusive evidence was found that this performance increase mirrors a true increase in IC function. The fact that training improvement was mainly related to response latency may indicate that individuals were more focused on performance gains in the prepotent go trials but less on the stop trials to meet the requirements of the tasks as well as possible. The challenges for response inhibition training studies are extensively discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Chapter 4 - The LANDFIRE Prototype Project reference database
John F. Caratti
2006-01-01
This chapter describes the data compilation process for the Landscape Fire and Resource Management Planning Tools Prototype Project (LANDFIRE Prototype Project) reference database (LFRDB) and explains the reference data applications for LANDFIRE Prototype maps and models. The reference database formed the foundation for all LANDFIRE tasks. All products generated by the...
Research on Customer Value Based on Extension Data Mining
NASA Astrophysics Data System (ADS)
Chun-Yan, Yang; Wei-Hua, Li
Extenics is a new discipline for dealing with contradiction problems via a formalized model. Extension data mining (EDM) is a product combining Extenics with data mining. It explores the acquisition of knowledge based on extension transformations, called extension knowledge (EK), taking advantage of extension methods and data mining technology. EK includes extensible classification knowledge, conductive knowledge and so on. Extension data mining technology (EDMT) is a new data mining technology that mines EK in databases or data warehouses. Customer value (CV) weighs the importance of a customer relationship for an enterprise, treating the enterprise as the subject of value assessment and customers as its objects. CV varies continually. Mining the changing knowledge of CV in databases using EDMT, including quantitative change knowledge and qualitative change knowledge, can provide a foundation for an enterprise deciding its customer relationship management (CRM) strategy. It can also provide a new idea for studying CV.
ERIC Educational Resources Information Center
Ahmadian, Mohammad Javad
2011-01-01
To date, research results suggest that task repetition positively affects oral task performance. However, researchers have not yet shown whether the benefits of repeating the same task extend to performance of a new task. This article first provides an overview of the currently available research findings on task repetition and then presents the…
BNDB - the Biochemical Network Database.
Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter
2007-10-02
Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.
Riviere, Guillaume; Klopp, Christophe; Ibouniyamine, Nabihoudine; Huvet, Arnaud; Boudry, Pierre; Favrel, Pascal
2015-12-02
The Pacific oyster, Crassostrea gigas, is one of the most important aquaculture shellfish resources worldwide. Important efforts have been undertaken towards a better knowledge of its genome and transcriptome, which now make C. gigas a model organism among lophotrochozoans, the under-described sister clade of ecdysozoans within protostomes. These massive sequencing efforts offer the opportunity to assemble gene expression data and make such a resource accessible and exploitable for the scientific community. We therefore undertook this assembly into an up-to-date publicly available transcriptome database: the GigaTON (Gigas TranscriptOme pipeliNe) database. We assembled 2204 million sequences obtained from 114 publicly available RNA-seq libraries covering all embryo-larval development stages, adult organs, and different environmental stressors including heavy metals, temperature, salinity and exposure to air, mostly generated as part of the Crassostrea gigas genome project. These data were analyzed in silico, resulting in 56,621 newly assembled contigs that were deposited into a publicly available database, the GigaTON database. This database also provides powerful and user-friendly request tools to browse and retrieve information about annotation, expression level, UTRs, splicing and polymorphism, and the gene ontology associated with all the contigs, within each library and across all libraries. The GigaTON database provides a convenient, potent and versatile interface to browse, retrieve, confront and compare massive transcriptomic information across an extensive range of conditions, tissues and developmental stages in Crassostrea gigas. To our knowledge, the GigaTON database constitutes the most extensive transcriptomic database to date in marine invertebrates, and thereby a new reference transcriptome for the oyster, a highly valuable resource for physiologists and evolutionary biologists.
78 FR 15110 - Aviation Rulemaking Advisory Committee; Engine Bird Ingestion Requirements-New Task
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-08
...: During the bird-ingestion rulemaking database (BRDB) working group's reevaluation of the current engine... engine core ingestion. If the BRDB working group's reevaluation determines that such requirements are... Task ARAC accepted the task and will establish the Engine Harmonization Working Group (EHWG), under the...
Boiler materials for ultra supercritical coal power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purgert, Robert; Shingledecker, John; Pschirer, James
2015-12-29
The U.S. Department of Energy (DOE) and the Ohio Coal Development Office (OCDO) have undertaken a project aimed at identifying, evaluating, and qualifying the materials needed for the construction of the critical components of coal-fired boilers capable of operating at much higher efficiencies than the current generation of supercritical plants. This increased efficiency is expected to be achieved principally through the use of advanced ultrasupercritical (A-USC) steam conditions up to 760°C (1400°F) and 35 MPa (5000 psi). A limiting factor to achieving these higher temperatures and pressures for future A-USC plants is the materials of construction. The goal of this project is to assess and develop materials technology to build and operate an A-USC boiler capable of delivering steam with conditions up to 760°C (1400°F)/35 MPa (5000 psi). The project has successfully met this goal through a focused long-term public-private consortium partnership. The project was based on an R&D plan developed by the Electric Power Research Institute (EPRI) and an industry consortium that supplemented the recommendations of several DOE workshops on the subject of advanced materials. In view of the variety of skills and expertise required for the successful completion of the proposed work, a consortium was formed, led by the Energy Industries of Ohio (EIO), with cost-sharing participation of all the major domestic boiler manufacturers, ALSTOM Power (Alstom), Babcock and Wilcox Power Generation Group, Inc. (B&W), Foster Wheeler (FW), and Riley Power, Inc. (Riley), technical management by EPRI, and research conducted by Oak Ridge National Laboratory (ORNL). The project has clearly identified and tested materials that can withstand 760°C (1400°F) steam conditions and can also make a 700°C (1300°F) plant more economically attractive. In this project, the maximum temperature capabilities of these and other available high-temperature alloys have been assessed to provide a basis for materials selection and application under a range of conditions prevailing in the boiler. A major effort involving eight tasks was completed in Phase 1. In a subsequent Phase 2 extension, the earlier defined tasks were extended to finish and enhance the Phase 1 activities. This extension included efforts in improved weld/weldment performance, development of longer-term material property databases, additional field (in-plant) corrosion testing, improved understanding of long-term oxidation kinetics and exfoliation, cyclic operation, and fabrication methods for waterwalls. In addition, preliminary work was undertaken to model an oxyfuel boiler to define the local environments expected to occur and to study the corrosion behavior of alloys under these conditions. This final technical report provides a comprehensive summary of all the work undertaken by the consortium and the research findings from all eight technical tasks, including A-USC boiler design and economics (Task 1), long-term materials properties (Task 2), steam-side oxidation (Task 3), fireside corrosion (Task 4), welding (Task 5), fabricability (Task 6), coatings (Task 7), and design data and rules (Task 8).
Extensions to the Speech Disorders Classification System (SDCS)
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
This report describes three extensions to a classification system for pediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three subtypes of motor speech disorders. Part II describes the Madison Speech Assessment Protocol (MSAP), an approximately two-hour battery of 25 measures that includes 15 speech tests and tasks. Part III describes the Competence, Precision, and Stability Analytics (CPSA) framework, a current set of approximately 90 perceptual- and acoustic-based indices of speech, prosody, and voice used to quantify and classify subtypes of Speech Sound Disorders (SSD). A companion paper, Shriberg, Fourakis, et al. (2010) provides reliability estimates for the perceptual and acoustic data reduction methods used in the SDCS. The agreement estimates in the companion paper support the reliability of SDCS methods and illustrate the complementary roles of perceptual and acoustic methods in diagnostic analyses of SSD of unknown origin. Examples of research using the extensions to the SDCS described in the present report include diagnostic findings for a sample of youth with motor speech disorders associated with galactosemia (Shriberg, Potter, & Strand, 2010) and a test of the hypothesis of apraxia of speech in a group of children with autism spectrum disorders (Shriberg, Paul, Black, & van Santen, 2010). All SDCS methods and reference databases running in the PEPPER (Programs to Examine Phonetic and Phonologic Evaluation Records; [Shriberg, Allen, McSweeny, & Wilson, 2001]) environment will be disseminated without cost when complete. PMID:20831378
From conscious thought to automatic action: A simulation account of action planning.
Martiny-Huenger, Torsten; Martiny, Sarah E; Parks-Stamm, Elizabeth J; Pfeiffer, Elisa; Gollwitzer, Peter M
2017-10-01
We provide a theoretical framework and empirical evidence for how verbally planning an action creates direct perception-action links and behavioral automaticity. We argue that planning actions in an if (situation)-then (action) format induces sensorimotor simulations (i.e., activity patterns reenacting the event in the sensory and motor brain areas) of the anticipated situation and the intended action. Due to their temporal overlap, these activity patterns become linked. Whenever the previously simulated situation is encountered, the previously simulated action is partially reactivated through spreading activation and thus more likely to be executed. In 4 experiments (N = 363), we investigated the relation between specific if-then action plans worded to activate simulations of elbow flexion versus extension movements and actual elbow flexion versus extension movements in a subsequent, ostensibly unrelated categorization task. As expected, linking a critical stimulus to intended actions that implied elbow flexion movements (e.g., grabbing it for consumption) subsequently facilitated elbow flexion movements upon encountering the critical stimulus. However, linking a critical stimulus to actions that implied elbow extension movements (e.g., pointing at it) subsequently facilitated elbow extension movements upon encountering the critical stimulus. Thus, minor differences (i.e., exchanging the words "point at" with "grab") in verbally formulated action plans (i.e., conscious thought) had systematic consequences on subsequent actions. The question of how conscious thought can induce stimulus-triggered action is illuminated by the provided theoretical framework and the respective empirical evidence, facilitating the understanding of behavioral automaticity and human agency. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Prototyping Visual Database Interface by Object-Oriented Language
1988-06-01
approach is to use object-oriented programming. Object-oriented languages are characterized by three criteria [Ref. 4:p. 1.2.1]: - encapsulation of...made it a sub-class of our DMWindow.Cls, which is discussed later in this chapter. This extension to the application had to be integrated with our... abnormal behaviors similar to Korth's discussion of pitfalls in relational database design. Even extensions like GEM [Ref. 8] that are powerful and
Database Cancellation: The "Hows" and "Whys"
ERIC Educational Resources Information Center
Shapiro, Steven
2012-01-01
Database cancellation is one of the most difficult tasks performed by a librarian. This may seem counter-intuitive but, psychologically, it is certainly true. When a librarian or a team of librarians has invested a great deal of time doing research, talking to potential users, and conducting trials before deciding to subscribe to a database, they…
An Experimental Investigation of Complexity in Database Query Formulation Tasks
ERIC Educational Resources Information Center
Casterella, Gretchen Irwin; Vijayasarathy, Leo
2013-01-01
Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…
Active browsing using similarity pyramids
NASA Astrophysics Data System (ADS)
Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.
1998-12-01
In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
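A rough sketch of the pyramid idea follows: hierarchical clustering organizes the feature vectors into coarse-to-fine levels, and pruning keeps only clusters containing members of the relevance set. The agglomerative clustering and the simple pruning rule here are stand-ins for the paper's actual construction and statistical pruning, assuming SciPy and toy features.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def similarity_pyramid(features, levels=(2, 8, 32)):
    """Organize feature vectors into coarse-to-fine cluster levels for
    browsing; `levels` gives the cluster count at each pyramid level."""
    Z = linkage(features, method="ward")
    return {k: fcluster(Z, t=k, criterion="maxclust") for k in levels}

def prune(pyramid, level, relevance_ids):
    """Keep only images whose cluster at `level` contains a relevance-set
    member, a simplified stand-in for the paper's statistical pruning."""
    labels = pyramid[level]
    keep = {labels[i] for i in relevance_ids}
    return np.where(np.isin(labels, list(keep)))[0]

rng = np.random.default_rng(2)
feats = rng.normal(size=(500, 16))          # toy image features
pyr = similarity_pyramid(feats)
survivors = prune(pyr, 32, relevance_ids=[3, 41, 77])
```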
76 FR 77504 - Notice of Submission for OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... of Review: Extension. Title of Collection: Charter Schools Program Grant Award Database. OMB Control... collect data necessary for the Charter Schools Program (CSP) Grant Award Database. The CSP is authorized... award information from grantees (State agencies and some schools) for a database of current CSP-funded...
Automating the training development process for mission flight operations
NASA Technical Reports Server (NTRS)
Scott, Carol J.
1994-01-01
Traditional methods of developing training do not effectively support the changing needs of operational users in a multimission environment. The Automated Training Development System (ATDS) provides advantages over conventional methods in quality, quantity, turnaround, database maintenance, and focus on individualized instruction. The Operations System Training Group at JPL performed a six-month study to assess the potential of ATDS to automate curriculum development and to generate and maintain course materials. To begin the study, the group acquired readily available hardware and participated in a two-week training session to introduce the process. ATDS is a building activity that combines training's traditional information-gathering with a hierarchical method for interleaving the elements. The program can be described fairly simply. A comprehensive list of candidate tasks determines the content of the database; from that database, selected critical tasks dictate which competencies of skill and knowledge to include in course material for the target audience. The training developer adds pertinent planning information about each task to the database, and ATDS then generates a tailored set of instructional material based on the specific selection criteria.
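The selection flow just described, critical tasks drawn from the task database and mapped to the competencies that drive the course material, can be sketched as below; the field names and data are hypothetical, not ATDS's actual schema.

```python
# Sketch of the ATDS-style flow described above: filter a task database
# by criticality, collect the linked competencies, and emit a course
# outline. All field names and entries are hypothetical.
tasks = [
    {"task": "configure downlink", "critical": True,
     "competencies": ["telemetry basics", "link budgets"]},
    {"task": "archive playback", "critical": False,
     "competencies": ["storage ops"]},
    {"task": "anomaly triage", "critical": True,
     "competencies": ["fault trees", "telemetry basics"]},
]

def build_course(tasks, audience="operator"):
    critical = [t for t in tasks if t["critical"]]
    competencies = sorted({c for t in critical for c in t["competencies"]})
    return {"audience": audience,
            "modules": competencies,
            "covers_tasks": [t["task"] for t in critical]}

print(build_course(tasks))
```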
Burns, Gully APC; Cheng, Wei-Cheng
2006-01-01
Background Knowledge bases that summarize the published literature provide useful online references for specific areas of systems-level biology that are not otherwise supported by large-scale databases. In the field of neuroanatomy, groups of small focused teams have constructed medium-size knowledge bases to summarize the literature describing tract-tracing experiments in several species. Despite years of collation and curation, these databases only provide partial coverage of the available published literature. Given that the scientists reading these papers must all generate the interpretations that would normally be entered into such a system, we attempt here to provide general-purpose annotation tools to make it easy for members of the community to contribute to the task of data collation. Results In this paper, we describe an open-source, freely available knowledge management system called 'NeuroScholar' that allows straightforward structured markup of PDF files according to a well-designed schema to capture the essential details of this class of experiment. Although the example worked through in this paper is quite specific to neuroanatomical connectivity, the design is freely extensible and could conceivably be used to construct local knowledge bases for other experiment types. Knowledge representations of the experiment are also directly linked to the contributing textual fragments from the original research article. Through the use of this system, not only could members of the community contribute to the collation task, but input data can be gathered for automated approaches to permit knowledge acquisition through the use of Natural Language Processing (NLP). Conclusion We present a functional, working tool to permit users to populate knowledge bases for neuroanatomical connectivity data from the literature through the use of structured questionnaires. This system is open-source, fully functional and available for download from [1]. PMID:16895608
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montgomery, Logan; Kildea, John
We report on the development and clinical deployment of an in-house incident reporting and learning system that implements the taxonomy of the Canadian National System for Incident Reporting – Radiation Treatment (NSIR-RT). In producing our new system, we aimed to: analyze actual incidents, as well as potentially dangerous latent conditions; produce recommendations on the NSIR-RT taxonomy; incorporate features to divide reporting responsibility among clinical staff and expedite incident categorization within the NSIR-RT framework; and share anonymized incident data with the national database. Our multistep incident reporting workflow is focused around an initial report and a detailed follow-up investigation. An investigator, chosen at the time of reporting, is tasked with performing the investigation. The investigation feature is connected to our electronic medical records database to allow automatic field population and quick reference of patient and treatment information. Additional features include a robust visualization suite, as well as the ability to flag incidents for discussion at monthly Risk Management meetings and to task ameliorating actions to staff. Our system was deployed into clinical use in January 2016. Over the first three months of use, 45 valid incidents were reported, 31 of which were reported as actual incidents as opposed to near-misses or reportable circumstances. However, we suspect there is ambiguity within our centre in determining the appropriate event type, which may arise from the taxonomy itself. Preliminary trending analysis aided in revealing workflow issues pertaining to storage of treatment accessories and treatment planning delays. Extensive analysis will be undertaken as more data are accrued.
Methods for automatic detection of artifacts in microelectrode recordings.
Bakštein, Eduard; Sieger, Tomáš; Wild, Jiří; Novák, Daniel; Schneider, Jakub; Vostatek, Pavel; Urgošík, Dušan; Jech, Robert
2017-10-01
Extracellular microelectrode recording (MER) is a prominent technique for studies of extracellular single-unit neuronal activity. In order to achieve robust results in more complex analysis pipelines, it is necessary to have high-quality input data with a low amount of artifacts. We show that noise (mainly electromagnetic interference and motion artifacts) may affect more than 25% of the recording length in a clinical MER database. We present several methods for automatic detection of noise in MER signals, based on (i) unsupervised detection of stationary segments, (ii) large peaks in the power spectral density, and (iii) a classifier based on multiple time- and frequency-domain features. We evaluate the proposed methods on a manually annotated database of 5735 ten-second MER signals from 58 Parkinson's disease patients. The existing methods for artifact detection in single-channel MER that have been rigorously tested are based on unsupervised change-point detection. We show on an extensive real MER database that the presented techniques are better suited for the task of artifact identification and achieve much better results. The best-performing classifiers (bagging and decision tree) achieved artifact classification accuracy of up to 89% on an unseen test set and outperformed the unsupervised techniques by 5-10%. This was close to the level of agreement among raters using manual annotation (93.5%). We conclude that the proposed methods are suitable for automatic MER denoising and may help in the efficient elimination of undesirable signal artifacts. Copyright © 2017 Elsevier B.V. All rights reserved.
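Method (ii), flagging segments whose power spectral density contains an abnormally large narrowband peak, can be sketched with Welch's estimator; the sampling rate, band limits and peak-to-median threshold below are illustrative choices, not the paper's tuned values.

```python
import numpy as np
from scipy.signal import welch

def psd_artifact_flag(segment, fs=24000, peak_ratio=10.0):
    """Flag a MER segment as artifact if its power spectral density has a
    narrowband peak far above the median spectral level; `peak_ratio` is
    a hypothetical threshold."""
    f, pxx = welch(segment, fs=fs, nperseg=2048)
    band = (f > 20) & (f < fs / 2 - 100)     # ignore DC and band edges
    return pxx[band].max() > peak_ratio * np.median(pxx[band])

rng = np.random.default_rng(3)
fs = 24000
t = np.arange(fs) / fs
clean = rng.normal(size=fs)                  # broadband neural-like noise
hum = clean + 5 * np.sin(2 * np.pi * 50 * t) # mains interference artifact
print(psd_artifact_flag(clean, fs), psd_artifact_flag(hum, fs))
```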
Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos
2015-01-01
Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows performing large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; participants do not need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction
NASA Technical Reports Server (NTRS)
Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan
2004-01-01
This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data was in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and secondly, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.
Learning atoms for materials discovery.
Zhou, Quan; Tang, Peizhe; Liu, Shenxiu; Pan, Jinbo; Yan, Qimin; Zhang, Shou-Cheng
2018-06-26
Exciting advances have been made in artificial intelligence (AI) during recent decades. Among them, applications of machine learning (ML) and deep learning techniques have brought human-competitive performance to various tasks across fields, including image recognition, speech recognition, and natural language understanding. Even in Go, the ancient game of profound complexity, the AI player has already beaten human world champions convincingly, both with and without learning from humans. In this work, we show that our unsupervised machines (Atom2Vec) can learn the basic properties of atoms by themselves from an extensive database of known compounds and materials. These learned properties are represented as high-dimensional vectors, and clustering of atoms in vector space classifies them into meaningful groups consistent with human knowledge. We use the atom vectors as basic input units for neural networks and other ML models designed and trained to predict materials properties, which demonstrate significant accuracy. Copyright © 2018 the Author(s). Published by PNAS.
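A toy sketch of the unsupervised idea: build an atom-by-environment co-occurrence matrix from compound compositions and factorize it, so that atoms appearing in similar chemical environments receive similar vectors. This SVD-based stand-in is far simpler than the authors' actual Atom2Vec construction, and the compound list is illustrative only.

```python
import numpy as np

# Toy compound list; each compound is a set of element symbols.
compounds = [{"Na", "Cl"}, {"K", "Cl"}, {"Na", "Br"}, {"K", "Br"},
             {"Mg", "O"}, {"Ca", "O"}, {"Mg", "S"}, {"Ca", "S"}]
elements = sorted({e for c in compounds for e in c})
idx = {e: i for i, e in enumerate(elements)}

# Co-occurrence matrix: M[i, j] counts compounds in which element i
# appears together with element j (a crude "environment" description).
M = np.zeros((len(elements), len(elements)))
for comp in compounds:
    for a in comp:
        for b in comp:
            if a != b:
                M[idx[a], idx[b]] += 1.0

# Low-rank factorization: rows of U * S act as dense atom vectors; atoms
# with identical environments (Na/K and Mg/Ca here) get identical vectors.
U, S, _ = np.linalg.svd(M, full_matrices=False)
vectors = U[:, :3] * S[:3]
for e in elements:
    print(e, np.round(vectors[idx[e]], 2))
```

The clustering of Na with K and Mg with Ca in this toy output mirrors the abstract's claim that groups recovered from data align with chemical families.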
Hogan, Bernie; Melville, Joshua R.; Philips, Gregory Lee; Janulis, Patrick; Contractor, Noshir; Mustanski, Brian S.; Birkett, Michelle
2016-01-01
While much social network data exists online, key network metrics for high-risk populations must still be captured through self-report. This practice has suffered from numerous limitations in workflow and response burden. However, advances in technology, network drawing libraries and databases are making interactive network drawing increasingly feasible. We describe the translation of an analog-based technique for capturing personal networks into a digital framework termed netCanvas that addresses many existing shortcomings such as: 1) complex data entry; 2) extensive interviewer intervention and field setup; 3) difficulties in data reuse; and 4) a lack of dynamic visualizations. We test this implementation within a health behavior study of a high-risk and difficult-to-reach population. We provide a within-subjects comparison between paper and touchscreens. We assert that touchscreen-based social network capture is now a viable alternative for highly sensitive data and social network data entry tasks. PMID:28018995
Visual analysis and exploration of complex corporate shareholder networks
NASA Astrophysics Data System (ADS)
Tekušová, Tatiana; Kohlhammer, Jörn
2008-01-01
The analysis of large corporate shareholder network structures is an important task in corporate governance, financing, and financial investment domains. In a modern economy, large structures of cross-corporation, cross-border shareholder relationships exist, forming complex networks. These networks are often difficult to analyze with traditional approaches. An efficient visualization of the networks helps to reveal the interdependent shareholding formations and the controlling patterns. In this paper, we propose an effective visualization tool that supports the financial analyst in understanding complex shareholding networks. We develop an interactive visual analysis system by combining state-of-the-art visualization technologies with economic analysis methods. Our system is capable of revealing patterns in large corporate shareholder networks, allows the visual identification of the ultimate shareholders, and supports the visual analysis of integrated cash flow and control rights. We apply our system to an extensive real-world database of shareholder relationships, showing its usefulness for effective visual analysis.
Integrating computer programs for engineering analysis and design
NASA Technical Reports Server (NTRS)
Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.
1983-01-01
The design of a third-generation system for integrating computer programs for engineering and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used for a repository of design data that are communicated between analysis programs, for a dictionary that describes these design data, for a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.
Designing learning management system interoperability in semantic web
NASA Astrophysics Data System (ADS)
Anistyasari, Y.; Sarno, R.; Rochmawati, N.
2018-01-01
The extensive adoption of learning management systems (LMS) has put the focus on the interoperability requirement. Interoperability is the ability of different computer systems, applications or services to communicate, share and exchange data, information, and knowledge in a precise, effective and consistent way. Semantic web technology and the use of ontologies are able to provide the required computational semantics and interoperability for the automation of tasks in an LMS. The purpose of this study is to design learning management system interoperability in the semantic web, which has not yet been investigated deeply. Moodle is utilized to design the interoperability. Several database tables of Moodle are enhanced and some features are added. Semantic web interoperability is provided by exploiting an ontology over the content materials. The ontology is further utilized as a search tool to match users' queries with available courses. It is concluded that LMS interoperability in the semantic web is feasible.
An integrated hospital information system in Geneva.
Scherrer, J R; Baud, R H; Hochstrasser, D; Ratib, O
1990-01-01
Since the initial design phase from 1971 to 1973, the DIOGENE hospital information system at the University Hospital of Geneva has been treated as a whole and has retained its architectural unity, despite the need for modification and extension over the years. In addition to having a centralized patient database with the mechanisms for data protection and recovery of a transaction-oriented system, the DIOGENE system has a centralized pool of operators who provide support and training to the users; a separate network of remote printers that provides a telex service between the hospital buildings, offices, medical departments, and wards; and a three-component structure that avoids barriers between administrative and medical applications. In 1973, after a 2-year design period, the project was approved and funded. The DIOGENE system has led to more efficient sharing of costly resources, more rapid performance of administrative tasks, and more comprehensive collection of information about the institution and its patients.
NREL Opens Large Database of Inorganic Thin-Film Materials
April 3, 2018. An extensive experimental database of inorganic thin-film materials from the National Renewable Energy Laboratory (NREL) is now publicly available. The High Throughput Experimental Materials (HTEM) database... "All existing experimental databases either contain many entries or have all this
Use of Genomic Databases for Inquiry-Based Learning about Influenza
ERIC Educational Resources Information Center
Ledley, Fred; Ndung'u, Eric
2011-01-01
The genome projects of the past decades have created extensive databases of biological information with applications in both research and education. We describe an inquiry-based exercise that uses one such database, the National Center for Biotechnology Information Influenza Virus Resource, to advance learning about influenza. This database…
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology was developed for building interfaces from resident database management systems to the DAVID system, a heterogeneous distributed database management system under development at NASA. The feasibility of that methodology was demonstrated by constructing the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented, and the work performed and the results are summarized.
BC4GO: a full-text corpus for the BioCreative IV GO Task
USDA-ARS?s Scientific Manuscript database
Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database (MOD) groups. Due to its manual nature, this task is time-consuming and labor-intensive, and thus considered one of the bottlenecks in literature curation. There have been many previous attempts a...
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide application. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry-based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
Runtime support for data parallel tasks
NASA Technical Reports Server (NTRS)
Haines, Matthew; Hess, Bryan; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
We have recently introduced a set of Fortran language extensions that allow for integrated support of task and data parallelism, and provide for shared data abstractions (SDA's) as a method for communications and synchronization among these tasks. In this paper we discuss the design and implementation issues of the runtime system necessary to support these extensions, and discuss the underlying requirements for such a system. To test the feasibility of this approach, we implement a prototype of the runtime system and use this to support an abstract multidisciplinary optimization (MDO) problem for aircraft design. We give initial results and discuss future plans.
Front-End and Back-End Database Design and Development: Scholar's Academy Case Study
ERIC Educational Resources Information Center
Parks, Rachida F.; Hall, Chelsea A.
2016-01-01
This case study consists of a real database project for a charter school--Scholar's Academy--and provides background information on the school and its cafeteria processing system. Also included are functional requirements and some illustrative data. Students are tasked with the design and development of a database for the purpose of improving the…
Enhancing Geoscience Research Discovery Through the Semantic Web
NASA Astrophysics Data System (ADS)
Rowan, Linda R.; Gross, M. Benjamin; Mayernik, Matthew; Khan, Huda; Boler, Frances; Maull, Keith; Stott, Don; Williams, Steve; Corson-Rikert, Jon; Johns, Erica M.; Daniels, Michael; Krafft, Dean B.; Meertens, Charles
2016-04-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, a U.S. National Science Foundation EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to enhance connectivity across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Much of the VIVO ontology was built for the life sciences, so we have added some components of existing geoscience-based ontologies and a few terms from a local ontology that we created. The UNAVCO VIVO instance, connect.unavco.org, utilizes persistent identifiers whenever possible; for example using ORCIDs for people, publication DOIs, data DOIs and unique NSF grant numbers. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page shows, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can be queried using SPARQL, a query language for semantic data. EarthCollab is extending the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. About half of UNAVCO's membership is international and we hope to connect our data to institutions in other countries with a similar approach. Additional extensions, including enhanced geospatial capabilities, will be developed based on task-centered usability testing.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model however does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected doing an identification task by military personnel on a standard 12 target, 12 aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
Hartigan, Erin; Aucoin, Jennifer; Carlson, Rita; Klieber-Kusak, Melanie; Murray, Thomas; Shaw, Bernadette; Lawrence, Michael
Weighted gait increases internal knee extension moment impulses (KEMI) in the anterior cruciate ligament-reconstructed (ACLR) limb; however, limb differences persist. It was hypothesized that (1) KEMI during normal gait would influence KEMI during weighted gait and (2) peak knee extension (PKE) torque and time to reach PKE torque would predict KEMI during gait tasks. Descriptive laboratory study. Twenty-four women and 14 men completed 3 gait tasks (unweighted, vest, sled) and strength testing after discharge from rehabilitation and clearance to return to sports. KEMI were calculated during the first 25% of stance. PKE torque and time to reach PKE torque were obtained using a dynamometer. Data on the ACLR limb and symmetry indices (SIs) were analyzed for each sex. Women presented with asymmetrical PKE torques and KEMI across tasks. Three correlations were noted for KEMI: between the walk and vest, walk and sled, and vest and sled tasks. Slower time to PKE torque predicted limb asymmetries across tasks and KEMI in the ACLR limb during the sled task. Men presented with asymmetrical PKE torques and KEMI during the sled task. A correlation was noted for KEMI between the walk and vest tasks only. During the sled task, ACLR limb time to PKE torque predicted KEMI in the ACLR limb, and PKE torque SI predicted KEMI SI. Women use asymmetrical KEMI profiles during all gait tasks, and those with worse KEMI during walking have worse KEMI during weighted gait. Men have asymmetrical KEMI when sled towing, and these KEMI do not correlate with KEMI during walking or vest tasks. PKE torque deficits persist when attempting to return to sports. Only men use gains in PKE torque to improve KEMI profiles. Although quicker PKE torque generation will increase KEMI in women, normalization of KEMI profiles will not occur by increasing the rate of force development alone. Gait retraining is recommended to correct the asymmetrical KEMI profiles used across gait tasks in women.
How Many People Search the ERIC Database Each Day?
ERIC Educational Resources Information Center
Rudner, Lawrence
This study estimated the number of people searching the ERIC database each day. The Educational Resources Information Center (ERIC) is a national information system designed to provide ready access to an extensive body of education-related literature. Federal funds traditionally have paid for the development of the database, but not the…
ERIC Educational Resources Information Center
Pinquart, Martin; Pfeiffer, Jens P.
2015-01-01
Chronic illnesses and disabilities may impair the attainment of age-typical developmental tasks, such as forming relationships with peers and gaining autonomy. Based on a systematic search in electronic databases and cross-referencing, 447 quantitative empirical studies were included which compared the attainment of developmental tasks of…
McClintock, Shawn M; Reti, Irving M; Carpenter, Linda L; McDonald, William M; Dubin, Marc; Taylor, Stephan F; Cook, Ian A; O'Reardon, John; Husain, Mustafa M; Wall, Christopher; Krystal, Andrew D; Sampson, Shirlene M; Morales, Oscar; Nelson, Brent G; Latoussakis, Vassilios; George, Mark S; Lisanby, Sarah H
To provide expert recommendations for the safe and effective application of repetitive transcranial magnetic stimulation (rTMS) in the treatment of major depressive disorder (MDD). Participants included a group of 17 expert clinicians and researchers with expertise in the clinical application of rTMS, representing both the National Network of Depression Centers (NNDC) rTMS Task Group and the American Psychiatric Association Council on Research (APA CoR) Task Force on Novel Biomarkers and Treatments. The consensus statement is based on a review of extensive literature from 2 databases (OvidSP MEDLINE and PsycINFO) searched from 1990 through 2016. The search terms included variants of major depressive disorder and transcranial magnetic stimulation. The results were limited to articles written in English that focused on adult populations. Of the approximately 1,500 retrieved studies, a total of 118 publications were included in the consensus statement and were supplemented with expert opinion to achieve consensus recommendations on key issues surrounding the administration of rTMS for MDD in clinical practice settings. In cases in which the research evidence was equivocal or unclear, a consensus decision on how rTMS should be administered was reached by the authors of this article and is denoted in the article as "expert opinion." Multiple randomized controlled trials and published literature have supported the safety and efficacy of rTMS antidepressant therapy. These consensus recommendations, developed by the NNDC rTMS Task Group and APA CoR Task Force on Novel Biomarkers and Treatments, provide comprehensive information for the safe and effective clinical application of rTMS in the treatment of MDD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, T.A.
1998-11-01
The objectives of this task are to: Develop a model (paper) to estimate the cost and waste generation of cleanup within the Environmental Management (EM) complex; Identify technologies applicable to decontamination and decommissioning (D and D) operations within the EM complex; Develop a database of facility information as linked to project baseline summaries (PBSs). The above objectives are carried out through the following four subtasks: Subtask 1--D and D Model Development, Subtask 2--Technology List; Subtask 3--Facility Database, and Subtask 4--Incorporation into a User Model.
2004-01-01
Cognitive Task Analysis. As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base... capabilities and intent. Cognitive Task Analysis (CTA) is an extensive/detailed look at tasks and subtasks performed by a... Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis, edited by J. M. Schraagen, S.
Upper Stage Engine Composite Nozzle Extensions
NASA Technical Reports Server (NTRS)
Valentine, Peter G.; Allen, Lee R.; Gradl, Paul R.; Greene, Sandra E.; Sullivan, Brian J.; Weller, Leslie J.; Koenig, John R.; Cuneo, Jacques C.; Thompson, James; Brown, Aaron;
2015-01-01
Carbon-carbon (C-C) composite nozzle extensions are of interest for use on a variety of launch vehicle upper stage engines and in-space propulsion systems. The C-C nozzle extension technology and test capabilities being developed are intended to support National Aeronautics and Space Administration (NASA) and United States Air Force (USAF) requirements, as well as broader industry needs. Recent and on-going efforts at the Marshall Space Flight Center (MSFC) are aimed at both (a) further developing the technology and databases for nozzle extensions fabricated from specific C-C materials, and (b) developing and demonstrating low-cost capabilities for testing composite nozzle extensions. At present, materials development work is concentrating on developing a database for lyocell-based C-C that can be used for upper stage engine nozzle extension design, modeling, and analysis efforts. Lyocell-based C-C behaves in a manner similar to rayon-based C-C, but does not have the environmental issues associated with the use of rayon. Future work will also further investigate technology and database gaps and needs for more-established polyacrylonitrile- (PAN-) based C-C's. As a low-cost means of being able to rapidly test and screen nozzle extension materials and structures, MSFC has recently established and demonstrated a test rig at MSFC's Test Stand (TS) 115 for testing subscale nozzle extensions with 3.5-inch inside diameters at the attachment plane. Test durations of up to 120 seconds have been demonstrated using oxygen/hydrogen propellants. Other propellant combinations, including the use of hydrocarbon fuels, can be used if desired. Another test capability being developed will allow the testing of larger nozzle extensions (13.5-inch inside diameters at the attachment plane) in environments more similar to those of actual oxygen/hydrogen upper stage engines. Two C-C nozzle extensions (one lyocell-based, one PAN-based) have been fabricated for testing with the larger-scale facility.
ERIC Educational Resources Information Center
Kamienkowski, Juan E.; Pashler, Harold; Dehaene, Stanislas; Sigman, Mariano
2011-01-01
Does extensive practice reduce or eliminate central interference in dual-task processing? We explored the reorganization of task architecture with practice by combining interference analysis (delays in dual-task experiment) and random-walk models of decision making (measuring the decision and non-decision contributions to RT). The main delay…
Examining the relationship between skilled music training and attention.
Wang, Xiao; Ossher, Lynn; Reuter-Lorenz, Patricia A
2015-11-01
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures.
Overview of FEED, the feeding experiments end-user database.
Wall, Christine E; Vinyard, Christopher J; Williams, Susan H; Gapeyev, Vladimir; Liu, Xianhua; Lapp, Hilmar; German, Rebecca Z
2011-08-01
The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.
NASA Astrophysics Data System (ADS)
Rogers, Steven P.; Hamilton, David B.
1994-06-01
To employ the most readily comprehensible presentation methods and symbology with helmet-mounted displays (HMDs), it is critical to identify the information elements needed to perform each pilot function and to analytically determine the attributes of these elements. The extensive analyses of mission requirements currently performed for pilot-vehicle interface design can be aided and improved by the new capabilities of intelligent systems and relational databases. An intelligent system, named ACIDTEST, has been developed specifically for organizing and applying rules to identify the best display modalities, locations, and formats. The primary objectives of the ACIDTEST system are to provide rapid accessibility to pertinent display research data, to integrate guidelines from many disciplines and identify conflicts among these guidelines, to force a consistent display approach among the design team members, and to serve as an 'audit trail' of design decisions and justifications. A powerful relational database called TAWL ORDIR has been developed to document information requirements and attributes for use by ACIDTEST as well as to greatly augment the applicability of mission analysis data. TAWL ORDIR can be used to rapidly reorganize mission analysis data components for study, perform commonality analyses for groups of tasks, determine the information content requirement for tailored display modes, and identify symbology integration opportunities.
Kamatuka, Kenta; Hattori, Masahiro; Sugiyama, Tomoyasu
2016-12-01
RNA interference (RNAi) screening is extensively used in the field of reverse genetics. RNAi libraries constructed using random oligonucleotides have made this technology affordable. However, the new methodology requires exploration of the RNAi target gene information after screening, because the RNAi library includes non-natural sequences that are not found in genes. Here, we developed a web-based tool to support RNAi screening: a system for short hairpin RNA (shRNA) target prediction informed by comprehensive enquiry (SPICE). SPICE automates several tasks that are laborious but indispensable for evaluating the shRNAs obtained by RNAi screening. SPICE has four main functions: (i) identification of the shRNA sequence in the input sequence (which might be obtained by sequencing clones in the RNAi library), (ii) searching for the target genes in the database, (iii) displaying biological information obtained from the database, and (iv) preparation of search-result files that can be used on a local personal computer (PC). Using this system, we demonstrated that the genes targeted by random oligonucleotide-derived shRNAs did not differ from those targeted by organism-specific shRNAs. The system facilitates RNAi screening, which requires sequence analysis after screening. The SPICE web application is available at http://www.spice.sugysun.org/.
Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.
Arita, Masanori; Suwa, Kazuhiro
2008-09-17
In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. that a protein pair does and does not interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches that realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open-source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
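The abstract's central idea, relational selects realized as text-string searches over the structured part of wiki pages, can be illustrated with a short sketch. This is a hedged Python analog, not the authors' PHP extension; the page contents, key names, and the select() helper are invented for illustration.

    import re

    # Hypothetical half-formatted wiki entries: free text plus "key: value" lines.
    wiki_pages = {
        "Quercetin": "A flavonol found in many plants.\nformula: C15H10O7\nspecies: Allium cepa",
        "Naringenin": "A flavanone.\nformula: C15H12O5\nspecies: Citrus paradisi",
    }

    def select(pages, key, value_pattern):
        """Emulate a relational SELECT by string-searching the structured part."""
        hits = {}
        for title, text in pages.items():
            m = re.search(rf"^{key}:\s*(.+)$", text, re.MULTILINE)
            if m and re.search(value_pattern, m.group(1)):
                hits[title] = m.group(1)
        return hits

    # "SELECT * WHERE formula LIKE 'C15H10%'" expressed as a text search.
    print(select(wiki_pages, "formula", r"^C15H10"))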
Occupational exposure to silica in construction workers: a literature-based exposure database.
Beaudry, Charles; Lavoué, Jérôme; Sauvé, Jean-François; Bégin, Denis; Senhaji Rhazi, Mounia; Perrault, Guy; Dion, Chantal; Gérin, Michel
2013-01-01
We created an exposure database of respirable crystalline silica levels in the construction industry from the literature. We extracted silica and dust exposure levels in publications reporting silica exposure levels or quantitative evaluations of control effectiveness published in or after 1990. The database contains 6118 records (2858 of respirable crystalline silica) extracted from 115 sources, summarizing 11,845 measurements. Four hundred and eighty-eight records represent summarized exposure levels instead of individual values. For these records, the reported summary parameters were standardized into a geometric mean and a geometric standard deviation. Each record is associated with 80 characteristics, including information on trade, task, materials, tools, sampling strategy, analytical methods, and control measures. Although the database was constructed in French, 38 essential variables were standardized and translated into English. The data span the period 1974-2009, with 92% of the records corresponding to personal measurements. Thirteen standardized trades and 25 different standardized tasks are associated with at least five individual silica measurements. Trade-specific respirable crystalline silica geometric means vary from 0.01 (plumber) to 0.30 mg/m³ (tunnel construction skilled labor), while tasks vary from 0.01 (six categories, including sanding and electrical maintenance) to 1.59 mg/m³ (abrasive blasting). Despite limitations associated with the use of literature data, this database can be analyzed using meta-analytical and multivariate techniques and currently represents the most important source of exposure information about silica exposure in the construction industry. It is available on request to the research community.
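The standardization of reported summary parameters into a geometric mean and geometric standard deviation can be illustrated with one common conversion. This minimal Python sketch assumes the underlying exposures are lognormal and that a source reported an arithmetic mean and standard deviation; the authors' actual standardization rules are not detailed in the abstract.

    import math

    def to_gm_gsd(am, asd):
        """Convert an arithmetic mean/SD to a geometric mean/GSD,
        assuming the exposure data are lognormally distributed."""
        sigma2 = math.log(1.0 + (asd / am) ** 2)   # variance of log(X)
        mu = math.log(am) - 0.5 * sigma2           # mean of log(X)
        return math.exp(mu), math.exp(math.sqrt(sigma2))

    # Example: a summarized record reporting AM = 0.12 mg/m^3, SD = 0.20 mg/m^3.
    gm, gsd = to_gm_gsd(0.12, 0.20)
    print(f"GM = {gm:.3f} mg/m^3, GSD = {gsd:.2f}")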
A spatial database for landslides in northern Bavaria: A methodological approach
NASA Astrophysics Data System (ADS)
Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit
2018-04-01
Landslide databases provide essential information for hazard modeling, for assessing damage to buildings and infrastructure, for mitigation, and for identifying research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software (PostgreSQL, PostGIS), which ensures that its components work well together and enables further extension through adaptations of self-developed software. WISL was also designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled; this sheet also serves as the input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability, enabling operations at the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian Limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. The main aspect of future enhancements of the WISL database is the extension of its data holdings in order to increase research possibilities, as well as the transfer of the system to other regions and countries.
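The kind of "in-database" spatial analysis WISL enables can be sketched with a query pushed down to PostgreSQL/PostGIS. Table and column names here are illustrative assumptions, not WISL's actual schema.

    import psycopg2

    # Connection parameters and schema below are illustrative.
    conn = psycopg2.connect("dbname=wisl user=reader")
    cur = conn.cursor()

    # Landslides in a given lithology within 500 m of infrastructure,
    # computed inside the database rather than in client-side GIS code.
    cur.execute("""
        SELECT l.id, l.activity, ST_Area(l.geom) AS area_m2
        FROM landslides AS l
        JOIN infrastructure AS i
          ON ST_DWithin(l.geom, i.geom, 500)
        WHERE l.lithology = %s
    """, ("Oxfordian Limestone",))

    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()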
Extension of the sasCIF format and its applications for data processing and deposition
Kachala, Michael; Westbrook, John; Svergun, Dmitri
2016-02-01
Recent advances in small-angle scattering (SAS) experimental facilities and data analysis methods have prompted a dramatic increase in the number of users and of projects conducted, causing an upsurge in the number of objects studied, experimental data available and structural models generated. To organize the data and models and make them accessible to the community, the Task Forces on SAS and hybrid methods for the International Union of Crystallography and the Worldwide Protein Data Bank envisage developing a federated approach to SAS data and model archiving. Within the framework of this approach, the existing databases may exchange information and provide independent but synchronized entries to users. At present, ways of exchanging information between the various SAS databases are not established, leading to possible duplication and incompatibility of entries, and limiting the opportunities for data-driven research for SAS users. In this work, a solution is developed to resolve these issues and provide a universal exchange format for the community, based on the use of the widely adopted crystallographic information framework (CIF). The previous version of the sasCIF format, implemented as an extension of the core CIF dictionary, has been available since 2000 to facilitate SAS data exchange between laboratories. The sasCIF format has now been extended to describe comprehensively the necessary experimental information, results and models, including relevant metadata for SAS data analysis and for deposition into a database. Processing tools for these files (sasCIFtools) have been developed, and these are available both as standalone open-source programs and integrated into the SAS Biological Data Bank, allowing the export and import of data entries as sasCIF files. Software modules to save the relevant information directly from beamline data-processing pipelines in sasCIF format are also developed. Lastly, this update of sasCIF and the relevant tools are an important step in the standardization of the way SAS data are presented and exchanged, to make the results easily accessible to users and to promote further the application of SAS in the structural biology community.
Managing Multiple Tasks in Complex, Dynamic Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Null, Cynthia H. (Technical Monitor)
1998-01-01
Sketchy planners are designed to achieve goals in realistically complex, time-pressured, and uncertain task environments. However, the ability to manage multiple, potentially interacting tasks in such environments requires extensions to the functionality these systems typically provide. This paper identifies a number of factors affecting how interacting tasks should be prioritized, interrupted, and resumed, and then describes a sketchy planner called APEX that takes account of these factors when managing multiple tasks.
Locality constrained joint dynamic sparse representation for local matching based face recognition.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
2014-01-01
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose, and disguise variations in face images will decrease the performance of SRC and of most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms, which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
Cieslewicz, Artur; Dutkiewicz, Jakub; Jedrzejek, Czeslaw
2018-01-01
Information retrieval from biomedical repositories has become a challenging task because of their increasing size and complexity. To facilitate research aimed at improving the search for relevant documents, various information retrieval challenges have been launched. In this article, we present the improved medical information retrieval systems designed by Poznan University of Technology and Poznan University of Medical Sciences as a contribution to the bioCADDIE 2016 challenge, a task focusing on information retrieval from a collection of 794,992 datasets generated from 20 biomedical repositories. The system developed by our team utilizes the Terrier 4.2 search platform enhanced by a query expansion method using word embeddings. This approach, after post-challenge modifications and improvements (with particular regard to assigning proper weights to original and expanded terms), achieved the second-best infNDCG measure (0.4539) among the challenge results, with an infAP of 0.3978. This demonstrates that proper utilization of word embeddings can be a valuable addition to the information retrieval process. Some analysis is provided on related work involving other bioCADDIE contributions. We discuss the possibility of improving our results by using better word-embedding schemes to find candidates for query expansion. Database URL: https://biocaddie.org/benchmark-data PMID:29688372
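The query expansion approach described above can be sketched as follows. This is a hedged Python illustration using gensim-style pretrained vectors, not the team's Terrier integration; the vector file, neighbour count, and weights are placeholders rather than the tuned values from the paper.

    from gensim.models import KeyedVectors

    # Pretrained biomedical word vectors (path and weights are illustrative).
    vectors = KeyedVectors.load_word2vec_format("biomedical_vectors.bin", binary=True)

    def expand_query(terms, k=3, orig_weight=1.0, exp_weight=0.4):
        """Expand each query term with its k nearest embedding neighbours,
        down-weighting expanded terms relative to the originals."""
        weighted = {t: orig_weight for t in terms}
        for t in terms:
            if t in vectors:
                for neighbour, _sim in vectors.most_similar(t, topn=k):
                    weighted.setdefault(neighbour, exp_weight)
        return weighted

    print(expand_query(["glioblastoma", "methylation"]))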
Integrating Scientific Array Processing into Standard SQL
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter
2014-05-01
We live in a time that is dominated by data. Data storage is cheap, and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data are almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays, and in an extension also multidimensional arrays, but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient, and scalable access to the heterogeneous data in them.
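The abstract does not reproduce the proposed syntax, but the flavor of an array-extended SQL query, and its semantics, can be sketched. The SQL below is hypothetical illustrative syntax, with the equivalent operation expressed in NumPy for comparison.

    import numpy as np

    # Hypothetical array-extended SQL of the kind such systems provide
    # (syntax is illustrative, not the poster's actual grammar):
    #   SELECT avg_cells(t.band4[100:200, 100:200])
    #   FROM satellite_images AS t
    #   WHERE t.acquired = '2014-01-01';

    # The same semantics expressed directly on an in-memory array:
    band4 = np.random.rand(1000, 1000)      # stand-in for a stored raster
    subset = band4[100:200, 100:200]        # trim to a spatial window
    print(subset.mean())                    # aggregate over the window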
ERIC Educational Resources Information Center
Hoover, Ryan E.
This study examines (1) subject content, (2) file size, (3) types of documents indexed, (4) range of years spanned, and (5) level of indexing and abstracting in five databases which collectively provide extensive coverage of the forestry and forest products industries: AGRICOLA, CAB ABSTRACTS, FOREST PRODUCTS (AIDS), PAPERCHEM, and PIRA. The…
Task-Driven Dynamic Text Summarization
ERIC Educational Resources Information Center
Workman, Terri Elizabeth
2011-01-01
The objective of this work is to examine the efficacy of natural language processing (NLP) in summarizing bibliographic text for multiple purposes. Researchers have noted the accelerating growth of bibliographic databases. Information seekers using traditional information retrieval techniques when searching large bibliographic databases are often…
biochem4j: Integrated and extensible biochemical knowledge through graph databases.
Swainston, Neil; Batista-Navarro, Riza; Carbonell, Pablo; Dobson, Paul D; Dunstan, Mark; Jervis, Adrian J; Vinaixa, Maria; Williams, Alan R; Ananiadou, Sophia; Faulon, Jean-Loup; Mendes, Pedro; Kell, Douglas B; Scrutton, Nigel S; Breitling, Rainer
2017-01-01
Biologists and biochemists have at their disposal a number of excellent, publicly available data resources such as UniProt, KEGG, and NCBI Taxonomy, which catalogue biological entities. Despite the usefulness of these resources, they remain fundamentally unconnected. While links may appear between entries across these databases, users are typically only able to follow such links by manual browsing or through specialised workflows. Although many of the resources provide web-service interfaces for computational access, performing federated queries across databases remains a non-trivial but essential activity in interdisciplinary systems and synthetic biology programmes. What is needed are integrated repositories to catalogue both biological entities and-crucially-the relationships between them. Such a resource should be extensible, such that newly discovered relationships-for example, those between novel, synthetic enzymes and non-natural products-can be added over time. With the introduction of graph databases, the barrier to the rapid generation, extension and querying of such a resource has been lowered considerably. With a particular focus on metabolic engineering as an illustrative application domain, biochem4j, freely available at http://biochem4j.org, is introduced to provide an integrated, queryable database that warehouses chemical, reaction, enzyme and taxonomic data from a range of reliable resources. The biochem4j framework establishes a starting point for the flexible integration and exploitation of an ever-wider range of biological data sources, from public databases to laboratory-specific experimental datasets, for the benefit of systems biologists, biosystems engineers and the wider community of molecular biologists and biological chemists.
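Since biochem4j warehouses entities and relationships in a graph database, a typical traversal query can be sketched. This assumes a Neo4j backend and the official Python driver; the node labels, relationship types, and property names below are illustrative assumptions, not biochem4j's published schema.

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    # Traverse from an enzyme to the chemicals its reactions produce.
    # Labels and relationship types here are illustrative.
    QUERY = """
    MATCH (e:Enzyme {name: $name})-[:CATALYSES]->(r:Reaction)
          -[:HAS_PRODUCT]->(c:Chemical)
    RETURN r.id AS reaction, c.name AS product
    """

    with driver.session() as session:
        for record in session.run(QUERY, name="alcohol dehydrogenase"):
            print(record["reaction"], "->", record["product"])
    driver.close()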
Hierarchical Control of Semi-Autonomous Teams Under Uncertainty (HICST)
2004-05-01
Module 4: Database. SoW is 'state of the world'. Figure 3: Integration of modules 1-5; the modules make provision for human intervention, not indicated in the figure. 3. Task execution; 4. Database for state estimation; 5. Java interface to OEP; 6. Robust dynamic programming for...
Bibliographies without Tears: Bibliography-Managers Round-Up.
ERIC Educational Resources Information Center
Science Software Quarterly, 1984
1984-01-01
Reviews and compares "Sci-Mate," "Reference Manager," and "BIBLIOPHILE" software packages used for storage and retrieval tasks involving bibliographic data. Each program handles search tasks well; major differences are in the amount of flexibility in customizing the database structure, their import and export…
Oral cancer databases: A comprehensive review.
Sarode, Gargi S; Sarode, Sachin C; Maniyar, Nikunj; Anand, Rahul; Patil, Shankargouda
2017-11-29
A cancer database is a systematic collection and analysis of information on various human cancers at the genomic and molecular level that can be utilized to understand the steps in carcinogenesis and to advance cancer therapeutics. Oral cancer is one of the leading causes of morbidity and mortality all over the world. The current research efforts in this field are aimed at cancer etiology and therapy. Advanced genomic technologies, including microarrays, proteomics, transcriptomics, and gene sequencing, have culminated in the generation of extensive data and the identification of several genes and microRNAs that are distinctively expressed; this information is stored in the form of various databases. Extensive data from various resources have brought the need for collaboration and data sharing to make effective use of this new knowledge. The current review provides comprehensive information on various publicly accessible databases that contain information pertinent to oral squamous cell carcinoma (OSCC) and on databases designed exclusively for OSCC. The databases discussed in this paper are protein-coding gene databases and microRNA databases. This paper also describes gene overlap among the databases, which will help researchers reduce redundancy and focus on the genes common to more than one database. We hope this introduction will promote awareness and facilitate the usage of these resources in the cancer research community, so that researchers can explore the molecular mechanisms involved in the development of cancer and craft therapeutic strategies accordingly.
Database for propagation models
NASA Astrophysics Data System (ADS)
Kantak, Anil V.
1991-07-01
A propagation researcher, or a systems engineer who intends to use the results of a propagation experiment, is generally faced with various database tasks such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus, the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software towards the desired end. This situation could be eased considerably if an easily accessible propagation database were created containing all the accepted (standardized) propagation phenomena models approved by the propagation research community; the handling of data would also become easier for users. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. The database would also help researchers in that they would not have to document the software and hardware tools used in their research, since the propagation research community would already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.
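The core proposal, a shared database of standardized model implementations through which experimental data are passed, can be sketched as a small model registry. The rain-attenuation power law below follows the widely used ITU-R form gamma = k * R**alpha, but the coefficients are placeholders; nothing here reproduces an actual standardized database.

    # A minimal registry of standardized propagation models (sketch).
    MODELS = {}

    def register(name):
        def wrap(fn):
            MODELS[name] = fn
            return fn
        return wrap

    @register("rain_specific_attenuation")
    def rain_specific_attenuation(rain_rate_mm_h, k=0.0188, alpha=1.217):
        """ITU-style power law gamma = k * R**alpha (dB/km). The
        coefficients here are placeholders; real values depend on
        frequency and polarization."""
        return k * rain_rate_mm_h ** alpha

    # Pass experimental data through a model selected by name.
    for rate in (5.0, 25.0, 50.0):
        print(rate, MODELS["rain_specific_attenuation"](rate))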
Geotherm: the U.S. geological survey geothermal information system
Bliss, J.D.; Rapport, A.
1983-01-01
GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey. Information in the system is available to the public on request. © 1983.
QuakeML - An XML Schema for Seismology
NASA Astrophysics Data System (ADS)
Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.
2004-12-01
We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.
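A minimal event entry in the spirit of such an XML format can be sketched with Python's standard library. Element names below are simplified stand-ins; the actual QuakeML schema is richer and namespaced.

    import xml.etree.ElementTree as ET

    # Minimal, simplified event entry in the spirit of QuakeML
    # (element names abbreviated; the real schema is richer and namespaced).
    quakeml = ET.Element("quakeml")
    event = ET.SubElement(quakeml, "event", publicID="smi:local/event/1")

    origin = ET.SubElement(event, "origin")
    ET.SubElement(origin, "time").text = "2004-12-01T10:15:30Z"
    ET.SubElement(origin, "latitude").text = "46.05"
    ET.SubElement(origin, "longitude").text = "7.42"
    ET.SubElement(origin, "depth").text = "8000"   # metres

    magnitude = ET.SubElement(event, "magnitude")
    ET.SubElement(magnitude, "mag").text = "3.4"

    print(ET.tostring(quakeml, encoding="unicode"))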
Dictionary-driven prokaryotic gene finding.
Shibuya, Tetsuo; Rigoutsos, Isidore
2002-06-15
Gene identification, also known as gene finding or gene recognition, is among the important problems of molecular biology that have been receiving increasing attention with the advent of large scale sequencing projects. Previous strategies for solving this problem can be categorized into essentially two schools of thought: one school employs sequence composition statistics, whereas the other relies on database similarity searches. In this paper, we propose a new gene identification scheme that combines the best characteristics from each of these two schools. In particular, our method determines gene candidates among the ORFs that can be identified in a given DNA strand through the use of the Bio-Dictionary, a database of patterns that covers essentially all of the currently available sample of the natural protein sequence space. Our approach relies entirely on the use of redundant patterns as the agents on which the presence or absence of genes is predicated and does not employ any additional evidence, e.g. ribosome-binding site signals. The Bio-Dictionary Gene Finder (BDGF), the algorithm's implementation, is a single computational engine able to handle the gene identification task across distinct archaeal and bacterial genomes. The engine exhibits performance that is characterized by simultaneous very high values of sensitivity and specificity, and a high percentage of correctly predicted start sites. Using a collection of patterns derived from an old (June 2000) release of the Swiss-Prot/TrEMBL database that contained 451 602 proteins and fragments, we demonstrate our method's generality and capabilities through an extensive analysis of 17 complete archaeal and bacterial genomes. Examples of previously unreported genes are also shown and discussed in detail.
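The first step of the approach, determining candidate ORFs on a DNA strand, can be sketched as follows. This hedged Python sketch covers only ORF enumeration with the standard start and stop codons; the Bio-Dictionary pattern scoring that BDGF applies to each candidate is not reproduced here.

    # Enumerate candidate ORFs on one strand in all three reading frames.
    # BDGF would then score each candidate against Bio-Dictionary patterns;
    # that scoring step is not reproduced here.
    START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

    def find_orfs(dna, min_codons=30):
        orfs = []
        for frame in range(3):
            start = None
            for i in range(frame, len(dna) - 2, 3):
                codon = dna[i:i + 3]
                if start is None and codon == START:
                    start = i
                elif start is not None and codon in STOPS:
                    if (i - start) // 3 >= min_codons:
                        orfs.append((start, i + 3))
                    start = None
        return orfs

    print(find_orfs("ATG" + "GCA" * 40 + "TAA", min_codons=10))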
2014-10-01
Task V.5 (mo 6-47): implement methods to educate/monitor participants on aspects of vit D3 and calcium... potential side effects of vit D3 supplementation. CRFs for adverse event reporting have been developed and included in the protocols submitted for IRB... Task VI.3 (mo 6-47): document acceptance of storage sample in the CGRP database and vit D3 study database. The process for storage of sample in the...
Writing Panels Articulate Extension Public Value in the West
ERIC Educational Resources Information Center
Carroll, Jan. B.; Dinstel, Roxie Rogers; Manton, Linda Marie
2015-01-01
In every era, publicly funded programs seek to document their value. During the centennial celebrations of Cooperative Extension's legislation and establishment, this cry for data became even louder and the demand more intense. The Western Extension Directors Association (WEDA) tasked their Western Region Program Leadership Committee (WRPLC) to…
Development of a Task-Exposure Matrix (TEM) for Pesticide Use (TEMPEST).
Dick, F D; Semple, S E; van Tongeren, M; Miller, B G; Ritchie, P; Sherriff, D; Cherrie, J W
2010-06-01
Pesticides have been associated with increased risks for a range of conditions including Parkinson's disease, but identifying the agents responsible has proven challenging. Improved pesticide exposure estimates would increase the power of epidemiological studies to detect such an association if one exists. Categories of pesticide use were identified from the tasks reported in a previous community-based case-control study in Scotland. Typical pesticides used in each task in each decade were identified from published scientific and grey literature and from expert interviews, with the number of potential agents collapsed into 10 groups of pesticides. A pesticide usage database was then created, using the task list and the typical pesticide groups employed in those tasks across seven decades spanning the period 1945-2005. Information about the method of application and concentration of pesticides used in these tasks was then incorporated into the database. A list was generated of 81 tasks involving pesticide exposure in Scotland covering seven decades producing a total of 846 task per pesticide per decade combinations. A Task-Exposure Matrix for PESTicides (TEMPEST) was produced by two occupational hygienists who quantified the likely probability and intensity of inhalation and dermal exposures for each pesticide group for a given use during each decade. TEMPEST provides a basis for assessing exposures to specific pesticide groups in Scotland covering the period 1945-2005. The methods used to develop TEMPEST could be used in a retrospective assessment of occupational exposure to pesticides for Scottish epidemiological studies or adapted for use in other countries.
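The shape of such a task-exposure matrix can be sketched as a simple data structure: records keyed by task, pesticide group, and decade, each carrying probability and intensity ratings per exposure route. Field names and rating scales below are assumptions for illustration, not TEMPEST's actual coding scheme.

    from dataclasses import dataclass

    @dataclass
    class Exposure:
        probability: float   # assumed 0-1 scale; TEMPEST's actual scales may differ
        intensity: int       # assumed ordinal rating, e.g. 1 = low, 3 = high

    # (task, pesticide_group, decade) -> per-route assessments
    tempest = {
        ("sheep dipping", "organophosphates", 1960): {
            "inhalation": Exposure(probability=0.9, intensity=2),
            "dermal": Exposure(probability=1.0, intensity=3),
        },
    }

    record = tempest[("sheep dipping", "organophosphates", 1960)]
    print(record["dermal"].intensity)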
Identifying relevant data for a biological database: handcrafted rules versus machine learning.
Sehgal, Aditya Kumar; Das, Sanmay; Noto, Keith; Saier, Milton H; Elkan, Charles
2011-01-01
With well over 1,000 specialized biological databases in use today, the task of automatically identifying novel, relevant data for such databases is increasingly important. In this paper, we describe practical machine learning approaches for identifying MEDLINE documents and Swiss-Prot/TrEMBL protein records, for incorporation into a specialized biological database of transport proteins named TCDB. We show that both learning approaches outperform rules created by hand by a human expert. As one of the first case studies involving two different approaches to updating a deployed database, both the methods compared and the results will be of interest to curators of many specialized databases.
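The document-triage idea, training on records already judged relevant or irrelevant to the specialized database and then ranking new candidates, can be sketched with a generic text classifier. This hedged scikit-learn sketch uses placeholder abstracts and labels; it is not the authors' feature set or model.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder training data: abstracts labeled 1 (relevant to the
    # transport-protein database) or 0 (irrelevant).
    abstracts = [
        "ABC transporter mediates multidrug efflux across the membrane",
        "crystal structure of a sodium/proton antiporter",
        "phylogenetic analysis of birdsong dialects",
        "survey of classroom teaching methods",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(abstracts, labels)

    # Rank an unseen abstract by predicted relevance.
    new = ["a permease of the major facilitator superfamily"]
    print(model.predict_proba(new)[0][1])   # probability of being relevant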
The MAO NASU Plate Archive Database. Current Status and Perspectives
NASA Astrophysics Data System (ADS)
Pakuliak, L. K.; Sergeeva, T. P.
2006-04-01
The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy supplementation of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries for optimizing data searches, offers PHP Basic-Authorization-protected access to the administrative interface, and supports a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and the means of supporting database integrity will be given. Methods and means of data verification, as well as tasks for further development, will be discussed.
Olympic Information in the SPORT Database.
ERIC Educational Resources Information Center
Belna, Alison M.; And Others
1984-01-01
Profiles the SPORT database, produced by Sport Information Resource Centre, Ottawa, Ontario, which provides extensive coverage of individual sports including practice, training and equipment, recreation, sports medicine, physical education, sport facilities, and international sport history. Olympic coverage in SPORT, sports sciences, online…
Shim, Jae Kun; Karol, Sohit; Hsu, Jeffrey; de Oliveira, Marcio Alves
2008-04-01
The aim of this study was to investigate the contralateral motor overflow in children during single-finger and multi-finger maximum force production tasks. Forty-five right-handed children, 5-11 years of age, produced maximum isometric pressing force in flexion or extension with single fingers or all four fingers of their right hand. The forces produced by individual fingers of the right and left hands were recorded and analyzed in four-dimensional finger force vector space. The results showed that increases in task (right) hand finger forces were linearly associated with non-task (left) hand finger forces. The ratio of the non-task hand finger force magnitude to the corresponding task hand finger force magnitude, termed motor overflow magnitude (MOM), was greater in extension than flexion. The index finger flexion task showed the smallest MOM values. The similarity between the directions of task hand and non-task hand finger force vectors in four-dimensional finger force vector space, termed motor overflow direction (MOD), was greatest for index finger tasks and smallest for little finger tasks. MOM of a four-finger task was greater than the sum of MOMs of single-finger tasks, a phenomenon termed motor overflow surplus. Contrary to previous studies, no single-finger or four-finger tasks showed significant changes of MOM or MOD with the age of children. We conclude that the contralateral motor overflow in children during finger maximum force production tasks is dependent upon the task fingers and the magnitude and direction of task finger forces.
Spectroscopic data for an astronomy database
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Smith, Peter L.
1995-01-01
Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.
Extending SQL to Support Privacy Policies
NASA Astrophysics Data System (ADS)
Ghazinour, Kambiz; Pun, Sampson; Majedi, Maryam; Chinaci, Amir H.; Barker, Ken
Increasing concerns over Internet applications that violate user privacy by exploiting (back-end) database vulnerabilities must be addressed to protect both customer privacy and to ensure corporate strategic assets remain trustworthy. This chapter describes an extension to database catalogues and Structured Query Language (SQL) for supporting privacy in Internet applications, such as in social networks, e-health, e-government, etc. The idea is to introduce new predicates to SQL commands to capture common privacy requirements, such as purpose, visibility, generalization, and retention for both mandatory and discretionary access control policies. The contribution is that corporations, when creating the underlying databases, will be able to define what their mandatory privacy policies are, with which all application users have to comply. Furthermore, each application user, when providing their own data, will be able to define their own privacy policies with which other users have to comply. The extension is supported with underlying catalogues and algorithms. The experiments demonstrate a very reasonable overhead for the extension. The result is a low-cost mechanism to create new systems that are privacy aware and also to transform legacy databases to their privacy-preserving equivalents. Although the examples are from social networks, one can apply the results to data security and user privacy of other enterprises as well.
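To make the idea of privacy predicates concrete, here is a hypothetical Python sketch of the enforcement logic such catalogue extensions imply: each column carries a purpose, visibility and retention entry, and an access request is admitted only if it complies. The table, column and policy names are invented for illustration and do not reproduce the chapter's actual SQL syntax.

```python
# Hypothetical catalogue entries: every (table, column) pair carries a
# privacy policy; a query is admitted only if its declared purpose and
# visibility match and the value has not outlived its retention limit.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PrivacyPolicy:
    purpose: str          # e.g. "billing"
    visibility: str       # e.g. "owner", "friends", "third-party"
    retention_days: int   # how long the value may be kept

CATALOGUE = {
    ("users", "email"): PrivacyPolicy("contact", "owner", 365),
    ("users", "birthdate"): PrivacyPolicy("age-check", "owner", 30),
}

def admit(table, column, purpose, visibility, stored_on: datetime) -> bool:
    policy = CATALOGUE.get((table, column))
    if policy is None:
        return False  # no policy recorded: deny by default
    expired = datetime.now() > stored_on + timedelta(days=policy.retention_days)
    return (purpose == policy.purpose
            and visibility == policy.visibility
            and not expired)

print(admit("users", "email", "contact", "owner",
            stored_on=datetime.now() - timedelta(days=10)))   # True
```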
An Initial Design of ISO 19152:2012 LADM Based Valuation and Taxation Data Model
NASA Astrophysics Data System (ADS)
Çağdaş, V.; Kara, A.; van Oosterom, P.; Lemmen, C.; Işıkdağ, Ü.; Kathmann, R.; Stubkjær, E.
2016-10-01
A fiscal registry or database is supposed to record geometric, legal, physical, economic, and environmental characteristics in relation to property units, which are subject to immovable property valuation and taxation. Apart from procedural standards, there is no internationally accepted data standard that defines the semantics of fiscal databases. The ISO 19152:2012 Land Administration Domain Model (LADM), an international land administration standard, focuses on legal requirements but considers specifications of external information systems, including valuation and taxation databases, to be out of scope. However, it provides a formalism which allows for an extension that responds to fiscal requirements. This paper introduces an initial version of a LADM Fiscal Extension Module for the specification of databases used in immovable property valuation and taxation. The extension module is designed to facilitate all stages of immovable property taxation, namely the identification of properties and taxpayers, assessment of properties through single or mass appraisal procedures, automatic generation of sales statistics, and the management of tax collection, dealing with arrears and appeals. It is expected that the initial version will be refined through further activities of a possible joint working group under FIG Commission 7 (Cadastre and Land Management) and FIG Commission 9 (Valuation and the Management of Real Estate) in collaboration with other relevant international bodies.
OrChem - An open source chemistry search engine for Oracle(R).
Rijnbeek, Mark; Steinbeck, Christoph
2009-10-22
Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
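From a client's perspective, a server-side similarity search of this kind reduces to a single SQL call. The sketch below shows how that might look from Python via python-oracledb; the PL/SQL entry point orchem_simsearch and its argument list are assumptions for illustration, not OrChem's documented signature.

```python
# Sketch of invoking a server-side chemical similarity search from Python.
import oracledb  # python-oracledb driver

conn = oracledb.connect(user="chem", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

smiles = "c1ccccc1C(=O)O"  # benzoic acid, the query structure
# Hypothetical table-function call; the result is assumed to be
# (compound_id, similarity_score) rows.
cur.execute(
    "SELECT * FROM TABLE(orchem_simsearch(:q, :cutoff))",
    q=smiles, cutoff=0.8,
)
for compound_id, score in cur.fetchall():
    print(compound_id, score)

cur.close()
conn.close()
```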
Bakshi, Sonal R; Shukla, Shilin N; Shah, Pankaj M
2009-01-01
We developed a Microsoft Access-based laboratory management system to facilitate database management of leukemia patients referred for cytogenetic tests with regard to karyotyping and fluorescence in situ hybridization (FISH). The database is custom-made for entry of patient data, clinical details, sample details, cytogenetics test results, and data mining for various ongoing research areas. A number of clinical research laboratory-related tasks are carried out faster using specific "queries." The tasks include tracking clinical progression of a particular patient for multiple visits, treatment response, morphological and cytogenetics response, survival time, automatic grouping of patient inclusion criteria in a research project, tracking various processing steps of samples, turn-around time, and revenue generated. Since 2005 we have collected over 5,000 samples. The database is easily updated and is being adapted for various data maintenance and mining needs.
Enhancements to Demilitarization Process Maps Program (ProMap)
2016-10-14
map tool, ProMap, was improved by implementing new features and sharing data with MIDAS and AMDIT databases. Specifically, process efficiency was...improved by 1) providing access to APE information contained in the AMDIT database directly from inside ProMap when constructing a process map, 2...what equipment can be efficiently used to demil a particular munition. Associated with this task was the upgrade of the AMDIT database so that
Web application for detailed real-time database transaction monitoring for CMS condition data
NASA Astrophysics Data System (ADS)
de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2012-12-01
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, the task of monitoring different databases is a crucial database administration issue, since different information may be required depending on users' tasks, such as data transfer, inspection, planning and security issues. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI we record traces of user interactions that are used to build use case models. In addition, the application detects errors in database transactions (for example, identifying user mistakes, application failures, unexpected network shutdowns or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep pace with new developments in Web client tools, our application was further developed and new features were deployed.
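The transaction-classification step described here can be pictured as a small rule engine over logged transaction records. The following Python sketch is an illustrative assumption about what such rules might look like; the record fields, thresholds and messages are invented, not taken from the CMS application.

```python
def classify(tx: dict) -> str:
    """Map one logged transaction record to a warning message or 'ok'."""
    if tx.get("sql_error"):
        return f"WARNING: SQL statement error {tx['sql_error']}"
    if tx.get("status") == "aborted":
        return "WARNING: aborted transaction (client failure or network shutdown?)"
    if tx.get("elapsed_s", 0) > 300:
        return "WARNING: long-running transaction, check transfer/planning"
    return "ok"

# Illustrative event records as a monitor might pull them from a log table.
events = [
    {"session": "cms_cond_01", "status": "committed", "elapsed_s": 2},
    {"session": "cms_cond_02", "status": "aborted"},
    {"session": "cms_cond_03", "status": "failed", "sql_error": "ORA-00942"},
]
for e in events:
    print(e["session"], "->", classify(e))
```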
Grau-Sánchez, Jennifer; Ramos, Neus; Duarte, Esther; Särkämö, Teppo; Rodríguez-Fornells, Antoni
2017-09-01
Previous studies have shown that Music-Supported Therapy (MST) can improve motor function and promote functional neuroplastic changes in motor areas; however, the time course of motor gains across MST sessions and treatment periods remains unknown. The aim of this study was to explore the progression of the rehabilitation of motor deficits in a chronic stroke patient over a period of 7 months. A reversal design (ABAB) was implemented, where no treatment was provided in the A periods and MST was applied in the B periods. Each period comprised 4 weeks, and an extensive evaluation of motor function using clinical motor tests and 3D movement analysis was performed weekly. During the MST periods, a keyboard task was recorded daily. A follow-up evaluation was performed 3 months after the second MST treatment. Improvements were observed during the first sessions in the keyboard task, but clinical gains were noticeable only at the end of the first treatment and during the second treatment period. These gains were maintained in the follow-up evaluation. This is the first study examining the pattern of motor recovery progression in MST, evidencing that gradual and continuous motor improvements are possible with the repeated application of MST training. Fast acquisition of specific motor abilities was observed at the beginning of the MST training, but generalization of these improvements to other motor tasks took place at the end or when another treatment period was provided. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Emotion effects on implicit and explicit musical memory in normal aging.
Narme, Pauline; Peretz, Isabelle; Strub, Marie-Laure; Ergis, Anne-Marie
2016-12-01
Normal aging affects explicit memory while leaving implicit memory relatively spared. Normal aging also modifies how emotions are processed and experienced, with increasing evidence that older adults (OAs) focus more on positive information than younger adults (YAs). The aim of the present study was to investigate how age-related changes in emotion processing influence explicit and implicit memory. We used emotional melodies that differed in terms of valence (positive or negative) and arousal (high or low). Implicit memory was assessed with a preference task exploiting exposure effects, and explicit memory with a recognition task. Results indicated that effects of valence and arousal interacted to modulate both implicit and explicit memory in YAs. In OAs, recognition was poorer than in YAs; however, recognition of positive and high-arousal (happy) studied melodies was comparable. Insofar as socioemotional selectivity theory (SST) predicts a preservation of the recognition of positive information, our findings are not fully consistent with the extension of this theory to positive melodies since recognition of low-arousal (peaceful) studied melodies was poorer in OAs. In the preference task, YAs showed stronger exposure effects than OAs, suggesting an age-related decline of implicit memory. This impairment is smaller than the one observed for explicit memory (recognition), extending to the musical domain the dissociation between explicit memory decline and implicit memory relative preservation in aging. Finally, the disproportionate preference for positive material seen in OAs did not translate into stronger exposure effects for positive material suggesting no age-related emotional bias in implicit memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Finger Interdependence: Linking the Kinetic and Kinematic Variables
Kim, Sun Wook; Shim, Jae Kun; Zatsiorsky, Vladimir M.; Latash, Mark L.
2008-01-01
We studied the dependence between voluntary motion of a finger and pressing forces produced by the tips of other fingers of the hand. Subjects moved one of the fingers (task finger) of the right hand trying to follow a cyclic, ramp-like flexion-extension template at different frequencies. The other fingers (slave fingers) were restricted from moving; their flexion forces were recorded and analyzed. Index finger motion caused the smallest force production by the slave fingers. Larger forces were produced by the neighbors of the task finger; these forces showed strong modulation over the range of motion of the task finger. The enslaved forces were higher during the flexion phase of the movement cycle as compared to the extension phase. The index of enslaving expressed in N/rad was higher when the task finger moved through the more flexed postures. The dependence of enslaving on both range and direction of task finger motion poses problems for methods of analysis of finger coordination based on an assumption of universal matrices of finger inter-dependence. PMID:18255182
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
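The pipeline sketched in this abstract maps naturally onto a few lines of scikit-learn: fit a sparse (l1-penalized) model on the reference task, keep the voxels with nonzero weights as the ROI, then train on the small new cohort restricted to those voxels. The sketch below uses random placeholder data and standard scikit-learn calls; it is a schematic of the approach, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder "voxel" data: a large reference cohort and a small new one.
X_ref, y_ref = rng.standard_normal((200, 5000)), rng.integers(0, 2, 200)
X_new, y_new = rng.standard_normal((30, 5000)), rng.integers(0, 2, 30)

# Sparse model on the reference task: its nonzero weights define the ROI.
sparse = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
sparse.fit(X_ref, y_ref)
roi = np.flatnonzero(sparse.coef_.ravel())

# Small new cohort: fit and evaluate on the selected voxels only,
# which is where the gain in statistical power comes from.
clf = LogisticRegression().fit(X_new[:, roi], y_new)
print(f"{roi.size} voxels selected; score = {clf.score(X_new[:, roi], y_new):.2f}")
```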
A wrist tendon travel assessment of hand movements associated with industrial repetitive activities.
Ugbolue, U Chris; Nicol, Alexander C
2012-01-01
To investigate hand movements associated with carpal tunnel syndrome during slow- and fast-paced industrial repetitive activity, where movements are evaluated based on finger and wrist tendon travel measurements. Nine healthy subjects aged between 23 and 33 years were recruited for the study. Participants mimicked an industrial repetitive task by performing the following activities: a wrist flexion and extension task, a palm open and close task, and a pinch task. Each task was performed for a total of 5 minutes: 3 minutes at a slow pace (0.33 Hz) and 2 minutes at a fast pace (1 Hz). The flexor digitorum superficialis (FDS) tendons showed greater tendon travel than the flexor digitorum profundus (FDP) tendons. The left hand mean (SD) tendon travel for the FDS tendon and FDP tendon was 11108 (5188) mm and 9244 (4328) mm, while the right hand mean (SD) tendon travel for the FDS tendon and FDP tendon was 9225 (3441) mm and 7670 (2856) mm, respectively. Of the three tasks mimicking an industrial repetitive activity, the wrist flexion and extension task produced the most tendon travel. The findings may be useful to researchers in classifying the level of strenuous activity in relation to tendon travel.
ERIC Educational Resources Information Center
Johnson, Donald M.; Ferguson, James A.; Vokins, Nancy W.; Lester, Melissa L.
2000-01-01
Over 50% of faculty teaching undergraduate agriculture courses (n=58) required use of word processing, Internet, and electronic mail; less than 50% required spreadsheets, databases, graphics, or specialized software. They planned to maintain or increase required computer tasks in their courses. (SK)
When intensions do not map onto extensions: Individual differences in conceptualization.
Hampton, James A; Passanisi, Alessia
2016-04-01
Concepts are represented in the mind through knowledge of their extensions (the class of items to which the concept applies) and intensions (features that distinguish that class of items). A common assumption among theories of concepts is that the 2 aspects are intimately related. Hence if there is systematic individual variation in concept representation, the variation should correlate between extensional and intensional measures. A pair of individuals with similar extensional beliefs about a given concept should also share similar intensional beliefs. To test this notion, exemplars (extensions) and features (intensions) of common categories were rated for typicality and importance respectively across 2 occasions. Within-subject consistency was greater than between-subjects consensus on each task, providing evidence for systematic individual variation. Furthermore, the similarity structure between individuals for each task was stable across occasions. However, across 5 samples, similarity between individuals for extensional judgments did not map onto similarity between individuals for intensional judgments. The results challenge the assumption common to many theories of conceptual representation that intensions determine extensions and support a hybrid view of concepts where there is a disconnection between the conceptual resources that are used for the 2 tasks. (c) 2016 APA, all rights reserved.
Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R
2007-12-11
Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the way users access distinct information. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the Sequence Retrieval System (SRS) data integration platform. The library has been written using SOAP definitions and permits programmatic communication with SRS through web services. Interactions take place by invoking the methods described in the WSDL and exchanging XML messages. The functions currently available in the library have been built to access specific data stored in any of 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax. Including the described functions in PHP scripts enables them to act as web service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies the use of the library to retrieve information from the registries of a Plant Defense Mechanisms database. This database is currently being developed, and SRS.php is proposed as the means of acquiring data for the warehousing tasks related to its setup and maintenance.
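SRS.php itself is a PHP library, but the SOAP/WSDL interaction pattern it wraps is language-independent. Purely as an illustration, a Python client built on the zeep library could look like the sketch below; the WSDL URL and the operation names (getEntryList, getField) are hypothetical stand-ins for whatever the SRS service actually exposes.

```python
from zeep import Client

client = Client("http://srs.example.org/srs.wsdl")  # hypothetical WSDL location

# List matching entries with an SRS-style query, then fetch one field each.
ids = client.service.getEntryList(db="UNIPROT",
                                  query="[UNIPROT-org:Arabidopsis*]")
for entry_id in ids[:5]:
    length = client.service.getField(db="UNIPROT", id=entry_id,
                                     field="SeqLength")
    print(entry_id, length)
```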
A Priority Fuzzy Logic Extension of the XQuery Language
NASA Astrophysics Data System (ADS)
Škrbić, Srdjan; Wettayaprasit, Wiphada; Saeueng, Pannipa
2011-09-01
In recent years there have been significant research findings in flexible XML querying techniques using fuzzy set theory. Many types of fuzzy extensions to the XML data model and XML query languages have been proposed. In this paper, we introduce priority fuzzy logic extensions to the XQuery language. In describing these extensions, we introduce a new query language. Moreover, we describe a way to implement an interpreter for this language using an existing XML native database.
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
A Systems Development Life Cycle Project for the AIS Class
ERIC Educational Resources Information Center
Wang, Ting J.; Saemann, Georgia; Du, Hui
2007-01-01
The Systems Development Life Cycle (SDLC) project was designed for use by an accounting information systems (AIS) class. Along the tasks in the SDLC, this project integrates students' knowledge of transaction and business processes, systems documentation techniques, relational database concepts, and hands-on skills in relational database use.…
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases. To appear in an article of the Journal of Database Management.
Burn Injury Assessment Tool with Morphable 3D Human Body Models
2017-04-21
waist, arms and legs measurements) as stored in most anthropometry databases. To improve on burn area estimations, the burn tool will allow the user to...different algorithm for morphing that relies on searching of an extensive anthropometric database, which is created from thousands of randomly...interpolation methods are required. Develop Patient Database: Patient data entered (name, gender, age, anthropometric measurements), collected (photographic
Bridging the Gap between Receptive and Productive Vocabulary Size through Extensive Reading
ERIC Educational Resources Information Center
Yamamoto, Yuka
2011-01-01
It is well established that extensive reading promotes the incidental learning of L1 and L2 receptive vocabulary; however, little is known about its effectiveness on productive gains in vocabulary knowledge. This paper investigates the extent to which extensive reading combined with writing tasks promotes productive vocabulary growth of Japanese…
Extensive Reading in Enhancing Lexical Chunks Acquisition
ERIC Educational Resources Information Center
Pereyra, Nilsa
2015-01-01
The purpose of this action research was to investigate the effect of extensive reading and related activities on the acquisition of lexical chunks in EFL students. Seven adult EFL learners with an Intermediate level volunteered to take part in the 16 week project following Extensive Reading principles combined with tasks based on the Lexical…
Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn
2016-04-01
Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, which is a working group inside the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties. All must be described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and include all the possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. Also, it is essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should consider addressing not only the creation of original data in specific research-question-oriented projects, but also allowing part of the funding to be used for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
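The three-instance separation proposed here (measurement, interpretation, database creation) can be made concrete with a small data model. The sketch below, in Python dataclasses, is one possible reading: measurements are immutable and carry their uncertainties, while interpretations reference a measurement and keep alternative readings alongside the preferred one. All field names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Measurement:
    """Objective, immutable observation with its uncertainties."""
    site_id: str
    lat: float
    lon: float
    elevation_m: float
    elevation_err_m: float
    age_ka: float
    age_err_ka: float
    description: str

@dataclass
class Interpretation:
    """Subjective reading of one measurement; alternatives are retained."""
    measurement: Measurement
    indicative_meaning: str          # e.g. "mean high water"
    rsl_m: float                     # inferred relative sea level
    rsl_err_m: float
    alternatives: List[str] = field(default_factory=list)

m = Measurement("IT-01", 40.8, 14.2, 2.1, 0.3, 81.0, 2.0, "tidal notch")
i = Interpretation(m, "mean sea level", 2.1, 0.5, ["storm bench?"])
print(i.rsl_m, "+/-", i.rsl_err_m)
```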
Predicting the Extension of Biomedical Ontologies
Pesquita, Catia; Couto, Francisco M.
2012-01-01
Developing and extending a biomedical ontology is a very demanding task that can never be considered complete given our ever-evolving understanding of the life sciences. Extension in particular can benefit from the automation of some of its steps, thus releasing experts to focus on harder tasks. Here we present a strategy to support the automation of change capturing within ontology extension where the need for new concepts or relations is identified. Our strategy is based on predicting areas of an ontology that will undergo extension in a future version by applying supervised learning over features of previous ontology versions. We used the Gene Ontology as our test bed and obtained encouraging results with average f-measure reaching 0.79 for a subset of biological process terms. Our strategy was also able to outperform state of the art change capturing methods. In addition we have identified several issues concerning prediction of ontology evolution, and have delineated a general framework for ontology extension prediction. Our strategy can be applied to any biomedical ontology with versioning, to help focus either manual or semi-automated extension methods on areas of the ontology that need extension. PMID:23028267
An ergonomics study of thumb movements on smartphone touch screen.
Xiong, Jinghong; Muraki, Satoshi
2014-01-01
This study investigated the relationships between thumb muscle activity and thumb operating tasks on a smartphone touch screen with one-hand posture. Six muscles in the right thumb and forearm were targeted in this study, namely adductor pollicis, flexor pollicis brevis, abductor pollicis brevis (APB), abductor pollicis longus, first dorsal interosseous (FDI) and extensor digitorum. The performance measures showed that the thumb developed fatigue rapidly when tapping on smaller buttons (diameter: 9 mm compared with 3 mm), and moved more slowly in flexion-extension than in adduction-abduction orientation. Meanwhile, the electromyography and perceived exertion values of FDI significantly increased in small button and flexion-extension tasks, while those of APB were greater in the adduction-abduction task. This study reveals that muscle effort among thumb muscles on a touch screen smartphone varies according to the task, and suggests that the use of small touch buttons should be minimised for better thumb performance.
A software architecture for multidisciplinary applications: Integrating task and data parallelism
NASA Technical Reports Server (NTRS)
Chapman, Barbara; Mehrotra, Piyush; Vanrosendale, John; Zima, Hans
1994-01-01
Data parallel languages such as Vienna Fortran and HPF can be successfully applied to a wide range of numerical applications. However, many advanced scientific and engineering applications are of a multidisciplinary and heterogeneous nature and thus do not fit well into the data parallel paradigm. In this paper we present new Fortran 90 language extensions to fill this gap. Tasks can be spawned as asynchronous activities in a homogeneous or heterogeneous computing environment; they interact by sharing access to Shared Data Abstractions (SDA's). SDA's are an extension of Fortran 90 modules, representing a pool of common data, together with a set of Methods for controlled access to these data and a mechanism for providing persistent storage. Our language supports the integration of data and task parallelism as well as nested task parallelism and thus can be used to express multidisciplinary applications in a natural and efficient way.
Fernández, José M; Valencia, Alfonso
2004-10-12
Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool for the complete, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
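The template-driven dump that YAdumper performs can be illustrated in miniature: rows from a relational query are serialized as XML elements. The Python sketch below uses sqlite3 as a stand-in source database and invented table and element names; it shows the pattern, not YAdumper's Java implementation.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in relational source with a couple of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (ac TEXT, name TEXT, length INT)")
conn.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                 [("P12345", "KinaseA", 412), ("Q67890", "KinaseB", 388)])

# Serialize each row as one XML element, per a fixed template.
root = ET.Element("proteins")
for ac, name, length in conn.execute("SELECT ac, name, length FROM protein"):
    entry = ET.SubElement(root, "entry", ac=ac)
    ET.SubElement(entry, "name").text = name
    ET.SubElement(entry, "length").text = str(length)

print(ET.tostring(root, encoding="unicode"))
```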
Multi-task feature learning by using trace norm regularization
NASA Astrophysics Data System (ADS)
Jiangmei, Zhang; Binfeng, Yu; Haibo, Ji; Wang, Kunpeng
2017-11-01
Multi-task learning can extract the correlation of multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture of expert model to divide a learning task into several related sub-tasks, and then uses the trace norm regularization to extract common feature representation of these sub-tasks. A nonlinear extension of this approach by using kernel is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
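Trace norm regularization is usually implemented through the proximal operator of the trace norm, which soft-thresholds the singular values of the stacked sub-task weight matrix. A minimal numpy sketch of that operator, as one step a proximal-gradient solver would repeat, is given below; the matrix dimensions are placeholders.

```python
import numpy as np

def prox_trace_norm(W: np.ndarray, tau: float) -> np.ndarray:
    """argmin_X 0.5*||X - W||_F^2 + tau*||X||_*  via singular value shrinkage."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

W = np.random.default_rng(1).standard_normal((6, 4))  # 6 sub-tasks, 4 features
W_low_rank = prox_trace_norm(W, tau=1.0)
print(np.linalg.matrix_rank(W_low_rank), "<=", np.linalg.matrix_rank(W))
```

A proximal-gradient solver would alternate this shrinkage with gradient steps on the data-fit loss, driving the sub-task weight vectors toward a shared low-rank feature representation.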
Brohée, Sylvain; Barriot, Roland; Moreau, Yves
2010-09-01
In recent years, the number of knowledge bases developed using Wiki technology has exploded. Unfortunately, next to their numerous advantages, classical Wikis present a critical limitation: the invaluable knowledge they gather is represented as free text, which hinders its computational exploitation. This is in sharp contrast with the current practice for biological databases, where the data is made available in a structured way. Here, we present WikiOpener, an extension for the classical MediaWiki engine that augments Wiki pages by allowing on-the-fly querying and formatting of resources external to the Wiki. Those resources may provide data extracted from databases or DAS tracks, or even results returned by local or remote bioinformatics analysis tools. This also implies that structured data can be edited via dedicated forms. Hence, this generic resource combines the structure of biological databases with the flexibility of collaborative Wikis. The source code and its documentation are freely available on the MediaWiki website: http://www.mediawiki.org/wiki/Extension:WikiOpener.
ERIC Educational Resources Information Center
Gazan, Rich
2000-01-01
Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…
ERIC Educational Resources Information Center
Saraç, Hatice Sezgi
2018-01-01
In this study, it was aimed to compare two distinct methodologies of grammar instruction: task-based and form-focused teaching. Within the application procedure, which lasted for one academic term, two groups of tertiary level learners (N = 53) were exposed to the same sequence of target structures, extensive writing activities and evaluation…
Map and data for Quaternary faults and folds in New Mexico
Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.
1998-01-01
The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996 #993) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press #1750).
Face recognition: database acquisition, hybrid algorithms, and human studies
NASA Astrophysics Data System (ADS)
Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry
1997-02-01
One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus as provided by ensembles of networks for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds, as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.
Kierkegaard, Signe; Jørgensen, Peter Bo; Dalgas, Ulrik; Søballe, Kjeld; Mechlenburg, Inger
2015-09-01
During movement tasks, patients with medial compartment knee osteoarthritis use compensatory strategies to minimise the joint load of the affected leg. Movement strategies of the knees and trunk have been investigated, but less is known about movement strategies of the pelvis during advancing functional tasks, and how these strategies are associated with leg extension power. The aim of the study was to investigate pelvic movement strategies and leg extension power in patients with end-stage medial compartment knee osteoarthritis compared with controls. 57 patients (mean age 65.6 years) scheduled for medial uni-compartmental knee arthroplasty, and 29 age and gender matched controls were included in this cross-sectional study. Leg extension power was tested with the Nottingham Leg Extension Power-Rig. Pelvic range of motion was derived from an inertia-based measurement unit placed over the sacrum bone during walking, stair climbing and stepping. Patients had lower leg extension power than controls (20-39 %, P < 0.01) and used greater pelvic range of motion during stair and step ascending and descending (P ≤ 0.03, except for pelvic range of motion in the frontal plane during ascending, P > 0.06). Furthermore, an inverse association (coefficient: -0.03 to -0.04; R² = 13-22 %) between leg extension power and pelvic range of motion during stair and step descending was found in the patients. Compared to controls, patients with medial compartment knee osteoarthritis use greater pelvic movements during advanced functional performance tests, particularly when these involve descending tasks. Further studies should investigate if it is possible to alter these movement strategies by an intervention aimed at increasing strength and power for the patients.
Classification of arrhythmia using hybrid networks.
Haseena, Hassan H; Joseph, Paul K; Mathew, Abraham T
2011-12-01
Reliable detection of arrhythmias based on digital processing of electrocardiogram (ECG) signals is vital in providing suitable and timely treatment to a cardiac patient. Due to corruption of ECG signals with multiple frequency noise and the presence of multiple arrhythmic events in a cardiac rhythm, computerized interpretation of abnormal ECG rhythms is a challenging task. This paper focuses on a Fuzzy C-Mean (FCM) clustered Probabilistic Neural Network (PNN) and a Multi Layered Feed Forward Network (MLFFN) for the discrimination of eight types of ECG beats. Parameters such as fourth-order Auto Regressive (AR) coefficients along with Spectral Entropy (SE) are extracted from each ECG beat, and feature reduction has been carried out using FCM clustering. The cluster centers form the input of the neural network classifiers. The extensive analysis of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database shows that the FCM clustered PNN is superior in cardiac arrhythmia classification to the FCM clustered MLFFN, with overall accuracies of 99.05% and 97.14%, respectively.
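The two feature types used here are easy to sketch for a single beat: fourth-order autoregressive coefficients fitted by least squares, plus the spectral entropy of the normalized power spectrum. The Python sketch below uses a synthetic signal and illustrates only the feature extraction step, not the FCM clustering or the network classifiers.

```python
import numpy as np

def ar_coefficients(x: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares fit of x[t] = a1*x[t-1] + ... + a_order*x[t-order]."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def spectral_entropy(x: np.ndarray) -> float:
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(p)))

# Synthetic stand-in for one ECG beat.
rng = np.random.default_rng(2)
beat = np.sin(np.linspace(0, 8 * np.pi, 180)) + 0.05 * rng.standard_normal(180)
features = np.concatenate([ar_coefficients(beat), [spectral_entropy(beat)]])
print(features)
```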
Person Re-Identification via Distance Metric Learning With Latent Variables.
Sun, Chong; Wang, Dong; Lu, Huchuan
2017-01-01
In this paper, we propose an effective person re-identification method with latent variables, which represents a pedestrian as the mixture of a holistic model and a number of flexible models. Three types of latent variables are introduced to model uncertain factors in the re-identification problem, including vertical misalignments, horizontal misalignments and leg posture variations. The distance between two pedestrians can be determined by minimizing a given distance function with respect to latent variables, and then be used to conduct the re-identification task. In addition, we develop a latent metric learning method for learning the effective metric matrix, which can be solved in an iterative manner: once latent information is specified, the metric matrix can be obtained based on some typical metric learning methods; with the computed metric matrix, the latent variables can be determined by searching the state space exhaustively. Finally, extensive experiments are conducted on seven databases to evaluate the proposed method. The experimental results demonstrate that our method achieves better performance than other competing algorithms.
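The latent-variable distance can be pictured as a minimization over a small discrete state space. The sketch below treats only one of the three latent factors (vertical misalignment, modelled as a strip shift) and uses an identity metric as a placeholder; the features, shift range and names are illustrative, not the paper's model.

```python
import numpy as np

def latent_distance(x1: np.ndarray, x2_strips: np.ndarray, M: np.ndarray,
                    max_shift: int = 2) -> float:
    """Minimum metric distance over discrete vertical shifts of image 2."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):   # latent vertical shift
        x2 = np.roll(x2_strips, s, axis=0).ravel()
        d = x1 - x2
        best = min(best, float(d @ M @ d))
    return best

rng = np.random.default_rng(3)
strips_a = rng.standard_normal((6, 8))           # 6 horizontal strips, 8 features
strips_b = np.roll(strips_a, 1, axis=0) + 0.1 * rng.standard_normal((6, 8))
M = np.eye(48)                                   # identity metric placeholder
print(latent_distance(strips_a.ravel(), strips_b, M))
```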
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
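The core of the detection algorithm, comparing observed subgoal durations reconstructed from logged events against expected durations scaled by the think-aloud correction factor, fits in a few lines. In the Python sketch below, the event structure, expected durations and the factor value are illustrative assumptions.

```python
TAU_CORRECTION = 1.4   # assumed slow-down when thinking aloud

EXPECTED_S = {"select-case": 10.0, "annotate-slide": 45.0}

def usability_problems(events):
    """events: (subgoal, start_s, end_s) tuples pulled from the event database."""
    problems = []
    for subgoal, start, end in events:
        limit = EXPECTED_S[subgoal] * TAU_CORRECTION
        if end - start > limit:
            problems.append((subgoal, round(end - start, 1), limit))
    return problems

log = [("select-case", 0.0, 12.0), ("annotate-slide", 12.0, 110.0)]
print(usability_problems(log))   # only the slide annotation exceeds its limit
```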
Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph.
Mörschel, Konstantin; Breit, Markus; Queisser, Gillian
2017-07-01
Generating realistic and complex computational domains for numerical simulations is often a challenging task. In neuroscientific research, more and more one-dimensional morphology data is becoming publicly available through databases. This data, however, only contains point and diameter information not suitable for detailed three-dimensional simulations. In this paper, we present a novel framework, AnaMorph, that automatically generates water-tight surface meshes from one-dimensional point-diameter files. These surface triangulations can be used to simulate the electrical and biochemical behavior of the underlying cell. In addition to morphology generation, AnaMorph also performs quality control of the semi-automatically reconstructed cells coming from anatomical reconstructions. This toolset allows an extension from the classical dimension-reduced modeling and simulation of cellular processes to a full three-dimensional and morphology-including method, leading to novel structure-function interplay studies in the medical field. The developed numerical methods can further be employed in other areas where complex geometries are an essential component of numerical simulations.
Gabadinho, José; Beteva, Antonia; Guijarro, Matias; Rey-Bakaikoa, Vicente; Spruce, Darren; Bowler, Matthew W.; Brockhauser, Sandor; Flot, David; Gordon, Elspeth J.; Hall, David R.; Lavault, Bernard; McCarthy, Andrew A.; McCarthy, Joanne; Mitchell, Edward; Monaco, Stéphanie; Mueller-Dieckmann, Christoph; Nurizzo, Didier; Ravelli, Raimond B. G.; Thibault, Xavier; Walsh, Martin A.; Leonard, Gordon A.; McSweeney, Sean M.
2010-01-01
The design and features of a beamline control software system for macromolecular crystallography (MX) experiments developed at the European Synchrotron Radiation Facility (ESRF) are described. This system, MxCuBE, allows users to easily and simply interact with beamline hardware components and provides automated routines for common tasks in the operation of a synchrotron beamline dedicated to experiments in MX. Additional functionality is provided through intuitive interfaces that enable the assessment of the diffraction characteristics of samples, experiment planning, automatic data collection and the on-line collection and analysis of X-ray emission spectra. The software can be run in a tandem client-server mode that allows for remote control and relevant experimental parameters and results are automatically logged in a relational database, ISPyB. MxCuBE is modular, flexible and extensible and is currently deployed on eight macromolecular crystallography beamlines at the ESRF. Additionally, the software is installed at MAX-lab beamline I911-3 and at BESSY beamline BL14.1. PMID:20724792
Final Report: Demographic Tools for Climate Change and Environmental Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Neill, Brian
2017-01-24
This report summarizes work over the course of a three-year project (2012-2015, with one year no-cost extension to 2016). The full proposal detailed six tasks: Task 1: Population projection model Task 2: Household model Task 3: Spatial population model Task 4: Integrated model development Task 5: Population projections for Shared Socio-economic Pathways (SSPs) Task 6: Population exposure to climate extremes We report on all six tasks, provide details on papers that have appeared or been submitted as a result of this project, and list selected key presentations that have been made within the university community and at professional meetings.
Sequence modelling and an extensible data model for genomic database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Peter Wei-Der
1992-01-01
The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
Zhang, Shu-Dong; Gant, Timothy W
2009-07-31
Connectivity mapping is a process to recognize novel pharmacological and toxicological properties in small molecules by comparing their gene expression signatures with others in a database. A simple and robust method for connectivity mapping with increased specificity and sensitivity was recently developed, and its utility demonstrated using experimentally derived gene signatures. This paper introduces sscMap (statistically significant connections' map), a Java application designed to undertake connectivity mapping tasks using the recently published method. The software is bundled with a default collection of reference gene-expression profiles based on the publicly available dataset from the Broad Institute Connectivity Map 02, which includes data from over 7000 Affymetrix microarrays, for over 1000 small-molecule compounds, and 6100 treatment instances in 5 human cell lines. In addition, the application allows users to add their custom collections of reference profiles and is applicable to a wide range of other 'omics technologies. The utility of sscMap is twofold. First, it serves to make statistically significant connections between a user-supplied gene signature and the 6100 core reference profiles based on the Broad Institute expanded dataset. Second, it allows users to apply the same improved method to custom-built reference profiles which can be added to the database for future referencing. The software can be freely downloaded from http://purl.oclc.org/NET/sscMap.
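The scoring step at the heart of connectivity mapping can be illustrated with a toy computation. The sketch below is a simplified, generic rank-based connection score with an empirical permutation p-value; it illustrates the approach only and is not sscMap's published statistic (function and variable names are invented).

```python
import numpy as np

def connection_score(ranks: dict, signature: dict) -> float:
    """ranks: gene -> rank in the reference profile (1 = most up-regulated);
    signature: gene -> +1 (expected up) or -1 (expected down)."""
    n = len(ranks)
    centred = {g: n + 1 - 2 * r for g, r in ranks.items()}   # top rank -> +
    raw = sum(sign * centred[g] for g, sign in signature.items())
    return raw / sum(abs(centred[g]) for g in signature)     # scale to [-1, 1]

rng = np.random.default_rng(4)
genes = [f"g{i}" for i in range(1, 101)]
ranks = {g: r for r, g in enumerate(rng.permutation(genes), start=1)}
sig = {"g1": +1, "g2": +1, "g3": -1}
obs = connection_score(ranks, sig)

# Empirical significance: compare against random signatures of equal size.
null = [connection_score(ranks, dict(zip(rng.choice(genes, 3, replace=False),
                                         [1, 1, -1]))) for _ in range(1000)]
print(obs, "p ~", np.mean([abs(x) >= abs(obs) for x in null]))
```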
Fused man-machine classification schemes to enhance diagnosis of breast microcalcifications
NASA Astrophysics Data System (ADS)
Andreadis, Ioannis; Sevastianos, Chatzistergos; George, Spyrou; Konstantina, Nikita
2017-11-01
Computer-aided diagnosis (CADx) approaches are developed towards the effective discrimination between benign and malignant clusters of microcalcifications. Different sources of information are exploited, such as features extracted from image analysis of the region of interest, features related to the location of the cluster inside the breast, the age of the patient, and descriptors provided by the radiologists while performing their diagnostic task. A series of different CADx schemes are implemented, each of which uses a different category of features and adopts a variety of machine learning algorithms and alternative image processing techniques. A novel framework is introduced where these independent diagnostic components are properly combined according to features critical to a radiologist, in an attempt to identify the most appropriate CADx schemes for the case under consideration. An open-access database (the Digital Database for Screening Mammography, DDSM) has been employed to construct a large dataset with cases of varying subtlety, in order to ensure the development of schemes with high generalization ability, as well as extensive evaluation of their performance. The obtained results indicate that the proposed framework succeeds in improving the diagnostic procedure, as the achieved overall classification performance outperforms all the independent single diagnostic components, as well as the radiologists who assessed the same cases, in terms of accuracy, sensitivity, specificity and area under the curve following receiver operating characteristic analysis.
A flood geodatabase and its climatological applications: the case of Catalonia for the last century
NASA Astrophysics Data System (ADS)
Barnolas, M.; Llasat, M. C.
2007-04-01
Floods are the natural hazards that produce the highest number of casualties and material damage in the Western Mediterranean. An improvement in flood risk assessment and study of a possible increase in flooding occurrence are therefore needed. To carry out these tasks it is important to have at our disposal extensive knowledge on historical floods and to find an efficient way to manage this geographical data. In this paper we present a complete flood database spanning the 20th century for the whole of Catalonia (NE Spain), which includes documentary information (affected areas and damage) and instrumental information (meteorological and hydrological records). This geodatabase, named Inungama, has been implemented on a GIS (Geographical Information System) in order to display all the information within a given geographical scenario, as well as to carry out analyses using queries, overlays and calculations. Following a description of the type and amount of information stored in the database and the structure of the information system, the first applications of Inungama are presented. The geographical distribution of floods shows which localities are more likely to be flooded, confirming that the most affected municipalities are the most densely populated ones in coastal areas. Regarding a possible increase in flooding occurrence, a temporal analysis has been carried out, showing a steady increase over the last 30 years.
NASA Technical Reports Server (NTRS)
Gallagher, Seana; Olson, Matt; Blythe, Doug; Heletz, Jacob; Hamilton, Griff; Kolb, Bill; Homans, Al; Zemrowski, Ken; Decker, Steve; Tegge, Cindy
2000-01-01
This document is the NASA AATT Task Order 24 Final Report. NASA Research Task Order 24 calls for the development of eleven distinct task reports. Each task was a necessary exercise in the development of a comprehensive communications system architecture (CSA) for air traffic management and aviation weather information dissemination for 2015, the definition of the interim architecture for 2007, and the transition plan to achieve the desired End State. The eleven tasks are summarized along with the associated Task Order reference. The output of each task was an individual task report. The task reports that make up the main body of this document include Task 5, Task 6, Task 7, Task 8, Task 10, and Task 11. The other tasks provide the supporting detail used in the development of the architecture; those reports are included in the appendices. The detailed user needs, functional communications requirements and engineering requirements associated with Tasks 1, 2, and 3 have been put into a relational database and are provided electronically.
The Stream-Catchment (StreamCat) Dataset: A database of watershed metrics for the conterminous USA
We developed an extensive database of landscape metrics for ~2.65 million streams, and their associated catchments, within the conterminous USA: the Stream-Catchment (StreamCat) Dataset. These data are publicly available and greatly reduce the specialized geospatial expertise n...
Working Memory Training Improves Dual-Task Performance on Motor Tasks.
Kimura, Takehide; Kaneko, Fuminari; Nagahata, Keita; Shibata, Eriko; Aoki, Nobuhiro
2017-01-01
The authors investigated whether working memory training improves performance on a motor-motor dual task consisting of an upper and a lower limb task. The upper limb task was a simple reaction task and the lower limb task was an isometric knee extension task. Forty-five participants (age = 21.8 ± 1.6 years) were assigned to a working memory training group (WM-TRG), a dual-task training group, or a control group. The training duration was 2 weeks (15 min, 4 times/week). Our results indicated that working memory capacity increased significantly only in the WM-TRG. Dual-task performance improved in the WM-TRG and the dual-task training group. Our study provides the novel insight that working memory training improves dual-task performance without specific training on the target motor task.
76 FR 64859 - Pilot Loading of Navigation and Terrain Awareness Database Updates
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-19
... category the task of updating databases used in self-contained, front-panel or pedestal-mounted navigation... Rule This rulemaking would allow pilots of all certificated aircraft equipped with self-contained... verification, or by errors in ATC assignments which may occur during redirection of the flight. Both types of...
One for All: Maintaining a Single Schedule Database for Large Development Projects
NASA Technical Reports Server (NTRS)
Hilscher, R.; Howerton, G.
1999-01-01
Efficiently maintaining and controlling a single schedule database in an Integrated Product Team environment is a significant challenge. It's accomplished effectively with the right combination of tools, skills, strategy, creativity, and teamwork. We'll share our lessons learned maintaining a 20,000-plus-task network on a 36-month project.
Contributions of TetrUSS to Project Orion
NASA Technical Reports Server (NTRS)
Mcmillin, Susan N.; Frink, Neal T.; Kerimo, Johannes; Ding, Djiang; Nayani, Sudheer; Parlette, Edward B.
2011-01-01
The NASA Constellation program has relied heavily on Computational Fluid Dynamics simulations for generating aerodynamic databases and design loads. The Orion Project focuses on the Orion Crew Module and the Orion Launch Abort Vehicle. NASA TetrUSS codes (GridTool/VGRID/USM3D) have been applied in a supporting role to the Crew Exploration Vehicle Aerosciences Project for investigating various aerodynamic sensitivities and supplementing the aerodynamic database. This paper provides an overview of the contributions of the TetrUSS team to the Project Orion Crew Module and Launch Abort Vehicle aerodynamics, along with selected examples that highlight the challenges encountered along the way. A brief description of the geometries and tasks is given, followed by a description of the flow solution process that produced production-level computational solutions. Four tasks conducted by the USM3D team are discussed to show how USM3D provided aerodynamic data for inclusion in the Orion aero-database, contributed data for the build-up of aerodynamic uncertainties for the aero-database, and provided insight into the flow features about the Crew Module and the Launch Abort Vehicle.
Where Field Staff Get Information. Approaching the Electronic Times.
ERIC Educational Resources Information Center
Shih, Win-Yuan; Evans, James F.
1991-01-01
Top 3 information sources identified in a survey of 109 extension agents were extension publications, specialists, and personal files. Electronic sources such as satellite programming and bibliographic databases were used infrequently because of lack of access, user friendliness, and ready applicability of information. (SK)
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
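As a rough illustration of the lesion-centric design described above, the following sketch builds a minimal subject/lesion/treatment schema in SQLite; all table and column names are hypothetical, not the ASPO database's actual structure.

```python
# Hedged sketch of a lesion-centric relational design: each subject can
# have several lesions, and outcomes are tracked per lesion, so untreated
# lesions can serve as natural-course comparisons.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE subject (
    subject_id    INTEGER PRIMARY KEY,
    year_of_birth INTEGER          -- limited identifiers, HIPAA-minded
);
CREATE TABLE lesion (
    lesion_id  INTEGER PRIMARY KEY,
    subject_id INTEGER NOT NULL REFERENCES subject(subject_id),
    diagnosis  TEXT,               -- standardized terminology
    site       TEXT
);
CREATE TABLE treatment (
    treatment_id INTEGER PRIMARY KEY,
    lesion_id    INTEGER NOT NULL REFERENCES lesion(lesion_id),
    modality     TEXT,
    outcome      TEXT              -- response recorded per lesion
);
""")

# Lesions with no treatment rows are the untreated (natural-course) group.
untreated = db.execute("""
    SELECT l.lesion_id FROM lesion l
    LEFT JOIN treatment t ON t.lesion_id = l.lesion_id
    WHERE t.treatment_id IS NULL""").fetchall()
print(untreated)
```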
Papa, Evan V; Garg, Hina; Dibble, Leland E
2015-01-01
Falls are the leading cause of traumatic brain injury and fractures and the No. 1 cause of emergency department visits by older adults. Although declines in muscle strength and sensory function contribute to increased falls in older adults, skeletal muscle fatigue is often overlooked as an additional contributor to fall risk. In an effort to increase awareness of the detrimental effects of skeletal muscle fatigue on postural control, we sought to systematically review research studies examining this issue. The specific purpose of this review was to provide a detailed assessment of how anticipatory and reactive postural control tasks are influenced by acute muscle fatigue in healthy older individuals. An extensive search was performed using the CINAHL, Scopus, PubMed, SPORTDiscus, and AgeLine databases for the period from inception of each database to June 2013. This systematic review used standardized search criteria and quality assessments via the American Academy for Cerebral Palsy and Developmental Medicine Methodology to Develop Systematic Reviews of Treatment Interventions (2008 version, revision 1.2, AACPDM, Milwaukee, Wisconsin). A total of 334 citations were found. Six studies were selected for inclusion, whereas 328 studies were excluded from the analytical review. The majority of articles (5 of 6) utilized reactive postural control paradigms. All studies incorporated extrinsic measures of muscle fatigue, such as declines in maximal voluntary contraction or available active range of motion. The most common biomechanical postural control task outcomes were spatial measures, temporal measures, and end-points of lower extremity joint kinetics. On the basis of systematic review of relevant literature, it appears that muscle fatigue induces clear deteriorations in reactive postural control. A paucity of high-quality studies examining anticipatory postural control supports the need for further research in this area. These results should serve to heighten awareness regarding the potential negative effects of acute muscle fatigue on postural control and support the examination of muscle endurance training as a fall risk intervention in future studies.
Two-Stage Categorization in Brand Extension Evaluation: Electrophysiological Time Course Evidence
Wang, Xiaoyi
2014-01-01
A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This study was an event-related potential study conducted in two experiments. The study found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime–probe paradigm was used in which pairs consisting of a brand name and a product name were presented in three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime–probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculated that P2 could reflect the early low-level and similarity-based processing in the first stage, whereas N400 could reflect the late analytic and category-based processing in the second stage. PMID:25438152
NASA Astrophysics Data System (ADS)
Liang, Y.; Gallaher, D. W.; Grant, G.; Lv, Q.
2011-12-01
Change over time is the central driver of climate change detection. The goal is to diagnose the underlying causes and make projections into the future. In an effort to optimize this process we have developed the Data Rod model, an object-oriented approach that provides the ability to query grid cell changes and their relationships to neighboring grid cells through time. The time series data is organized in time-centric structures called "data rods." A single data rod can be pictured as the multi-spectral data history at one grid cell: a vertical column of data through time. This resolves the long-standing problem of managing time-series data and opens new possibilities for temporal data analysis. This structure enables rapid time-centric analysis at any grid cell across multiple sensors and satellite platforms. Collections of data rods can be spatially and temporally filtered, statistically analyzed, and aggregated for use with pattern matching algorithms. Likewise, individual image pixels can be extracted to generate multi-spectral imagery at any spatial and temporal location. The Data Rods project has created a series of prototype databases to store and analyze massive datasets containing multi-modality remote sensing data. Using object-oriented technology, this method overcomes the operational limitations of traditional relational databases. To demonstrate the speed and efficiency of time-centric analysis using the Data Rods model, we have developed a sea ice detection algorithm. This application determines the concentration of sea ice in a small spatial region across a long temporal window. If performed using traditional analytical techniques, this task would typically require extensive data downloads and spatial filtering. Using Data Rods databases, the exact spatio-temporal data set is immediately available. No extraneous data is downloaded, and all selected data querying occurs transparently on the server side. Moreover, fundamental statistical calculations such as running averages are easily implemented against the time-centric columns of data.
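A minimal sketch of the data-rod idea, assuming a NumPy array as a stand-in for the multi-sensor archive: the time series at one grid cell is a contiguous column, so a running average touches no neighboring cells.

```python
# Hedged sketch of a "data rod": the full time history at one grid cell
# stored as a contiguous column, enabling time-centric statistics without
# any spatial scan. Array shapes and the window size are illustrative.
import numpy as np

# cube[t, y, x]: a stack of gridded observations (e.g. brightness
# temperature used for sea-ice concentration).
cube = np.random.rand(365, 100, 100)

rod = cube[:, 42, 17]                 # one data rod: all times at one cell

window = 30                           # 30-step running average along the rod
kernel = np.ones(window) / window
running_mean = np.convolve(rod, kernel, mode="valid")
print(running_mean.shape)             # (336,)
```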
High Temperature, high pressure equation of state density correlations and viscosity correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapriyal, D.; Enick, R.; McHugh, M.
2012-07-31
Global increase in oil demand and depleting reserves has driven the need to find new oil resources. To find these untapped reservoirs, oil companies are exploring various remote and harsh locations such as deep waters in the Gulf of Mexico, remote arctic regions, unexplored deep deserts, etc. Further, the depth of new oil/gas wells being drilled has increased considerably to tap these new resources. With the increase in well depth, bottomhole temperature and pressure are also increasing to extreme values (i.e., up to 500 F and 35,000 psi). The density and viscosity of natural gas and crude oil at reservoir conditions are critical fundamental properties required for accurate assessment of the amount of recoverable petroleum within a reservoir and the modeling of the flow of these fluids within the porous media. These properties are also used to design appropriate drilling and production equipment such as blow-out preventers, risers, etc. With the present state of the art, there is no accurate database for these fluid properties at extreme conditions. As we have begun to expand this experimental database it has become apparent that there are neither equations of state for density nor transport models for viscosity that can be used to predict these fundamental properties of multi-component hydrocarbon mixtures over a wide range of temperature and pressure. Presently, oil companies are using correlations based on lower temperature and pressure databases that exhibit an unsatisfactory predictive capability at extreme conditions (errors as great as ±50%). From the perspective of these oil companies, which are committed to safely producing these resources, accurately predicting flow rates, and assuring the integrity of the flow, the absence of an extensive experimental database at extreme conditions and of models capable of predicting these properties over an extremely wide range of temperature and pressure (including extreme conditions) makes their task even more daunting.
Feedback and the rationing of time and effort among competing tasks.
Northcraft, Gregory B; Schmidt, Aaron M; Ashford, Susan J
2011-09-01
The study described here tested a model of how characteristics of the feedback environment influence the allocation of resources (time and effort) among competing tasks. Results demonstrated that performers invest more resources on tasks for which higher quality (more timely and more specific) feedback is available; this effect was partially mediated by task salience and task expectancies. Feedback timing and feedback specificity demonstrated both main and interaction effects on resource allocations. Results also demonstrated that performers do better on tasks for which higher quality feedback is available; this effect was mediated by resources allocated to tasks. The practical and theoretical implications of the role of the feedback environment in managing performance are discussed. PsycINFO Database Record (c) 2011 APA, all rights reserved
Variability sensitivity of dynamic texture based recognition in clinical CT data
NASA Astrophysics Data System (ADS)
Kwitt, Roland; Razzaque, Sharif; Lowell, Jeffrey; Aylward, Stephen
2014-03-01
Dynamic texture recognition using a database of template models has recently shown promising results for the task of localizing anatomical structures in Ultrasound video. In order to understand its clinical value, it is imperative to study the sensitivity with respect to inter-patient variability as well as sensitivity to acquisition parameters such as Ultrasound probe angle. Fully addressing patient and acquisition variability issues, however, would require a large database of clinical Ultrasound from many patients, acquired in a multitude of controlled conditions, e.g., using a tracked transducer. Since such data is not readily attainable, we advocate an alternative evaluation strategy using abdominal CT data as a surrogate. In this paper, we describe how to replicate Ultrasound variabilities by extracting subvolumes from CT and interpreting the image material as an ordered sequence of video frames. Utilizing this technique, and based on a database of abdominal CT from 45 patients, we report recognition results on an organ (kidney) recognition task, where we try to discriminate kidney subvolumes/videos from a collection of randomly sampled negative instances. We demonstrate that (1) dynamic texture recognition is relatively insensitive to inter-patient variation while (2) viewing angle variability needs to be accounted for in the template database. Since naively extending the template database to counteract variability issues can lead to impractical database sizes, we propose an alternative strategy based on automated identification of a small set of representative models.
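The CT-as-video surrogate can be sketched in a few lines; the volume shape, sampling location and subvolume size below are illustrative, not the study's parameters.

```python
# Hedged sketch of the surrogate-evaluation idea: slice a CT subvolume and
# treat the ordered slices as video frames for dynamic texture recognition.
import numpy as np

ct = np.random.rand(256, 512, 512)    # CT volume indexed [z, y, x]

z0, y0, x0, size, depth = 80, 200, 260, 64, 32
subvol = ct[z0:z0 + depth, y0:y0 + size, x0:x0 + size]

# Interpreting the z-axis as time yields a depth-frame "video" of
# size x size frames, mimicking a sweep of the transducer over the organ.
frames = [subvol[t] for t in range(subvol.shape[0])]
print(len(frames), frames[0].shape)   # 32 (64, 64)
```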
Perceptual load corresponds with factors known to influence visual search.
Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P
2013-10-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.
A unified classifier for robust face recognition based on combining multiple subspace algorithms
NASA Astrophysics Data System (ADS)
Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad
2012-10-01
Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images, with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, including the question of which classifiers are suitable for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or another. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
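A minimal sketch of the weighted-sum fusion strategy, assuming per-classifier similarity scores are already computed; the scores, weights and normalization choice are illustrative, not the paper's exact procedure.

```python
# Hedged sketch of weighted-sum fusion: each base classifier produces a
# per-subject matching score; the ensemble normalizes and fuses them.
import numpy as np

def weighted_sum_fusion(score_matrix, weights):
    """score_matrix[i, j]: similarity of the probe to gallery subject j
    under classifier i (e.g. a subspace method + distance metric pair)."""
    s = np.asarray(score_matrix, dtype=float)
    # min-max normalize each classifier's scores so they are comparable
    lo = s.min(axis=1, keepdims=True)
    hi = s.max(axis=1, keepdims=True)
    s = (s - lo) / (hi - lo + 1e-12)
    fused = np.average(s, axis=0, weights=weights)
    return int(np.argmax(fused))      # identity with the best fused score

scores = [[0.9, 0.2, 0.4],            # classifier 1 (strong under illumination)
          [0.5, 0.6, 0.3]]            # classifier 2 (strong under expression)
print(weighted_sum_fusion(scores, weights=[0.7, 0.3]))  # -> 0
```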
Positive valence music restores executive control over sustained attention
Baldwin, Carryl L.; Lewis, Bridget A.
2017-01-01
Music sometimes improves performance in sustained attention tasks. But the type of music employed in previous investigations has varied considerably, which can account for equivocal results. Progress has been hampered by lack of a systematic database of music varying in key characteristics like tempo and valence. The aims of this study were to establish a database of popular music varying along the dimensions of tempo and valence and to examine the impact of music varying along these dimensions on restoring attentional resources following performance of a sustained attention to response task (SART) vigil. Sixty-nine participants rated popular musical selections that varied in valence and tempo to establish a database of four musical types: fast tempo positive valence, fast tempo negative valence, slow tempo positive valence, and slow tempo negative valence. A second group of 89 participants performed two blocks of the SART task interspersed with either no break or a rest break consisting of 1 of the 4 types of music or silence. Presenting positive valence music (particularly of slow tempo) during an intermission between two successive blocks of the SART significantly decreased miss rates relative to negative valence music or silence. Results support an attentional restoration theory of the impact of music on sustained attention, rather than arousal theory and demonstrate a means of restoring sustained attention. Further, the results establish the validity of a music database that will facilitate further investigations of the impact of music on performance. PMID:29145395
A Data Preparation Methodology in Data Mining Applied to Mortality Population Databases.
Pérez, Joaquín; Iturbide, Emmanuel; Olivares, Víctor; Hidalgo, Miguel; Martínez, Alicia; Almanza, Nelva
2015-11-01
It is known that the data preparation phase is the most time-consuming in the data mining process, taking up to 50%, or by some estimates up to 70%, of the total project time. Currently, data mining methodologies are general-purpose, and one of their limitations is that they do not provide a guide about which particular tasks to develop in a specific domain. This paper presents a new data preparation methodology oriented to the epidemiological domain, in which we have identified two sets of tasks: General Data Preparation and Specific Data Preparation. For both sets, the Cross-Industry Standard Process for Data Mining (CRISP-DM) is adopted as a guideline. The main contribution of our methodology is fourteen specialized tasks concerning this domain. To validate the proposed methodology, we developed a data mining system and applied the entire process to real mortality databases. The results were encouraging: the use of the methodology reduced some of the time-consuming tasks, and the data mining system revealed unknown and potentially useful patterns for the public health services in Mexico.
SAR target recognition and posture estimation using spatial pyramid pooling within CNN
NASA Astrophysics Data System (ADS)
Peng, Lijiang; Liu, Xiaohua; Liu, Ming; Dong, Liquan; Hui, Mei; Zhao, Yuejin
2018-01-01
Many convolutional neural network (CNN) architectures have been proposed to strengthen performance on synthetic aperture radar automatic target recognition (SAR-ATR) and have obtained state-of-the-art results on target classification on the MSTAR database, but few methods address the estimation of the depression angle and azimuth angle of targets. To better learn hierarchies of feature representations for both the 10-class target classification task and target posture estimation tasks, we propose a new CNN architecture with spatial pyramid pooling (SPP), which builds a high hierarchy of feature maps by dividing the convolved feature maps from finer to coarser levels to aggregate local features of SAR images. Experimental results on the MSTAR database show that the proposed architecture achieves recognition accuracy as high as 99.57% on the 10-class target classification task, matching the most current state-of-the-art methods, and also performs well on target posture estimation tasks involving variation in depression angle and azimuth angle. Furthermore, the results encourage the application of deep learning to SAR target posture description.
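A minimal sketch of a spatial pyramid pooling layer in PyTorch, showing how pooling at several grid resolutions yields a fixed-length feature vector; the pyramid levels and tensor shapes are illustrative, not the paper's exact architecture.

```python
# Hedged sketch of spatial pyramid pooling: pool the final convolutional
# feature maps at several grid resolutions and concatenate, producing a
# fixed-length vector regardless of input size.
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    """x: feature maps of shape (batch, channels, H, W)."""
    pooled = []
    for k in levels:
        # divide the map into a k x k grid and max-pool each cell
        pooled.append(F.adaptive_max_pool2d(x, (k, k)).flatten(start_dim=1))
    return torch.cat(pooled, dim=1)    # (batch, channels * sum(k*k))

features = torch.randn(8, 64, 17, 17)  # e.g. conv output for a SAR chip
print(spatial_pyramid_pool(features).shape)  # torch.Size([8, 1344])
```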
Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro
2013-01-01
The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR now features several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293
OrChem - An open source chemistry search engine for Oracle®
2009-01-01
Background: Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results: Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability: OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
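OrChem itself is Java code running inside Oracle on top of the CDK; as a rough stand-in, the following Python/RDKit sketch shows the underlying fingerprint-and-cutoff similarity screen, not OrChem's actual API.

```python
# Hedged sketch of fingerprint similarity screening of the kind OrChem
# performs inside Oracle (there via the CDK); RDKit is used here only as
# a stand-in to show the principle. Molecules and cutoff are illustrative.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

database_smiles = ["CCO", "CCN", "c1ccccc1O", "CCOC(=O)C"]
query = Chem.MolFromSmiles("CCO")
query_fp = AllChem.GetMorganFingerprintAsBitVect(query, radius=2, nBits=2048)

cutoff = 0.5                          # similarity cut-off, as in OrChem
for smi in database_smiles:
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(query_fp, fp)
    if sim >= cutoff:
        print(smi, round(sim, 2))
```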
Knowledge Based Engineering for Spatial Database Management and Use
NASA Technical Reports Server (NTRS)
Peuquet, D. (Principal Investigator)
1984-01-01
The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.
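As a rough illustration of the quadtree structures mentioned above, here is a minimal point quadtree in Python; node capacity and coordinates are arbitrary, and the report's actual spectra-based heuristics are not reproduced.

```python
# Hedged sketch of a point quadtree for spatial indexing: each node covers
# a square region and splits into four quadrants when it overflows.
class QuadTree:
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size   # lower-left corner + side
        self.capacity, self.points, self.children = capacity, [], None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size):
            return False                         # point outside this node
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h, self.capacity)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:                    # push stored points down
            any(c.insert(*p) for c in self.children)
        self.points = []

tree = QuadTree(0, 0, 100)
for pt in [(10, 10), (60, 75), (61, 76), (62, 77), (63, 78), (64, 79)]:
    tree.insert(*pt)
```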
ERIC Educational Resources Information Center
Lynch, Clifford A.
1991-01-01
Describes several aspects of the problem of supporting information retrieval system query requirements in the relational database management system (RDBMS) environment and proposes an extension to query processing called nonmaterialized relations. User interactions with information retrieval systems are discussed, and nonmaterialized relations are…
78 FR 25095 - Notice of an Extension of an Information Collection (1028-0092)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-29
... the development of The National Map and other national geospatial databases. In FY 2010, projects for... including elevation, orthoimagery, hydrography and other layers in the national databases may be possible. We will accept applications from State, local or tribal governments and academic institutions to...
76 FR 26776 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-09
... current collection of information to the Office of Management and Budget for approval. The Securities and Exchange Commission has begun the design of a new Electronic Data Collection System database (the Database..., Washington, DC 20549-0213. Extension: Electronic Data Collection System; OMB Control No. 3235-0672; SEC File...
76 FR 12617 - Airworthiness Directives; The Boeing Company Model 777-200 and -300 Series Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... installing new operational software for the electrical load management system and configuration database... the electrical load management system operational software and configuration database software, in... Management, P.O. Box 3707, MC 2H-65, Seattle, Washington 98124-2207; telephone 206- 544-5000, extension 1...
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
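A minimal sketch of relationship-preserving scrubbing in the spirit of DBScrub (the original is Java over JDBC): identifying values are replaced through one shared mapping so that implicit cross-table links survive. Table layouts and the pseudonym format are illustrative.

```python
# Hedged sketch: identifying values are replaced by pseudonyms, and the
# mapping is reused so foreign keys in other tables still line up.
import itertools

_pseudonyms = {}
_counter = itertools.count(1)

def scrub(value):
    """Return a stable pseudonym for an identifying value."""
    if value not in _pseudonyms:
        _pseudonyms[value] = f"PATIENT_{next(_counter):05d}"
    return _pseudonyms[value]

admissions = [{"mrn": "555-01", "diagnosis": "asthma"}]
lab_results = [{"mrn": "555-01", "test": "CBC", "value": 7.1}]

# The same source value maps to the same pseudonym in every table, so the
# implicit admission <-> lab relationship survives scrubbing.
for row in admissions + lab_results:
    row["mrn"] = scrub(row["mrn"])
print(admissions, lab_results)
```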
Research on high availability architecture of SQL and NoSQL
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao
2017-03-01
With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to improve in performance and scalability, but more and more companies tend to adopt NoSQL databases, because NoSQL offers a simpler data model and greater extensibility than SQL. Almost all database designers, for SQL and NoSQL databases alike, aim to improve performance and ensure availability through sound architecture that reduces the effects of software and hardware failures, so that they can provide a better experience for their customers. In this paper, we discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
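As one concrete example of such a high-availability architecture, the sketch below uses the redis-py Sentinel client so the application discovers the current Redis master rather than hard-coding it; the hosts and the service name "mymaster" are illustrative.

```python
# Hedged sketch of a failover-aware Redis client: the client asks the
# sentinels for the current master instead of pinning one address.
from redis.sentinel import Sentinel

sentinel = Sentinel([("10.0.0.1", 26379),
                     ("10.0.0.2", 26379),
                     ("10.0.0.3", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # reads

master.set("key", "value")     # after failover, master_for resolves anew
print(replica.get("key"))
```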
Dictionary-driven prokaryotic gene finding
Shibuya, Tetsuo; Rigoutsos, Isidore
2002-01-01
Gene identification, also known as gene finding or gene recognition, is among the important problems of molecular biology that have been receiving increasing attention with the advent of large scale sequencing projects. Previous strategies for solving this problem can be categorized into essentially two schools of thought: one school employs sequence composition statistics, whereas the other relies on database similarity searches. In this paper, we propose a new gene identification scheme that combines the best characteristics from each of these two schools. In particular, our method determines gene candidates among the ORFs that can be identified in a given DNA strand through the use of the Bio-Dictionary, a database of patterns that covers essentially all of the currently available sample of the natural protein sequence space. Our approach relies entirely on the use of redundant patterns as the agents on which the presence or absence of genes is predicated and does not employ any additional evidence, e.g. ribosome-binding site signals. The Bio-Dictionary Gene Finder (BDGF), the algorithm’s implementation, is a single computational engine able to handle the gene identification task across distinct archaeal and bacterial genomes. The engine exhibits performance that is characterized by simultaneous very high values of sensitivity and specificity, and a high percentage of correctly predicted start sites. Using a collection of patterns derived from an old (June 2000) release of the Swiss-Prot/TrEMBL database that contained 451 602 proteins and fragments, we demonstrate our method’s generality and capabilities through an extensive analysis of 17 complete archaeal and bacterial genomes. Examples of previously unreported genes are also shown and discussed in detail. PMID:12060689
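For orientation, the ORF-enumeration step that precedes any scoring can be sketched as follows; this scans one strand in three frames and is only a stand-in for input generation, not the Bio-Dictionary pattern matching itself.

```python
# Hedged sketch of ORF enumeration on one DNA strand: the candidates a
# gene finder such as BDGF would then score against its pattern database.
def find_orfs(seq, min_len=90):
    """Yield (start, end, frame) for ATG...stop ORFs of at least min_len nt."""
    stops = {"TAA", "TAG", "TGA"}
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                       # open a candidate ORF
            elif codon in stops and start is not None:
                if i + 3 - start >= min_len:
                    yield (start, i + 3, frame)
                start = None                    # close it at the stop codon

dna = "AAATGGCT" + "GCA" * 40 + "TAAGG"
print(list(find_orfs(dna)))
```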
Benchmarking protein classification algorithms via supervised cross-validation.
Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor
2008-04-24
Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database, has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was used to construct reduced-size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as the 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic, estimates of classifier performance than do random cross-validation schemes.
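A minimal sketch of the supervised-cross-validation idea: whole known subtypes are held out in turn, so test items are only distantly related to training items. The sample labels are illustrative.

```python
# Hedged sketch of subtype-aware splitting: one fold per known subtype,
# rather than random k-fold, so generalization to novel subtypes is tested.
def subtype_holdout_splits(samples):
    """samples: list of (sequence_id, class_label, subtype_label)."""
    subtypes = sorted({sub for _, _, sub in samples})
    for held_out in subtypes:             # hold out each subtype in turn
        train = [s for s in samples if s[2] != held_out]
        test = [s for s in samples if s[2] == held_out]
        yield held_out, train, test

data = [("seq1", "globin", "alpha"), ("seq2", "globin", "alpha"),
        ("seq3", "globin", "beta"), ("seq4", "globin", "myoglobin")]
for held_out, train, test in subtype_holdout_splits(data):
    print(held_out, len(train), len(test))
```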
NASA Astrophysics Data System (ADS)
Vacca, G.; Pili, D.; Fiorino, D. R.; Pintus, V.
2017-05-01
The work presented here is part of the research project titled "Tecniche murarie tradizionali: conoscenza per la conservazione ed il miglioramento prestazionale" (Traditional building techniques: from knowledge to conservation and performance improvement), whose purpose is to study the building techniques of the 13th-18th centuries in the Sardinia Region (Italy) in order to document, conserve, and promote them. The end purpose of the entire study is to improve the performance of the examined structures. In particular, the task of the authors within the research project was to build a WebGIS to manage the data collected during the examination and study phases. This infrastructure was entirely built using Open Source software. The work consisted of designing a database built in PostgreSQL with its spatial extension PostGIS, which allows storing and managing feature geometries and spatial data. Data input is performed via a form built in HTML and PHP. The HTML part is based on Bootstrap, an open tools library for websites and web applications. The implementation of this template used both PHP and Javascript code. The PHP code manages the reading and writing of data to the database, using embedded SQL queries. As of today, we have surveyed and archived more than 300 buildings, belonging to three main macro categories: fortification architectures, religious architectures, and residential architectures. More than 150 masonry samples have been investigated in relation to the construction techniques. The database is published on the Internet as a WebGIS built using the Leaflet Javascript open libraries, which allow creating map sites with background maps and navigation, input and query tools. This, too, uses a combination of HTML, Javascript, PHP and SQL code.
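A minimal sketch of the kind of spatial query such a PostGIS-backed WebGIS serves, written here with psycopg2; the connection parameters, table and column names are illustrative, not the project's actual schema.

```python
# Hedged sketch: find surveyed buildings within a distance of a point,
# using a PostGIS geography cast so the radius is in metres.
import psycopg2

conn = psycopg2.connect(dbname="masonry", user="gis", host="localhost")
cur = conn.cursor()
cur.execute(
    """
    SELECT name, category
    FROM buildings
    WHERE ST_DWithin(
        geom::geography,
        ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
        %s)  -- search radius in metres
    """,
    (9.11, 39.22, 5000),  # lon, lat near Cagliari; 5 km radius
)
for name, category in cur.fetchall():
    print(name, category)
```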
Private database queries based on counterfactual quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan
2013-08-01
Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding detection apparatus to the QKD device, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency of counterfactual QKD, and by adjusting the relevant parameters, the protocol achieves excellent flexibility and extensibility.
Zhang, Guang Lan; Riemer, Angelika B.; Keskin, Derin B.; Chitkushev, Lou; Reinherz, Ellis L.; Brusic, Vladimir
2014-01-01
High-risk human papillomaviruses (HPVs) are the causes of many cancers, including cervical, anal, vulvar, vaginal, penile and oropharyngeal. To facilitate diagnosis, prognosis and characterization of these cancers, it is necessary to make full use of the immunological data on HPV available through publications, technical reports and databases. These data vary in granularity, quality and complexity. The extraction of knowledge from the vast amount of immunological data using data mining techniques remains a challenging task. To support integration of data and knowledge in virology and vaccinology, we developed a framework called KB-builder to streamline the development and deployment of web-accessible immunological knowledge systems. The framework consists of seven major functional modules, each facilitating a specific aspect of the knowledgebase construction process. Using KB-builder, we constructed the Human Papillomavirus T cell Antigen Database (HPVdb). It contains 2781 curated antigen entries of antigenic proteins derived from 18 genotypes of high-risk HPV and 18 genotypes of low-risk HPV. The HPVdb also catalogs 191 verified T cell epitopes and 45 verified human leukocyte antigen (HLA) ligands. Primary amino acid sequences of HPV antigens were collected and annotated from the UniProtKB. T cell epitopes and HLA ligands were collected from data mining of scientific literature and databases. The data were subject to extensive quality control (redundancy elimination, error detection and vocabulary consolidation). A set of computational tools for an in-depth analysis, such as sequence comparison using BLAST search, multiple alignments of antigens, classification of HPV types based on cancer risk, T cell epitope/HLA ligand visualization, T cell epitope/HLA ligand conservation analysis and sequence variability analysis, has been integrated within the HPVdb. Predicted Class I and Class II HLA binding peptides for 15 common HLA alleles are included in this database as putative targets. HPVdb is a knowledge-based system that integrates curated data and information with tailored analysis tools to facilitate data mining for HPV vaccinology and immunology. To our best knowledge, HPVdb is a unique data source providing a comprehensive list of HPV antigens and peptides. Database URL: http://cvc.dfci.harvard.edu/hpv/ PMID:24705205
TrSDB: a proteome database of transcription factors
Hermoso, Antoni; Aguilar, Daniel; Aviles, Francesc X.; Querol, Enrique
2004-01-01
TrSDB—TranScout Database—(http://ibb.uab.es/trsdb) is a proteome database of eukaryotic transcription factors based upon motifs predicted by TranScout and data sources such as InterPro and Gene Ontology Annotation. Nine eukaryotic proteomes are included in the current version. Extensive and diverse information for each database entry, together with different analyses considering TranScout classification and similarity relationships, is offered for research on transcription factors or gene expression. PMID:14681387
Borst, G; Poirel, N; Pineau, A; Cassotti, M; Houdé, O
2013-07-01
Most children under 7 years of age presented with 10 daisies and 2 roses fail to indicate that there are more flowers than daisies. Instead of the appropriate comparison of the relative numerosities of the superordinate class (flowers) to its subordinate class (daisies), they perform a direct perceptual comparison of the extensions of the 2 subordinate classes (daisies vs. roses). In our experiment, we investigated whether increasing efficiency in solving the Piagetian class-inclusion task is related to increasing efficiency in the ability to resist (inhibit) this direct comparison of the subordinate classes' extensions. Ten-year-old and young adult participants performed a computerized priming version of a Piaget-like class-inclusion task. The experimental design was such that the misleading perceptual strategy to inhibit on the prime (in which a superordinate class had to be compared with a subordinate class) became a congruent strategy to activate on the probe (in which the two subordinate classes' extensions were directly compared). We found a negative priming effect of 291 ms in children and 129 ms in adults. These results provide evidence for the first time (a) that adults still need to inhibit the comparison of the subordinate classes' extensions in class-inclusion tasks and (b) that the ability to inhibit this heuristic increases with age (resulting in a lower executive cost). Taken together, these findings provide additional support for the neo-Piagetian approach of cognitive development that suggests that the acquisition of increasingly complex knowledge is based on the ability to resist (inhibit) heuristics and previously acquired knowledge.
The stability of working memory: do previous tasks influence complex span?
Healey, M Karl; Hasher, Lynn; Danilova, Elena
2011-11-01
Schmeichel (2007) reported that performing an initial task before completing a working memory span task can lower span scores and suggested that the effect was due to depleted cognitive resources. We showed that the detrimental effect of prior tasks depends on a match between the stimuli used in the span task and the preceding task. A task requiring participants to ignore words reduced performance on a subsequent word-based verbal span task but not on an arrow-based spatial span task. Ignoring arrows had the opposite pattern of effects: reducing performance on the spatial span task but not on the word-based span task. Finally, we showed that antisaccade, a nonverbal task that taxes domain-general processes implicated in working memory, did not influence subsequent performance of either a verbal or a spatial span task. Together these results suggest that while span is sensitive to prior tasks, that sensitivity does not stem from depleted resources. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV)
Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-01
The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. PMID:29040670
A human factors analysis of EVA time requirements
NASA Technical Reports Server (NTRS)
Pate, D. W.
1996-01-01
Human Factors Engineering (HFE), also known as Ergonomics, is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. A human factors motion and time study was initiated with the goal of developing a database of EVA task times and a method of utilizing the database to predict how long an ExtraVehicular Activity (EVA) should take. Initial development relied on the EVA activities performed during the STS-61 mission (Hubble repair). The first step of the analysis was to become familiar with EVAs and with the previous studies and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task time modifiers was developed. Videotaped footage of STS-61 EVAs was analyzed using these primitives and task time modifiers. Data for two entire EVA missions and portions of several others, each with two EVA astronauts, were collected for analysis. Feedback from the analysis of the data will be used to further refine the primitives and task time modifiers used. Analysis of variance techniques for categorical data will be used to determine which factors may, individually or through interactions, affect the primitive times and how much of an effect they have.
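A minimal sketch of how such a database could drive a prediction: primitive baseline times are scaled by task time modifiers and summed. All names and numbers are hypothetical, not values from the STS-61 analysis.

```python
# Hedged sketch of EVA duration prediction from task primitives and
# multiplicative task time modifiers; every entry below is illustrative.
baseline_minutes = {           # hypothetical primitive -> baseline time
    "translate": 4.0,
    "setup_foot_restraint": 6.5,
    "mate_connector": 3.0,
}
modifiers = {                  # hypothetical multiplicative modifiers
    "night_pass": 1.2,
    "bulky_glove_work": 1.3,
}

def predict_eva_time(task_list):
    """task_list: sequence of (primitive, [modifier, ...]) tuples."""
    total = 0.0
    for primitive, mods in task_list:
        t = baseline_minutes[primitive]
        for m in mods:
            t *= modifiers[m]
        total += t
    return total

eva = [("translate", []), ("setup_foot_restraint", ["night_pass"]),
       ("mate_connector", ["bulky_glove_work"])]
print(round(predict_eva_time(eva), 1), "minutes")
```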
Database Design Learning: A Project-Based Approach Organized through a Course Management System
ERIC Educational Resources Information Center
Dominguez, Cesar; Jaime, Arturo
2010-01-01
This paper describes an active method for database design learning through practical tasks development by student teams in a face-to-face course. This method integrates project-based learning, and project management techniques and tools. Some scaffolding is provided at the beginning that forms a skeleton that adapts to a great variety of…
Trials and Triumphs of Expanded Extension Programs.
ERIC Educational Resources Information Center
Leavengood, Scott; Love, Bob
1998-01-01
Oregon extension faced challenges in presenting programs in the wood products industry. Several traditional tactics, revised to suit a new audience, have proved successful: personal coaching, building partnerships, and providing a high level of service. Newer methods, such as database marketing and distance learning, are also proving to be…
The Implications of Well-Formedness on Web-Based Educational Resources.
ERIC Educational Resources Information Center
Mohler, James L.
Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…
A possible extension to the RInChI as a means of providing machine readable process data.
Jacob, Philipp-Maximilian; Lan, Tian; Goodman, Jonathan M; Lapkin, Alexei A
2017-04-11
The algorithmic, large-scale use and analysis of reaction databases such as Reaxys is currently hindered by the absence of widely adopted standards for publishing reaction data in machine readable formats. Crucial data such as yields of all products or stoichiometry are frequently not explicitly stated in the published papers and, hence, not reported in the database entry for those reactions, limiting their usefulness for algorithmic analysis. This paper presents a possible extension to the IUPAC RInChI standard via an auxiliary layer, termed ProcAuxInfo, which is a standardised, extensible form in which to report certain key reaction parameters such as declaration of all products and reactants as well as auxiliaries known in the reaction, reaction stoichiometry, amounts of substances used, conversion, yield and operating conditions. The standard is demonstrated via creation of the RInChI including the ProcAuxInfo layer based on three published reactions and demonstrates accurate data recoverability via reverse translation of the created strings. Implementation of this or another method of reporting process data by the publishing community would ensure that databases, such as Reaxys, would be able to abstract crucial data for big data analysis of their contents.
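To illustrate the shape of such an auxiliary layer, the sketch below serializes process data as key-value pairs; the field names and syntax here are hypothetical stand-ins, not the ProcAuxInfo format as published.

```python
# Hedged sketch of assembling an auxiliary key-value layer for a reaction
# record. The "ProcAuxInfo=" syntax and field names are hypothetical.
def build_proc_aux_info(fields):
    """Serialize reaction process data as an ordered key=value layer."""
    parts = [f"{key}={value}" for key, value in sorted(fields.items())]
    return "ProcAuxInfo=" + ";".join(parts)

layer = build_proc_aux_info({
    "yield_product_1": "0.87",     # fractional yield of product 1
    "stoich_reactant_1": "1.0",
    "stoich_reactant_2": "2.0",
    "temperature_K": "298",
})
print(layer)   # in this sketch, appended after the RInChI string
```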
Automatic Extraction of JPF Options and Documentation
NASA Technical Reports Server (NTRS)
Luks, Wojciech; Tkachuk, Oksana; Buschnell, David
2011-01-01
Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF projects options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.
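A rough sketch of the extraction idea: scan Java sources for configuration look-ups and collect the option keys. The regex and path are illustrative; the real jpf-autodoc-options tool is more thorough.

```python
# Hedged sketch: collect JPF-style option keys from call shapes like
# config.getBoolean("listener.autoload") found in Java source files.
import re
from pathlib import Path

OPTION_CALL = re.compile(r'\bconfig\.get\w+\(\s*"([^"]+)"')

def extract_options(source_dir):
    options = set()
    for java_file in Path(source_dir).rglob("*.java"):
        text = java_file.read_text(errors="ignore")
        options.update(OPTION_CALL.findall(text))
    return sorted(options)

# print("\n".join(extract_options("jpf-core/src")))  # hypothetical path
```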
ERIC Educational Resources Information Center
Bava Harji, Madhubala; Gheitanchian, Mehrnaz
2017-01-01
Although Task-Based Language Teaching (TBLT) has been extensively researched, there appear to be few studies that focus on the effects of a multimedia technology (MT) enhanced TBLT approach on EFL development. A study was conducted to examine the effects of an MT-imbued TBLT, i.e. Multimedia Task-Based Teaching and Learning (MMTBLT) approach on…
Mulcahy, Nicholas J; Schubiger, Michèle N; Suddendorf, T
2013-02-01
Great apes appear to have limited knowledge of tool functionality when they are presented with tasks that involve a physical connection between a tool and a reward. For instance, they fail to understand that pulling a rope with a reward tied to its end is more beneficial than pulling a rope that only touches a reward. Apes show more success when both ropes have rewards tied to their ends but one rope is nonfunctional because it is clearly separated into aligned sections. It is unclear, however, whether this success is based on perceptual features unrelated to connectivity, such as perceiving the tool's separate sections as independent tools rather than one discontinuous tool. Surprisingly, there appears to be no study that has tested any type of connectivity problem using natural tools made from branches with which wild and captive apes often have extensive experience. It is possible that such ecologically valid tools may better help subjects understand connectivity that involves physical attachment. In this study, we tested orangutans with natural tools and a range of connectivity problems that involved the physical attachment of a reward on continuous and broken tools. We found that the orangutans understood tool connectivity involving physical attachment that apes from other studies failed when tested with similar tasks using artificial as opposed to natural tools. We found no evidence that the orangutans' success in broken tool conditions was based on perceptual features unrelated to connectivity. Our results suggest that artificial tools may limit apes' knowledge of connectivity involving physical attachment, whereas ecologically valid tools may have the opposite effect. PsycINFO Database Record (c) 2013 APA, all rights reserved
ERIC Educational Resources Information Center
Guzeller, Cem Oktay; Akin, Ayca
2014-01-01
The purpose of this study is to determine the power of ICT variables, including Internet/entertainment use (IEU), program/software use (PRGUSE), confidence in Internet tasks (INTCONF) and confidence in ICT high-level tasks (HIGHCONF), to predict mathematics achievement, based on PISA 2006 data. This study indicates that the ICT variables…
Improving older adults' memory performance using prior task success.
Geraci, Lisa; Miller, Tyler M
2013-06-01
Holding negative aging stereotypes can lead older adults to perform poorly on memory tests. We attempted to improve older adults' memory performance by giving them task experience that would counter their negative performance expectations. Before participating in a memory experiment, younger and older adults were given a cognitive task that they could complete successfully, a task that they could not complete successfully, or no prior task. For older adults, recall was significantly higher and self-reported anxiety significantly lower for the prior-task-success group relative to the other groups. There was no effect of prior task experience on younger adults' memory performance. Results suggest that older adults' memory can be improved with a single successful prior task experience. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Integrated Functional and Executional Modelling of Software Using Web-Based Databases
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Marietta, Roberta
1998-01-01
NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that the modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information-extraction tools, web technology and databases.
The Hubbard Brook Long Term Ecological Research site has produced some of the most extensive and long-running databases on the hydrology, biology and chemistry of forest ecosystem responses to climate and forest harvest. We used these long-term databases to calibrate and apply G...
Helping Patrons Find Locally Held Electronic Resources: An Interlibrary Loan Perspective
ERIC Educational Resources Information Center
Johnston, Pamela
2016-01-01
The University of North Texas Libraries provide extensive online access to academic journals through major vendor databases. As illustrated by interlibrary loan borrowing requests for items held in our databases, patrons often have difficulty navigating the available resources. In this study, the Interlibrary Loan staff used data gathered from the…
The BioExtract Server: a web-based bioinformatic workflow platform
Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.
2011-01-01
The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552
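As a loose illustration of the kind of step such a workflow records (querying NCBI, saving a data extract), here is a minimal sketch using Biopython's Bio.Entrez module. Biopython, the search term and the email address are my assumptions and are not part of BioExtract, which is used through its web interface.

```python
# Sketch of a programmatic NCBI query like those a workflow step might record.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

# Step 1: search the nucleotide database.
handle = Entrez.esearch(db="nucleotide",
                        term="Zea mays[Organism] AND waxy", retmax=1)
ids = Entrez.read(handle)["IdList"]
handle.close()

# Step 2: fetch the first hit as FASTA (a reusable "data extract").
handle = Entrez.efetch(db="nucleotide", id=ids[0],
                       rettype="fasta", retmode="text")
print(handle.read()[:200])
handle.close()
```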
Rapp, Adam A; Bachrach, Daniel G; Rapp, Tammy L
2013-07-01
In this research we integrate resource allocation and social exchange perspectives to build and test theory focusing on the moderating role of time management skill in the nonmonotonic relationship between organizational citizenship behavior (OCB) and task performance. Results from matching survey data collected from 212 employees and 41 supervisors and from task performance metrics collected several months later indicate that the curvilinear association between OCB and task performance is significantly moderated by employees' time management skill. Implications for theory and practice are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Seismic Calibration of Group 1 IMS Stations in Eastern Asia for Improved IDC Event Location
2006-04-01
database has been assembled and delivered to the SMR (formerly CMR) Research and Development Support Services (RDSS) data archive. This database ...Data used in these tomographic inversions have been collected into a uniform database and delivered to the RDSS at the SMR. Extensive testing of these...complex 3-D velocity models is based on a finite difference approximation to the eikonal equation developed by Podvin and Lecomte (1991) and
An effective model for store and retrieve big health data in cloud computing.
Goli-Malekabadi, Zohreh; Sargolzaei-Javan, Morteza; Akbari, Mohammad Kazem
2016-08-01
The volume of healthcare data, including varied text types, sounds, and images, is increasing daily, so storing and processing these data is a necessary and challenging task. Relational databases are generally used to store health data but cannot handle their massive and diverse nature. This study presents a model based on NoSQL databases for the storage of healthcare data. Among the different types of NoSQL databases, document-based DBs were selected based on a survey of the nature of health data. The presented model was implemented in a Cloud environment to exploit its distribution properties, and the data were distributed across the database through sharding. The efficiency of the model was evaluated against the previous data model, a relational database, with respect to query time, data preparation, flexibility, and extensibility. The results showed that the presented model performed approximately the same as SQL Server for "read" queries while acting more efficiently than SQL Server for "write" queries. The presented model also outperformed SQL Server in flexibility, data preparation, and extensibility. Based on these observations, the proposed model is more effective than relational databases for handling health data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
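A minimal sketch of the document-oriented approach, assuming MongoDB via pymongo as a representative engine; the record layout and shard commands are illustrative, not the paper's exact setup.

```python
# Representative document-store sketch; the schema below is hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
visits = client.healthdb.visits

# Document stores accept heterogeneous records: notes, image references and
# signal metadata can live in one collection without a fixed schema.
visits.insert_one({
    "patient_id": "P-1042",
    "note": "Follow-up after knee arthroplasty.",
    "images": [{"type": "xray", "uri": "gridfs://..."}],  # placeholder URI
    "vitals": {"hr": 72, "bp": "120/80"},
})

for doc in visits.find({"vitals.hr": {"$gt": 70}}):
    print(doc["patient_id"])

# On a sharded cluster (connected through mongos), the collection could be
# distributed by a shard key, e.g.:
# client.admin.command("enableSharding", "healthdb")
# client.admin.command("shardCollection", "healthdb.visits",
#                      key={"patient_id": "hashed"})
```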
Steinhauser, Marco; Hübner, Ronald
2009-10-01
It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
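For readers unfamiliar with the technique, here is a minimal sketch of an ex-Gaussian fit using SciPy's exponnorm distribution (parameterised by K = tau/sigma) on synthetic response times; the numbers are invented, not the study's data.

```python
# Ex-Gaussian decomposition of response times with SciPy's exponnorm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated RTs: Gaussian part (mu=500 ms, sigma=50) plus exponential tail (tau=150).
rts = rng.normal(500, 50, size=2000) + rng.exponential(150, size=2000)

K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu={mu:.1f} ms  sigma={sigma:.1f} ms  tau={tau:.1f} ms")
# Condition effects on mu/sigma (Gaussian) vs. tau (exponential) would then
# be compared to separate response conflict from task conflict.
```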
Mehryary, Farrokh; Kaewphan, Suwisa; Hakala, Kai; Ginter, Filip
2016-01-01
Biomedical event extraction is one of the key tasks in biomedical text mining, supporting various applications such as database curation and hypothesis generation. Several systems, some of which have been applied at a large scale, have been introduced to solve this task. Past studies have shown that the identification of the phrases describing biological processes, also known as trigger detection, is a crucial part of event extraction, and notable overall performance gains can be obtained by focusing solely on this sub-task. In this paper we propose a novel approach for filtering falsely identified triggers from large-scale event databases, thus improving the quality of knowledge extraction. Our method relies on state-of-the-art word embeddings, event statistics gathered from the whole biomedical literature, and both supervised and unsupervised machine learning techniques. We focus on EVEX, an event database covering the whole PubMed and PubMed Central Open Access literature and containing more than 40 million extracted events. The most frequent EVEX trigger words are hierarchically clustered, and the resulting cluster tree is pruned to identify words that can never act as triggers regardless of their context. For rarely occurring trigger words we introduce a supervised approach trained on the combination of trigger-word classifications produced by the unsupervised clustering method and manual annotation. The method is evaluated on the official test set of the BioNLP Shared Task on Event Extraction. The evaluation shows that the method can be used to improve the performance of state-of-the-art event extraction systems. This successful effort also translates into removing 1,338,075 potentially incorrect events from EVEX, thus greatly improving the quality of the data. The method is not bound solely to the EVEX resource and can thus be used to improve the quality of any event extraction system or database. The data and source code for this work are available at: http://bionlp-www.utu.fi/trigger-clustering/.
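A minimal sketch of the unsupervised part of such a pipeline, with random vectors standing in for the word embeddings and hypothetical trigger words; the authors' actual clustering and pruning criteria are richer.

```python
# Hierarchical clustering of trigger-word vectors, then naive pruning.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

words = ["expression", "binding", "protein", "however", "thus", "regulation"]
vecs = np.random.default_rng(1).normal(size=(len(words), 50))  # embedding stand-ins

Z = linkage(vecs, method="average", metric="cosine")
labels = fcluster(Z, t=0.8, criterion="distance")

# A cluster whose members are all known non-triggers ("however", "thus")
# could be pruned wholesale; real decisions would use corpus statistics.
for cluster_id in sorted(set(labels)):
    members = [w for w, c in zip(words, labels) if c == cluster_id]
    print(cluster_id, members)
```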
NASA Technical Reports Server (NTRS)
1993-01-01
All the options in the NASA VEGetation Workbench (VEG) make use of a database of historical cover types. This database contains results from experiments by scientists on a wide variety of different cover types. The learning system uses the database to provide positive and negative training examples of classes that enable it to learn distinguishing features between classes of vegetation. All the other VEG options use the database to estimate the error bounds involved in the results obtained when various analysis techniques are applied to the sample of cover type data that is being studied. In the previous version of VEG, the historical cover type database was stored as part of the VEG knowledge base. This database was removed from the knowledge base. It is now stored as a series of flat files that are external to VEG. An interface between VEG and these files was provided. The interface allows the user to select which files of historical data to use. The files are then read, and the data are stored in Knowledge Engineering Environment (KEE) units using the same organization of units as in the previous version of VEG. The interface also allows the user to delete some or all of the historical database units from VEG and load new historical data from a file. This report summarizes the use of the historical cover type database in VEG. It then describes the new interface to the files containing the historical data. It describes minor changes that were made to VEG to enable the externally stored database to be used. Test runs to test the operation of the new interface and also to test the operation of VEG using historical data loaded from external files are described. Task F was completed. A Sun cartridge tape containing the KEE and Common Lisp code for the new interface and the modified version of the VEG knowledge base was delivered to the NASA GSFC technical representative.
XML technology planning database : lessons learned
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Neff, Jon M.
2005-01-01
A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.
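Since the XCALIBR schema is not reproduced here, the following sketch invents element names solely to illustrate how a shared dictionary term can relate mission requirements to technology capabilities.

```python
# Hypothetical XCALIBR-style dictionary lookup; element names are invented.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<xcalibr>
  <requirement mission="DeepSpaceProbe" term="autonomy.faultRecovery" level="3"/>
  <capability technology="OnboardPlanner" term="autonomy.faultRecovery" level="4"/>
  <capability technology="WatchdogTimer"  term="autonomy.faultRecovery" level="1"/>
</xcalibr>
""")

# For each requirement, find technologies whose capability level meets it.
for req in doc.findall("requirement"):
    matches = [
        c.get("technology") for c in doc.findall("capability")
        if c.get("term") == req.get("term")
        and int(c.get("level")) >= int(req.get("level"))
    ]
    print(req.get("mission"), "->", matches)
```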
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
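The paper's task-prioritised formulation is richer than this, but a standard damped least-squares step conveys the core trade-off of bounding joint velocities near singularities at the cost of small task errors; the Jacobian and damping gain below are illustrative.

```python
# Damped least-squares (singularity-robust) velocity resolution in NumPy.
import numpy as np

def dls_velocity(J, xdot, damping=0.1):
    """Joint velocities qdot = J^T (J J^T + lambda^2 I)^-1 xdot."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), xdot)

# Near-singular Jacobian: the two task rows are almost linearly dependent.
J = np.array([[1.0, 1.0],
              [1.0, 1.001]])
xdot = np.array([0.1, 0.2])

print(np.linalg.solve(J, xdot))  # exact inverse: huge joint rates
print(dls_velocity(J, xdot))     # damped: bounded rates, small task error
```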
Tripal: a construction toolkit for online genome databases.
Ficklin, Stephen P; Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E; Main, Doreen
2011-01-01
As the availability, affordability and magnitude of genomics and genetics research increases so does the need to provide online access to resulting data and analyses. Availability of a tailored online database is the desire for many investigators or research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net.
A Contingency Model of Conflict and Team Effectiveness
ERIC Educational Resources Information Center
Shaw, Jason D.; Zhu, Jing; Duffy, Michelle K.; Scott, Kristin L.; Shih, Hsi-An; Susanto, Ely
2011-01-01
The authors develop and test theoretical extensions of the relationships of task conflict, relationship conflict, and 2 dimensions of team effectiveness (performance and team-member satisfaction) among 2 samples of work teams in Taiwan and Indonesia. Findings show that relationship conflict moderates the task conflict-team performance…
A Mathematical Mystery Tour: Higher-Thinking Math Tasks.
ERIC Educational Resources Information Center
Wahl, Mark
This book contains mathematics activities based upon the concepts of Fibonacci numbers and the Golden Ratio. The activities include higher order thinking skills, calculation practice, integration with different subject areas, mathematics history, extensions and home tasks, teaching notes, and questions for thought and comprehension. A visual map…
Manganese Research Health Project (MHRP)
2006-01-01
ultrafine particles (or nanoparticles) on health (e.g. Royal Society 2004) and the apparent potential for translocation of these particles along the...evaluate the usefulness of particle counting methods (CPC) in assessing exposure to ultrafine particles in manganese production scenarios. Task 4. Database...R, Kreyling W, Cox C (2004). Translocation of Inhaled Ultrafine Particles to the Brain. Inhalation toxicology; 16:437 - 445 Ritchie P, Cherrie J
SPIRE Data-Base Management System
NASA Technical Reports Server (NTRS)
Fuechsel, C. F.
1984-01-01
Spacelab Payload Integration and Rocket Experiment (SPIRE) data-base management system (DBMS) is based on the relational model of data bases. The data bases are typically used for engineering and mission analysis tasks and, unlike most commercially available systems, allow data items and data structures to be stored in forms suitable for direct analytical computation. SPIRE DBMS is designed to support data requests from interactive users as well as applications programs.
NASA Astrophysics Data System (ADS)
Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.
2018-01-01
The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed at JIHT RAS. It includes a database of thermodynamic properties of individual substances and related software for analysis of experimental results, data fitting, and calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.
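IVTANTHERMO stores its own fitted thermodynamic functions, so as a stand-in, the sketch below evaluates heat capacity from the widely used NASA 7-coefficient polynomial; the coefficients are illustrative values of roughly N2-like magnitude, not data taken from the database.

```python
# Heat capacity from a NASA 7-term polynomial, standing in for the database's
# own fitted functions.
R = 8.314462618  # J/(mol K)

def cp(T, a):
    """Molar Cp(T); coefficients a6, a7 enter enthalpy and entropy only."""
    return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

a_n2_like = [3.531, -1.237e-4, -5.030e-7, 2.435e-9, -1.409e-12, 0.0, 0.0]
for T in (300.0, 500.0, 1000.0):
    print(f"T = {T:6.1f} K   Cp = {cp(T, a_n2_like):.2f} J/(mol K)")
```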
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancey, P.; Logg, C.
DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.
Graph Learning in Knowledge Bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Sean; Wang, Daisy Zhe
The amount of text data has been growing exponentially in recent years, giving rise to automatic information extraction methods that store text annotations in a database. The current state-of-the-art structured prediction methods, however, are likely to contain errors, and it is important to be able to manage the overall uncertainty of the database. On the other hand, the advent of crowdsourcing has enabled humans to aid machine algorithms at scale. As part of this project we introduced pi-CASTLE, a system that optimizes and integrates human and machine computing as applied to a complex structured prediction problem involving conditional random fields (CRFs). We proposed strategies grounded in information theory to select a token subset, formulate questions for the crowd to label, and integrate these labelings back into the database using a method of constrained inference. On both a text segmentation task over academic citations and a named entity recognition task over tweets we showed an order of magnitude improvement in accuracy gain over baseline methods.
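A minimal sketch of the information-theoretic selection idea: rank tokens by the entropy of their (here invented) CRF marginal label distributions and send the most uncertain ones to the crowd. pi-CASTLE's actual selection and constrained re-integration are more involved.

```python
# Entropy-based selection of tokens for crowd labeling; marginals are made up.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + eps))

# Per-token marginal distributions over 3 labels (hypothetical values).
marginals = {
    "Smith": [0.95, 0.03, 0.02],
    "2004":  [0.50, 0.45, 0.05],
    "Proc.": [0.34, 0.33, 0.33],
}

k = 2  # crowd budget
ranked = sorted(marginals, key=lambda t: entropy(marginals[t]), reverse=True)
print("ask the crowd about:", ranked[:k])
```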
FJET Database Project: Extract, Transform, and Load
NASA Technical Reports Server (NTRS)
Samms, Kevin O.
2015-01-01
The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it is related to gathering FJET test data into a Microsoft SQL relational database, and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
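A minimal ETL sketch in the spirit described, using pyodbc against SQL Server; the table, columns and CSV layout are hypothetical, not the FJET schema.

```python
# Extract rows from a CSV, apply a trivial transform, load into SQL Server.
import csv
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbhost;DATABASE=fjet;Trusted_Connection=yes;"
)
cur = conn.cursor()

with open("test_runs.csv", newline="") as f:
    rows = [
        (r["run_id"], r["joint_type"], float(r["peak_pressure_psi"]))
        for r in csv.DictReader(f)  # transform: cast pressure to float
    ]

cur.executemany(
    "INSERT INTO TestRun (RunId, JointType, PeakPressurePsi) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
conn.close()
```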
The data operation centre tool. Architecture and population strategies
NASA Astrophysics Data System (ADS)
Dal Pra, Stefano; Crescente, Alberto
2012-12-01
Keeping track of the layout of the IT resources in a large datacenter is a complex task. DOCET is a database-backed web tool designed and implemented at INFN. It aims to provide a uniform interface for managing and retrieving information about one or more datacenters, such as the available hardware and software and their status. Having a suitable application is, however, of little use until most of the information about the centre has been inserted, and manually inserting it all from scratch is unfeasible. After describing DOCET's high-level architecture, its main features and its current development track, we present and discuss the work done to populate the DOCET database for the INFN-T1 site by retrieving information from a heterogeneous variety of authoritative sources, such as DNS, DHCP, and Quattor host profiles. We then describe the work being done to integrate DOCET with common management operations, such as adding a newly installed host to DHCP and DNS, or creating a suitable Quattor profile template for it.
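As an illustration of one such population strategy, the sketch below harvests hostnames for a subnet via reverse DNS and emits inventory-ready records; the subnet and record layout are hypothetical.

```python
# Populate inventory records from reverse DNS for a (hypothetical) subnet.
import ipaddress
import socket

records = []
for ip in ipaddress.ip_network("192.0.2.0/28").hosts():
    try:
        name = socket.gethostbyaddr(str(ip))[0]
    except OSError:
        continue  # no PTR record: host unknown to DNS
    records.append({"ip": str(ip), "hostname": name, "source": "dns"})

print(records)
# DHCP leases, Quattor profiles, etc. would be merged the same way, with the
# authoritative source recorded per field.
```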
ERIC Educational Resources Information Center
Chang, May
2000-01-01
Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…
"Social Work Abstracts" Fails Again: A Replication and Extension
ERIC Educational Resources Information Center
Holden, Gary; Barker, Kathleen; Covert-Vail, Lucinda; Rosenberg, Gary; Cohen, Stephanie A.
2009-01-01
Objective: According to a prior study, there are substantial lapses in journal coverage in the "Social Work Abstracts" (SWA) database. The current study provides a replication and extension. Method: The longitudinal pattern of coverage of thirty-three journals categorized in SWA as core journals (published in the 1989-1996 period) is examined.…
Uses and limitations of registry and academic databases.
Williams, William G
2010-01-01
A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected subset of patients). A registry and an academic database have different purposes and cost. The data to be collected for a database is defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care, an Academic Database, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Re-thinking the role of the dorsal striatum in egocentric/response strategy.
Botreau, Fanny; Gisquet-Verrier, Pascale
2010-01-01
Rats trained in a dual-solution cross-maze task, which can be solved by place and response strategies, predominantly used a response strategy after extensive training. This paper examines the involvement of the medial and lateral dorsal striatum (mDS and lDS) in the choice of these strategies after partial and extensive training. Our results show that rats with lDS and mDS lesions used mainly a response strategy from the early phase of training. We replicated these unexpected data in rats with lDS lesions and confirmed their tendency to use the response strategy in a modified cross-maze task. When trained in a dual-solution water-maze task, however, control and lesioned rats consistently used a place strategy, demonstrating that lDS and mDS lesioned rats can use a place strategy and that the shift towards a response strategy did not systematically result from extensive training. The present data did not show any clear dissociation between the mDS and lDS in dual solution tasks. They further indicate that the dorsal striatum seems to determine the strategies adopted in a particular context but cannot be considered as a neural support for the response memory system. Accordingly, the role of the lateral and medial part of the dorsal striatum in egocentric/response memory should be reconsidered.
Kato, Kouki; Kanosue, Kazuyuki
2016-10-28
We investigated the effects of foot muscle relaxation and contraction on muscle activities in the hand on both the ipsilateral and contralateral sides. The subjects sat in an armchair with their hands in the pronated position and were able to move the right/left hand and foot freely. They performed three tasks for both ipsilateral (right hand and right foot) and contralateral (left hand and right foot) limb coordination, for a total of six tasks. These tasks involved: (1) wrist extension from a flexed (resting) position, (2) wrist extension with simultaneous ankle dorsiflexion from a plantarflexed (resting) position, and (3) wrist extension with simultaneous ankle relaxation from a dorsiflexed position. The subjects performed each task as fast as possible after hearing the start signal. The reaction time of the wrist extensor contraction, as observed in the electromyogram (EMG), became longer when extension was performed concurrently with relaxation of the ankle dorsiflexor. Also, the magnitude of EMG activity became smaller compared with activity when wrist extensor contraction was done alone or with contraction of the ankle dorsiflexor. These effects were observed not only for the ipsilateral hand, but also for the contralateral hand. Our findings suggest that muscle relaxation in one limb interferes with muscle contraction in both the ipsilateral and contralateral limbs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Wound closure in flexion versus extension following total knee arthroplasty: a systematic review.
Smith, Toby O; Davies, Leigh; Hing, Caroline B
2010-06-01
Optimising knee range of motion following total knee arthroplasty (TKA) is important for patient satisfaction, functional outcome and early rehabilitation to promote accelerated discharge. Historically, wound closure following TKA has been performed in extension. It has been suggested that knee position during wound closure may influence range of motion and clinical outcomes following TKA. The purpose of this study was to determine whether TKA wounds should be closed in flexion or extension. An electronic search of MEDLINE, EMBASE, CINAHL and AMED databases was made in addition to a review of unpublished material. All included papers were critically appraised using a modified PEDro (Physiotherapy Evidence Database) critical appraisal tool. Three papers were eligible, assessing 237 TKAs. On analysis, patients with TKA wounds closed in flexion had greater flexion range of motion and required less domiciliary physiotherapy compared to those with wounds closed in full extension. The specific degree of knee flexion used when closing total knee replacement wounds may be an important variable to clinical outcome. However, the present evidence-base is limited in both size and methodological quality.
Response to Vogelstein: How the 2012 AAP Task Force on circumcision went wrong.
Van Howe, Robert S
2018-01-01
Vogelstein cautions medical organizations against jumping into the fray of controversial issues, yet proffers the 2012 American Academy of Pediatrics' Task Force policy position on infant male circumcision as 'an appropriate use of position-statements.' Only a scratch below the surface of this policy statement uncovers the Task Force's failure to consider Vogelstein's many caveats. The Task Force supported the cultural practice by putting undeserved emphasis on questionable scientific data, while ignoring or underplaying the importance of valid contrary scientific data. Without any effort to quantitatively assess the risk/benefit balance, the Task Force concluded the benefits of circumcision outweighed the risks, while acknowledging that the incidence of risks was unknown. This Task Force differed from other Academy policy-forming panels by ignoring the Academy's standard quality measures and by not appointing members with extensive research experience, extensive publications, or recognized expertise directly related to this topic. Despite nearly 100 publications available at the time addressing the substantial ethical issues associated with infant male circumcision, the Task Force chose to ignore the ethical controversy. They merely stated, with minimal justification, the opinion of one of the Task Force members that the practice of infant male circumcision is morally permissible. The release of the report has fostered an explosion of academic discussion on the ethics of infant male circumcision with a number of national medical organizations now decrying the practice as a human rights violation. © 2017 John Wiley & Sons Ltd.
RoadPlex: A Mobile VGI Game to Collect and Validate Data for POIs
NASA Astrophysics Data System (ADS)
Kashian, A.; Rajabifard, A.; Richter, K. F.
2014-11-01
With the increasing popularity of smartphones equipped with GPS sensors, more volunteers are expected to join VGI (Volunteered Geographic Information) activities, and therefore more positional data will be collected in less time. Current statistics from open databases such as OpenStreetMap reveal that although there has been exponential growth in the number of contributed POIs (Points of Interest), the lack of detailed attribute information is immediately visible. Adding attribute information to VGI databases is usually considered a tedious task, and contributors are believed not to experience the same level of satisfaction as when adding new roads or tracing building boundaries from satellite imagery. Other crowdsourcing projects engage contributors in problem solving by embedding tasks inside a game; in the literature, this concept is known as "gamification" or "games with a purpose", encapsulating the idea of entertaining contributors while they complete a defined task. The same concept was used to design a mobile application called "RoadPlex", which aims to collect general or specific attribute information for POIs. The increased number of contributions in the past few months confirms that the design characteristics and methodology of the game are appealing to players. Such growth enables us to evaluate the quality of the generated data by mining the database of answered questions. This paper reports some contribution results and emphasises the importance of using the gamification concept in the domain of VGI.
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time-consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
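A minimal sketch of the three-way metajoin decision, assuming scikit-learn for the logistic model; the pair features, training labels and thresholds are invented for illustration.

```python
# Logistic model scores record pairs; two thresholds split them into
# match, non-match, or human review.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pair features: [name similarity, address similarity, phone match]
X = np.array([[0.95, 0.90, 1], [0.20, 0.10, 0], [0.80, 0.30, 0],
              [0.99, 0.95, 1], [0.15, 0.40, 0], [0.70, 0.60, 1]])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = same entity

model = LogisticRegression().fit(X, y)

LOW, HIGH = 0.2, 0.8  # illustrative decision thresholds
for pair, p in zip(X, model.predict_proba(X)[:, 1]):
    verdict = "match" if p >= HIGH else "non-match" if p <= LOW else "review"
    print(pair, round(p, 2), verdict)
```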
Centralized database for interconnection system design. [for spacecraft
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
A database application called DFACS (Database, Forms and Applications for Cabling and Systems) is described. The objective of DFACS is to improve the speed and accuracy of interconnection system information flow during the design and fabrication stages of a project, while simultaneously supporting both the horizontal (end-to-end wiring) and the vertical (wiring by connector) design stratagems used by the Jet Propulsion Laboratory (JPL) project engineering community. The DFACS architecture is centered around a centralized database and program methodology which emulates the manual design process hitherto used at JPL. DFACS has been tested and successfully applied to existing JPL hardware tasks with a resulting reduction in schedule time and costs.
DNAtraffic--a new database for systems biology of DNA dynamics during the cell life.
Kuchta, Krzysztof; Barszcz, Daniela; Grzesiuk, Elzbieta; Pomorski, Pawel; Krwawicz, Joanna
2012-01-01
DNAtraffic (http://dnatraffic.ibb.waw.pl/) is intended to be a unique, comprehensive and richly annotated database of genome dynamics during the cell life. It contains extensive data on the nomenclature, ontology, structure and function of proteins related to DNA integrity mechanisms such as chromatin remodeling, histone modifications, DNA repair and damage response from eight organisms: Homo sapiens, Mus musculus, Drosophila melanogaster, Caenorhabditis elegans, Saccharomyces cerevisiae, Schizosaccharomyces pombe, Escherichia coli and Arabidopsis thaliana. DNAtraffic contains comprehensive information on the diseases related to the assembled human proteins. DNAtraffic is richly annotated with systemic information on the nomenclature, chemistry and structure of DNA damage and its sources, including environmental agents and commonly used drugs targeting nucleic acids and/or proteins involved in the maintenance of genome stability. One aim of the DNAtraffic database is to create the first platform for analysis of the combinatorial complexity of DNA networks. The database includes illustrations of pathways, damage, proteins and drugs. Since DNAtraffic is designed to cover a broad spectrum of scientific disciplines, it has to be extensively linked to numerous external data sources. Our database represents the result of manual annotation work aimed at making DNAtraffic much more useful for a wide range of systems biology applications.
Using Real Colors to Transform Organizational Culture
ERIC Educational Resources Information Center
Roback, Paul
2017-01-01
Extension educators are frequently tasked with strengthening organizations they collaborate with or provide education to. When a county government in Wisconsin experienced significant personnel changes in a span of less than 18 months, department heads contacted Extension to request professional development and team-building education for their…
Proprioceptive Interaction between the Two Arms in a Single-Arm Pointing Task.
Kigawa, Kazuyoshi; Izumizaki, Masahiko; Tsukada, Setsuro; Hakuta, Naoyuki
2015-01-01
Proprioceptive signals coming from both arms are used to determine the perceived position of one arm in a two-arm matching task. Here, we examined whether the perceived position of one arm is affected by proprioceptive signals from the other arm in a one-arm pointing task in which participants specified the perceived position of an unseen reference arm with an indicator paddle. Both arms were hidden from the participant's view throughout the study. In Experiment 1, with both arms placed in front of the body, the participants received 70-80 Hz vibration to the elbow flexors of the reference arm (= right arm) to induce the illusion of elbow extension. This extension illusion was compared with that when the left arm elbow flexors were vibrated or not. The degree of the vibration-induced extension illusion of the right arm was reduced in the presence of left arm vibration. In Experiment 2, we found that this kinesthetic interaction between the two arms did not occur when the left arm was vibrated in an abducted position. In Experiment 3, the vibration-induced extension illusion of one arm was fully developed when this arm was placed at an abducted position, indicating that the brain receives increased proprioceptive input from a vibrated arm even if the arm was abducted. Our results suggest that proprioceptive interaction between the two arms occurs in a one-arm pointing task when the two arms are aligned with one another. The position sense of one arm measured using a pointer appears to include the influences of incoming information from the other arm when both arms were placed in front of the body and parallel to one another.
ERIC Educational Resources Information Center
Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar
2017-01-01
Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…
This paper presents a summary of the findings of a report prepared by Task Force 1 of the UNEP/SETAC Life Cycle Initiative on the available Life Cycle Inventory (LCI) databases around the world. An update of a previous summary prepared in May 2002 by Norris and Notten, the repor...
SQL/NF Translator for the Triton Nested Relational Database System
1990-12-01
AFIT/GCE/ENG/90D-05. SQL/NF Translator for the Triton Nested Relational Database System. Thesis by Craig William Schnepf, Captain, presented to the Faculty of the School of Engineering of the Air Force Institute of Technology. The SQL/NF query language used for the nested relational model is an extension of the popular relational model query language SQL. The query…
Toward a public analysis database for LHC new physics searches using MadAnalysis 5
NASA Astrophysics Data System (ADS)
Dumont, B.; Fuks, B.; Kraml, S.; Bein, S.; Chalons, G.; Conte, E.; Kulkarni, S.; Sengupta, D.; Wymant, C.
2015-02-01
We present the implementation, in the MadAnalysis 5 framework, of several ATLAS and CMS searches for supersymmetry in data recorded during the first run of the LHC. We provide extensive details on the validation of our implementations and propose to create a public analysis database within this framework.
Language, Thought, and Real Nouns
ERIC Educational Resources Information Center
Barner, David; Inagaki, Shunji; Li, Peggy
2009-01-01
We test the claim that acquiring a mass-count language, like English, causes speakers to think differently about entities in the world, relative to speakers of classifier languages like Japanese. We use three tasks to assess this claim: object-substance rating, quantity judgment, and word extension. Using the first two tasks, we present evidence…
Sequential Dependencies in Driving
ERIC Educational Resources Information Center
Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.
2012-01-01
The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…
Toward a Cognitive Task Analysis for Biomedical Query Mediation
Hruby, Gregory W.; Cimino, James J.; Patel, Vimla; Weng, Chunhua
2014-01-01
In many institutions, data analysts use a Biomedical Query Mediation (BQM) process to facilitate data access for medical researchers. However, understanding of the BQM process is limited in the literature. To bridge this gap, we performed the initial steps of a cognitive task analysis using 31 BQM instances conducted between one analyst and 22 researchers in one academic department. We identified five top-level tasks, i.e., clarify research statement, explain clinical process, identify related data elements, locate EHR data element, and end BQM with either a database query or unmet, infeasible information needs, and 10 sub-tasks. We evaluated the BQM task model with seven data analysts from different clinical research institutions. Evaluators found all the tasks completely or semi-valid. This study contributes initial knowledge towards the development of a generalizable cognitive task representation for BQM. PMID:25954589
NASA Astrophysics Data System (ADS)
Holmes, N. G.; Wieman, Carl E.
2016-12-01
While the positive outcomes of undergraduate research experiences (UREs) have been extensively categorized, the mechanisms for those outcomes are less understood. Through lightly structured focus group interviews, we have extracted the cognitive tasks that students identify as engaging in during their UREs. We also use their many comparative statements about their coursework, especially lab courses, to evaluate their experimental physics-related cognitive tasks in those environments. We find there are a number of cognitive tasks consistently encountered in physics UREs that are present in most experimental research. These are seldom encountered in lab or lecture courses, with some notable exceptions. Having time to reflect and fix or revise, and having a sense of autonomy, were both repeatedly cited as key enablers of the benefits of UREs. We also identify tasks encountered in actual experimental research that are not encountered in UREs. We use these findings to identify opportunities for better integration of the cognitive tasks in UREs and lab courses, as well as discussing the barriers that exist. This work responds to extensive calls for science education to better develop students' scientific skills and practices, as well as calls to expose more students to scientific research.
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype that supports health care personnel in retrieving patient referral multimedia data. The information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet and transformed via eXtensible Stylesheet Language (XSL) for uniform visualization in standard Web browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and well suited to crafting a dynamic Web environment.
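A minimal sketch of the XML-to-browser path described, applying an XSL stylesheet with lxml; the element names and stylesheet are invented, and the platform itself performed this step server-side in PHP.

```python
# Transform a referral record to HTML with an XSL stylesheet (lxml).
from lxml import etree

record = etree.XML(
    "<referral><patient>P-17</patient><image uri='img/scan01.png'/></referral>"
)
style = etree.XML("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/referral">
    <html><body>
      <h1>Referral for <xsl:value-of select="patient"/></h1>
      <img><xsl:attribute name="src">
        <xsl:value-of select="image/@uri"/>
      </xsl:attribute></img>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

print(str(etree.XSLT(style)(record)))  # serialized HTML for the browser
```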
Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV).
Lefkowitz, Elliot J; Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B
2018-01-04
The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Tensoral for post-processing users and simulation authors
NASA Technical Reports Server (NTRS)
Dresselhaus, Eliot
1993-01-01
The CTR post-processing effort aims to make turbulence simulations and data more readily and usefully available to the research and industrial communities. The Tensoral language, which provides the foundation for this effort, is introduced here in the form of a user's guide. The Tensoral user's guide is presented in two main sections. Section one acts as a general introduction and guides database users who wish to post-process simulation databases. Section two gives a brief description of how database authors and other advanced users can make simulation codes and/or the databases they generate available to the user community via Tensoral database back ends. The two-part structure of this document conforms to the two-level design structure of the Tensoral language. Tensoral has been designed to be a general computer language for performing tensor calculus and statistics on numerical data. Tensoral's generality allows it to be used for stand-alone native coding of high-level post-processing tasks (as described in section one of this guide). At the same time, Tensoral's specialization to a minute task (namely, to numerical tensor calculus and statistics) allows it to be easily embedded into applications written partly in Tensoral and partly in other computer languages (here, C and Vectoral). Embedded Tensoral, aimed at advanced users for more general coding (e.g. of efficient simulations, for interfacing with pre-existing software, for visualization, etc.), is described in section two of this guide.
The Design and Analysis of a Network Interface for the Multi-Lingual Database System.
1985-12-01
…the multi-backend database system (MBDS). In this section, we provide an overview of both the MLDS and the MBDS to enhance the reader's understanding of the…
Digital images in the map revision process
NASA Astrophysics Data System (ADS)
Newby, P. R. T.
Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.
YMDB 2.0: a significantly expanded version of the yeast metabolome database.
Ramirez-Gaona, Miguel; Marcu, Ana; Pon, Allison; Guo, An Chi; Sajed, Tanvir; Wishart, Noah A; Karu, Naama; Djoumbou Feunang, Yannick; Arndt, David; Wishart, David S
2017-01-04
YMDB or the Yeast Metabolome Database (http://www.ymdb.ca/) is a comprehensive database containing extensive information on the genome and metabolome of Saccharomyces cerevisiae. Initially released in 2012, the YMDB has gone through a significant expansion and a number of improvements over the past 4 years. This manuscript describes the most recent version of YMDB (YMDB 2.0). More specifically, it provides an updated description of the database that was previously described in the 2012 NAR Database Issue and it details many of the additions and improvements made to the YMDB over that time. Some of the most important changes include a 7-fold increase in the number of compounds in the database (from 2007 to 16 042), a 430-fold increase in the number of metabolic and signaling pathway diagrams (from 66 to 28 734), a 16-fold increase in the number of compounds linked to pathways (from 742 to 12 733), a 17-fold increase in the number of compounds with nuclear magnetic resonance or MS spectra (from 783 to 13 173) and an increase in both the number of data fields and the number of links to external databases. In addition to these database expansions, a number of improvements to YMDB's web interface and its data visualization tools have been made. These additions and improvements should greatly improve the ease, the speed and the quantity of data that can be extracted, searched or viewed within YMDB. Overall, we believe these improvements should not only improve the understanding of the metabolism of S. cerevisiae, but also allow more in-depth exploration of its extensive metabolic networks, signaling pathways and biochemistry. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Great Basin paleontological database
Zhang, N.; Blodgett, R.B.; Hofstra, A.H.
2008-01-01
The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort, recently completed, was to compile a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort is provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.
Organizational Design within University Extension Units: Some Concepts, Options, and Guidelines
ERIC Educational Resources Information Center
Baker, Harold R.
1976-01-01
Drawing on the behavioral sciences, the author outlines alternative modes of structuring and organizing an extension unit. The advantages and disadvantages of several organizational design options, the purposes and management of the temporary task force, and some general guidelines for making organizational design decisions are discussed.…
NASA Technical Reports Server (NTRS)
1979-01-01
A plan for the production of two PEP flight systems is defined. The task's milestones are described. Provisions for the development and assembly of new ground support equipment required for both testing and launch operations are included.
ERIC Educational Resources Information Center
Bradley, Lucy K.; Cook, Jonneen; Cook, Chris
2011-01-01
North Carolina State University has incorporated many aspects of volunteer program administration and reporting into an on-line solution that integrates impact reporting into daily program management. The Extension Master Gardener Intranet automates many of the administrative tasks associated with volunteer management, increasing efficiency, and…
Developing Effective Extension Agents: Experience Concerns.
ERIC Educational Resources Information Center
Goddu, Roland
This paper is a description of the requirements placed on persons selected to fill the role of extension agents for the purpose of penetrating an educational environment, installing change in an educational organization, and completing tasks as a resource outside of the education establishment. These experience concerns are summarized by…
Buchler, Norbou G; Hoyer, William J; Cerella, John
2008-06-01
Task-switching performance was assessed in young and older adults as a function of the number of task sets to be actively maintained in memory (varied from 1 to 4) over the course of extended training (5 days). Each of the four tasks required the execution of a simple computational algorithm, which was instantaneously cued by the color of the two-digit stimulus. Tasks were presented in pure (task set size 1) and mixed blocks (task set sizes 2, 3, 4), and the task sequence was unpredictable. By considering task switching beyond two tasks, we found evidence for a cognitive control system that is not overwhelmed by task set size load manipulations. Extended training eliminated age effects in task-switching performance, even when the participants had to manage the execution of up to four tasks. The results are discussed in terms of current theories of cognitive control, including task set inertia and production system postulates.
NASA Astrophysics Data System (ADS)
Császár, Attila G.; Furtenbacher, T.; Tennyson, Jonathan; Bernath, Peter F.; Brown, Linda R.; Campargue, Alain; Daumont, Ludovic; Gamache, Robert R.; Hodges, Joseph T.; Naumenko, Olga V.; Polyansky, Oleg L.; Rothman, Laurence S.; Vandaele, Ann Carine; Zobov, Nikolai F.
2014-06-01
The results of an IUPAC Task Group formed in 2004 on "A Database of Water Transitions from Experiment and Theory" (Project No. 2004-035-1-100) are presented. Energy levels and recommended labels involving exact and approximate quantum numbers for the main isotopologues of water in the gas phase, H216O, H218O, H217O, HD16O, HD18O, HD17O, D216O, D218O, and D217O, are determined from measured transition wavenumbers. The transition wavenumbers and energy levels are validated using the MARVEL (measured active rotational-vibrational energy levels) approach and first-principles nuclear motion computations. The extensive data handled (e.g., more than 200,000 transitions for H216O) include lines and levels that are required for analysis and synthesis of spectra, thermochemical applications, the construction of theoretical models, and the removal of spectral contamination by ubiquitous water lines. These datasets can also be used to assess where measurements are lacking for each isotopologue and to provide accurate frequencies for many yet-to-be-measured transitions. The lack of high-quality frequency calibration standards in the near infrared is identified as an issue that has hindered the determination of high-accuracy energy levels at higher frequencies. The generation of spectra using the MARVEL energy levels, combined with transition intensities computed from high-accuracy ab initio dipole moment surfaces, is also discussed.
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is rarely considered simultaneously. In this paper, we propose a new NR-VQA metric based on spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted from the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust across different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
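A minimal sketch of this kind of pipeline follows, assuming synthetic data and an arbitrary choice of block statistics (standard deviation and kurtosis); it illustrates the idea of pooling 3D-DCT coefficient statistics into an SVR, not the paper's exact feature set.

```python
# Illustrative pipeline: block-wise 3D-DCT statistics fed to a linear SVR.
# Videos, block size, features, and quality scores are all toy assumptions.
import numpy as np
from scipy.fft import dctn
from scipy.stats import kurtosis
from sklearn.svm import LinearSVR

def spatiotemporal_features(video, block=8):
    """Pool simple NVS statistics over non-overlapping 3D-DCT blocks."""
    t, h, w = video.shape
    feats = []
    for z in range(0, t - block + 1, block):
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                cube = video[z:z+block, y:y+block, x:x+block]
                coefs = dctn(cube, norm='ortho').ravel()[1:]  # drop DC term
                feats.append([coefs.std(), kurtosis(coefs)])
    return np.asarray(feats).mean(axis=0)  # average block statistics

rng = np.random.default_rng(0)
videos = rng.random((20, 16, 32, 32))   # 20 toy "videos" (frames x H x W)
scores = rng.random(20) * 100           # stand-in subjective quality scores
X = np.array([spatiotemporal_features(v) for v in videos])
model = LinearSVR().fit(X, scores)
print(model.predict(X[:3]))
```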
Borges, Díbio L; Vidal, Flávio B; Flores, Marta R P; Melani, Rodolfo F H; Guimarães, Marco A; Machado, Carlos E P
2018-03-01
Age assessment from images is of high interest in the forensic community because of the need for formal protocols to identify child pornography, missing children, and abuse cases in which visual evidence is often the most admissible. Recently, photoanthropometric methods have been found useful for age estimation, correlating facial proportions in image databases with samples of some age groups. Notwithstanding the advances, new facial features and further analysis are needed to improve accuracy and establish wider applicability. In this investigation, frontal images of 1000 individuals (500 females, 500 males), equally distributed in five age groups (6, 10, 14, 18, 22 years old), were used in a 10-fold cross-validated experiment for three age threshold classifications (<10, <14, <18 years old). A set of 40 novel features, based on the relation between landmark distances and the iris diameter, is proposed, and joint mutual information is used to select the most relevant and complementary features for the classification task. In a civil image identification database with diverse ancestry, receiver operating characteristic (ROC) curves were plotted to verify accuracy, and the resultant AUCs reached 0.971, 0.969, and 0.903 for the age classifications (<10, <14, <18 years old), respectively. These results add support to continuing research in age assessment from images using the metric approach. Still, larger samples are necessary to evaluate reliability under a wider range of conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
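As a hedged illustration of the metric approach, the sketch below builds one hypothetical ratio feature (a landmark distance expressed in iris diameters), trains a logistic classifier for a single age threshold, and scores it with ROC AUC; the data and landmark names are synthetic assumptions, not the study's measurements.

```python
# Toy photoanthropometric feature: landmark distance / iris diameter,
# evaluated with ROC AUC for one age threshold. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
iris_diameter = rng.normal(11.7, 0.5, n)   # near-constant reference length
interlandmark = rng.normal(60, 8, n)       # hypothetical facial distance
age = rng.uniform(5, 23, n)

X = (interlandmark / iris_diameter).reshape(-1, 1)  # feature in iris units
y = (age < 18).astype(int)                          # one of the thresholds

clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"AUC for the <18 threshold on toy data: {auc:.3f}")
```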
Multi-level deep supervised networks for retinal vessel segmentation.
Mo, Juan; Zhang, Lei
2017-12-01
Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. A deep supervised fully convolutional network is developed by leveraging multi-level hierarchical features of the deep networks. To improve the discriminative capability of features in lower layers of the deep network and guide the gradient back propagation to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, the transferred knowledge learned from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
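The deep-supervision idea can be sketched in a few lines of PyTorch. This is an assumption-laden toy, not the authors' network: a two-stage convolutional model with an auxiliary head on intermediate features, whose loss is added to the main segmentation loss so gradients reach early layers directly.

```python
# Minimal deep-supervision sketch: an auxiliary classifier on intermediate
# features contributes an extra loss term that guides early-layer gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.aux1 = nn.Conv2d(16, 1, 1)   # auxiliary vessel-probability head
        self.main = nn.Conv2d(32, 1, 1)   # main segmentation head

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return self.main(f2), self.aux1(f1)

model = DeeplySupervisedFCN()
img = torch.randn(2, 1, 64, 64)                        # toy retinal patches
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()     # toy vessel masks
main_out, aux_out = model(img)
loss = F.binary_cross_entropy_with_logits(main_out, mask) \
     + 0.5 * F.binary_cross_entropy_with_logits(aux_out, mask)
loss.backward()
print(float(loss))
```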
NASA: Model development for human factors interfacing
NASA Technical Reports Server (NTRS)
Smith, L. L.
1984-01-01
The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. To find an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
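A minimal sketch of the kind of write/read workload such a study measures, using the DataStax Python driver; the keyspace and table schema here are hypothetical, and a Cassandra node on localhost is assumed.

```python
# Hypothetical genomic-read table in Cassandra: one wide row per sample,
# clustered by read id, exercising the write/read path discussed above.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, read_id text, sequence text, quality text,
        PRIMARY KEY (sample_id, read_id))
""")
insert = session.prepare(
    "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) "
    "VALUES (?, ?, ?, ?)")
session.execute(insert, ("S1", "r0001", "ACGTACGT", "IIIIHHHH"))
for row in session.execute(
        "SELECT read_id, sequence FROM genomics.reads WHERE sample_id = 'S1'"):
    print(row.read_id, row.sequence)
```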
NASA Technical Reports Server (NTRS)
Levak, Daniel
1993-01-01
The Alternate Propulsion Subsystem Concepts contract had five tasks defined for the first year. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, Space Shuttle Main Engine (SSME) Upper Stage Use, and CER's for Liquid Propellant Rocket Engines. The detailed study results, with the data to support the conclusions from various analyses, are being reported as a series of five separate Final Task Reports. Consequently, this volume only reports the required programmatic information concerning Computer Aided Design Documentation, and New Technology Reports. A detailed Executive Summary, covering all the tasks, is also available as Volume 1.
NASA Astrophysics Data System (ADS)
Eyono Obono, S. D.; Basak, Sujit Kumar
2011-12-01
The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature for different domains such as distributed databases, distributed systems, transportation, packets radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It presents the description of a workload allocation scheme and its algorithm, for the allocation of an equitable number of tasks in academic departments where long leaves are necessary.
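A greedy, heap-based sketch of equitable allocation under leave constraints appears below; it illustrates the problem statement only, not the authors' algorithm, and all names are invented.

```python
# Each task goes to the available (not-on-leave) staff member who currently
# carries the lightest load, balancing task counts across the department.
import heapq

def allocate(tasks, staff, on_leave):
    """Assign tasks to staff not on leave, balancing the number of tasks."""
    available = [s for s in staff if s not in on_leave]
    heap = [(0, s) for s in available]   # (current load, staff member)
    heapq.heapify(heap)
    assignment = {s: [] for s in available}
    for task in tasks:
        load, member = heapq.heappop(heap)
        assignment[member].append(task)
        heapq.heappush(heap, (load + 1, member))
    return assignment

tasks = [f"course_{i}" for i in range(7)]
print(allocate(tasks, ["Ada", "Ben", "Cleo", "Dan"], on_leave={"Ben"}))
```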
Application of Chimera Navier-Stokes Code for High Speed Flows
NASA Technical Reports Server (NTRS)
Ajmani, Kumud
1997-01-01
The primary task for this year was performed in support of the "Trailblazer" project. The purpose of the task was to perform an extensive CFD study of the shock boundary-layer interaction between the engine diverters and the primary body surfaces of the Trailblazer vehicle. Information gathered from this study would be used to determine the effectiveness of the diverters in preventing the boundary layer coming off of the vehicle forebody from entering the main engines. The PEGSUS code was used to define the "holes" and "boundaries" for each grid. Two sets of CFD calculations were performed, and extensive post-processing of the results was carried out.
A Drug Discovery Partnership for Personalized Breast Cancer Therapy
2015-09-01
antagonists) and then virtually screen the USDA Phytochemical, Chinese Herbal Medicine, and FDA Marketed Drug Databases for new estrogens. Task 1...and antagonists that are in the registered pharmaceuticals and herbal medicine databases. The 29 analogs obtained have been characterized for...Marleesa Bastian, Technician at Xavier University (Sridhar lab), is now pursuing graduate study at Meharry Medical College School of Medicine, Tennessee
ICA model order selection of task co-activation networks.
Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R
2013-01-01
Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.
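As a toy illustration of the model-order choice being discussed (the study itself decomposed BrainMap coordinate data, not random matrices), scikit-learn's FastICA can be run at the two dimensionalities the authors recommend:

```python
# Vary ICA model order on toy data; higher orders fractionate the signal
# into more components, mirroring the dimensionality trade-off above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
data = rng.standard_normal((500, 120))   # observations x "voxels" (toy)

for n_components in (20, 70):
    ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
    sources = ica.fit_transform(data)
    print(n_components, sources.shape)   # (n_samples, n_components)
```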
NASA Astrophysics Data System (ADS)
Qin, Yulin; Sohn, Myeong-Ho; Anderson, John R.; Stenger, V. Andrew; Fissell, Kate; Goode, Adam; Carter, Cameron S.
2003-04-01
Based on adaptive control of thought-rational (ACT-R), a cognitive architecture for cognitive modeling, researchers have developed an information-processing model to predict the blood oxygenation level-dependent (BOLD) response of functional MRI in symbol manipulation tasks. As an extension of this research, the current event-related functional MRI study investigates the effect of relatively extensive practice on the activation patterns of related brain regions. The task involved performing transformations on equations in an artificial algebra system. This paper shows that the base-level activation learning in the ACT-R theory can predict the change of the BOLD response in practice in a left prefrontal region reflecting retrieval of information. In contrast, practice has relatively little effect on the form of BOLD response in the parietal region reflecting imagined transformations to the equation or the motor region reflecting manual programming.
Díaz-Orueta, Unai; Blanco-Campal, Alberto; Burke, Teresa
2018-05-01
Background: A detailed neuropsychological assessment plays an important role in the diagnostic process of Mild Cognitive Impairment (MCI). However, available brief cognitive screening tests for this clinical population are administered and interpreted based mainly, or exclusively, on total achievement scores. This score-based approach can lead to erroneous clinical interpretations unless we also pay attention to test-taking behavior or to the types of errors committed during test performance. The goal of the current study is to perform a rapid review of the literature on cognitive screening tools for dementia in primary and secondary care; this includes revisiting previously published systematic reviews of screening tools for dementia, an extensive database search, and analysis of individual references cited in selected studies. A subset of representative screening tools for dementia was identified that covers as many cognitive functions as possible. How these screening tools overlap with each other (in terms of the cognitive domains being measured and the method used to assess them) was examined, and a series of process-based approach (PBA) modifications for these overlapping features was proposed, so that the changes recommended in relation to one particular cognitive task could be extrapolated to other screening tools. It is expected that future versions of cognitive screening tests, modified using a PBA, will highlight the benefits of attending to qualitative features of test performance when trying to identify subtle features suggestive of MCI and/or dementia.
Team Composition Issues for Future Space Exploration: A Review and Directions for Future Research.
Bell, Suzanne T; Brown, Shanique G; Abben, Daniel R; Outland, Neal B
2015-06-01
Future space exploration, such as a mission to Mars, will require space crews to live and work in extreme environments unlike those of previous space missions. Extreme conditions such as prolonged confinement, isolation, and expected communication time delays will require that crews have a higher level of interpersonal compatibility and be able to work autonomously, adapting to unforeseen challenges in order to ensure mission success. Team composition, or the configuration of member attributes, is an important consideration for maximizing crewmember well-being and team performance. We conducted an extensive search for articles about team composition in long-distance space exploration (LDSE)-analogue environments, including a search of databases and specific relevant journals, and by contacting authors who publish in the area. We review the team composition research conducted in analogue environments in terms of two paths through which team composition is likely to be related to LDSE mission success, namely by 1) affecting social integration, and 2) shaping the team processes and emergent states related to team task completion. Suggestions for future research are summarized as: 1) the need to identify ways to foster unit-level social integration within diverse crews; 2) the missed opportunity to use team composition variables as a way to improve team processes, emergent states, and task completion; and 3) the importance of disentangling the effects of specific team composition variables to determine the traits (e.g., personality, values) that are associated with particular risks (e.g., subgrouping) to performance.
A Data Mining Approach to Identify Sexuality Patterns in a Brazilian University Population.
Waleska Simões, Priscyla; Cesconetto, Samuel; Toniazzo de Abreu, Larissa Letieli; Côrtes de Mattos Garcia, Merisandra; Cassettari Junior, José Márcio; Comunello, Eros; Bisognin Ceretta, Luciane; Aparecida Manenti, Sandra
2015-01-01
This paper presents the profile and experience of sexuality generated from a data mining classification task. We used a database on sexuality and gender violence compiled from a university population in southern Brazil. The data mining task identified two relationships between the variables, which enabled the distinction of subgroups that better detail the profile and experience of sexuality. The identification of the relationships between the variables defines behavioral models and risk factors that will help define the algorithms being implemented in the data mining classification task.
[Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].
Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos
2013-01-01
Sensory stimuli, or sensory cues, are being used as a therapeutic tool for improving gait disorders in Parkinson's disease patients, but most studies have focused on auditory stimuli. The aim of this study was to conduct a systematic review of the use of visual cues for gait disorders, dual tasks during gait, freezing, and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We conducted a systematic review of the main databases, including the Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and the Physiotherapy Evidence Database, from 2005 to 2012, following the recommendations of the Consolidated Standards of Reporting Trials and evaluating the quality of the included papers with the Downs & Black Quality Index. Twenty-one articles were finally included in this systematic review (with a total of 892 participants), with variable methodological quality, achieving an average of 17.27 points on the Downs & Black Quality Index (range: 11-21). Visual cues produce improvements in temporal-spatial gait parameters and turning execution, and reduce the occurrence of freezing and falls in Parkinson's disease patients. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimulus for each stage of the disease. Copyright © 2012 SEGG. Published by Elsevier Espana. All rights reserved.
NASA Technical Reports Server (NTRS)
Hu, Chaumin
2007-01-01
IPG Execution Service is a framework that reliably executes complex jobs on a computational grid, and is part of the IPG service architecture designed to support location-independent computing. The new grid service enables users to describe the platform on which they need a job to run, which allows the service to locate the desired platform, configure it for the required application, and execute the job. After a job is submitted, users can monitor it through periodic notifications, or through queries. Each job consists of a set of tasks that performs actions such as executing applications and managing data. Each task is executed based on a starting condition that is an expression of the states of other tasks. This formulation allows tasks to be executed in parallel, and also allows a user to specify tasks to execute when other tasks succeed, fail, or are canceled. The two core components of the Execution Service are the Task Database, which stores tasks that have been submitted for execution, and the Task Manager, which executes tasks in the proper order, based on the user-specified starting conditions, and avoids overloading local and remote resources while executing tasks.
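A toy sketch of the Task Database / Task Manager split described above, with invented task names and each starting condition represented as a predicate over the other tasks' states:

```python
# Task Manager loop: run any pending task whose starting condition, an
# expression over the states of other tasks, is satisfied. Illustrative only.
task_db = {
    "fetch":   {"state": "pending", "start_when": lambda s: True},
    "compute": {"state": "pending",
                "start_when": lambda s: s["fetch"] == "succeeded"},
    "cleanup": {"state": "pending",
                "start_when": lambda s: s["fetch"] in ("succeeded", "failed")},
}

def run(name):
    print("running", name)
    return "succeeded"

progress = True
while progress:
    progress = False
    states = {n: t["state"] for n, t in task_db.items()}
    for name, task in task_db.items():
        if task["state"] == "pending" and task["start_when"](states):
            task["state"] = run(name)
            progress = True
print({n: t["state"] for n, t in task_db.items()})
```

Note that "compute" and "cleanup" both become runnable once "fetch" succeeds, so a real manager could dispatch them in parallel, as the abstract describes.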
Extensive video-game experience alters cortical networks for complex visuomotor transformations.
Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E
2010-10-01
Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.
ERIC Educational Resources Information Center
Sheets, Rosa Hernandez
This paper reviews patterns in the literature on minority teachers and teacher preparation. The study involved an extensive literature search using the following database selections: Books in Print A-Z; ERIC Database 1966-2000; Education Abstracts FTX 6/83-12/99; PsycINFO 1984-2000/02; Sociological Abstracts 1963-1999/12; and Social Sciences Abst…
NASA Astrophysics Data System (ADS)
Shao, Weber; Kupelian, Patrick A.; Wang, Jason; Low, Daniel A.; Ruan, Dan
2014-03-01
We devise a paradigm for representing the DICOM-RT structure sets in a database management system, in such a way that secondary calculations of geometric information can be performed quickly from the existing contour definitions. The implementation of this paradigm is achieved using the PostgreSQL database system and the PostGIS extension, a geographic information system commonly used for encoding geographical map data. The proposed paradigm eliminates the overhead of retrieving large data records from the database, as well as the need to implement various numerical and data parsing routines, when additional information related to the geometry of the anatomy is desired.
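A minimal sketch of the idea, assuming psycopg2, a PostGIS-enabled PostgreSQL server, and a hypothetical contours table: an axial contour is stored as a polygon so the geometric calculation (here, a cross-sectional area) runs inside the database rather than in client code.

```python
# Store a contour as a PostGIS polygon; let the database compute geometry.
# Connection string, table, and contour coordinates are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=rtplans user=postgres")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS contours (
        roi_name text, slice_z real, outline geometry(Polygon))
""")
cur.execute(
    "INSERT INTO contours VALUES (%s, %s, ST_GeomFromText(%s))",
    ("PTV", 12.5, "POLYGON((0 0, 40 0, 40 30, 0 30, 0 0))"))
# Secondary geometric calculation performed server-side by PostGIS:
cur.execute("SELECT roi_name, ST_Area(outline) FROM contours")
print(cur.fetchall())
conn.commit()
```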
Advanced Software Development Workstation Project
NASA Technical Reports Server (NTRS)
Lee, Daniel
1989-01-01
The Advanced Software Development Workstation Project, funded by Johnson Space Center, is investigating knowledge-based techniques for software reuse in NASA software development projects. Two prototypes have been demonstrated and a third is now in development. The approach is to build a foundation that provides passive reuse support, add a layer that uses domain-independent programming knowledge, add a layer that supports the acquisition of domain-specific programming knowledge to provide active support, and enhance maintainability and modifiability through an object-oriented approach. The development of new application software would use specification-by-reformulation, based on a cognitive theory of retrieval from very long-term memory in humans, and using an Ada code library and an object base. Current tasks include enhancements to the knowledge representation of Ada packages and abstract data types, extensions to support Ada package instantiation knowledge acquisition, integration with Ada compilers and relational databases, enhancements to the graphical user interface, and demonstration of the system with a NASA contractor-developed trajectory simulation package. Future work will focus on investigating issues involving scale-up and integration.
Visual Search in ASD: Instructed versus Spontaneous Local and Global Processing
ERIC Educational Resources Information Center
Van der Hallen, Ruth; Evers, Kris; Boets, Bart; Steyaert, Jean; Noens, Ilse; Wagemans, Johan
2016-01-01
Visual search has been used extensively to investigate differences in mid-level visual processing between individuals with ASD and TD individuals. The current study employed two visual search paradigms with Gaborized stimuli to assess the impact of task distractors (Experiment 1) and task instruction (Experiment 2) on local-global visual…
A Practical Method for Collecting Social Media Campaign Metrics
ERIC Educational Resources Information Center
Gharis, Laurie W.; Hightower, Mary F.
2017-01-01
Today's Extension professionals are tasked with more work and fewer resources. Integrating social media campaigns into outreach efforts can be an efficient way to meet work demands. If resources go toward social media, a practical method for collecting metrics is needed. Collecting metrics adds one more task to the workloads of Extension…
The Effects of Beacons, Comments, and Tasks on Program Comprehension Process in Software Maintenance
ERIC Educational Resources Information Center
Fan, Quyin
2010-01-01
Program comprehension is the most important and frequent process in software maintenance. Extensive research has found that individual characteristics of programmers, differences of computer programs, and differences of task-driven motivations are the major factors that affect the program comprehension results. There is no study specifically…
Neurophysiology and Neuroanatomy of Reflexive and Voluntary Saccades in Non-Human Primates
ERIC Educational Resources Information Center
Johnston, Kevin; Everling, Stefan
2008-01-01
A multitude of cognitive functions can easily be tested by a number of relatively simple saccadic eye movement tasks. This approach has been employed extensively with patient populations to investigate the functional deficits associated with psychiatric disorders. Neurophysiological studies in non-human primates performing the same tasks have…
V-TECS Guide for Auto Mechanics: Suspension Systems, Brakes and Steering.
ERIC Educational Resources Information Center
Moore, Charles G.; And Others
The materials in this document are an extension of a catalog of occupational duties, tasks, and performance objectives relevant to maintaining automotive suspension systems, brakes, and steering mechanisms. This document provides the following for each occupational task within each duty: (1) a standard of performance; (2) the conditions under…
Alcohol Alert. No. 58. Changing the Culture of Campus Drinking
ERIC Educational Resources Information Center
Kington, Raynard
2002-01-01
Drinking on college campuses is more pervasive and destructive than many people realize. The extent of the problem was recently highlighted by an extensive 3-year investigation by the Task Force on College Drinking, commissioned by the National Institute on Alcohol Abuse and Alcoholism (NIAAA). The Task Force reports that alcohol consumption is…
Mura, Marco; Castagna, Alessandro; Fontani, Vania; Rinaldi, Salvatore
2012-01-01
Purpose: This study assessed changes in functional dysmetria (FD) and in brain activation observable by functional magnetic resonance imaging (fMRI) during a leg flexion-extension motor task following brain stimulation with a single radioelectric asymmetric conveyer (REAC) pulse, according to the precisely defined neuropostural optimization (NPO) protocol. Population and methods: Ten healthy volunteers were assessed using fMRI conducted during a simple motor task before and immediately after delivery of a single REAC-NPO pulse. The motor task consisted of a flexion-extension movement of the legs with the knees bent. FD signs and brain activation patterns were compared before and after REAC-NPO. Results: A single 250-millisecond REAC-NPO treatment alleviated FD, as evidenced by patellar asymmetry during a sit-up motion, and modulated activity patterns in the brain, particularly in the cerebellum, during the performance of the motor task. Conclusion: Activity in brain areas involved in motor control and coordination, including the cerebellum, is altered by administration of a REAC-NPO treatment, and this effect is accompanied by an alleviation of FD. PMID:22536071
Portus, Marc R; Lloyd, David G; Elliott, Bruce C; Trama, Neil L
2011-05-01
The measurement of lumbar spine motion is an important step for injury prevention research during complex and high-impact activities, such as cricket fast bowling or javelin throwing. This study examined the performance of two designs of a lumbar rig, previously used in gait research, during a controlled high-impact bench jump task. An 8-camera retro-reflective motion analysis system was used to track the lumbar rig. Eleven athletes completed the task wearing the two different lumbar rig designs. Flexion-extension data were analyzed using a fast Fourier transformation to assess the signal power of these data during the impact phase of the jump. The lumbar rig featuring an increased and pliable base of support recorded moderately less signal power through the 0-60 Hz spectrum, with significantly lower magnitudes in the 0-5 Hz (p = .039), 5-10 Hz (p = .005) and 10-20 Hz (p = .006) frequency bins. A lumbar rig of this design would seem likely to provide less noisy lumbar motion data during high-impact tasks.
Efficient frequent pattern mining algorithm based on node sets in cloud computing environment
NASA Astrophysics Data System (ADS)
Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.
2017-11-01
The ultimate goal of data mining is to discover the hidden information that is useful for decision making in the large databases collected by an organization. Data mining involves many tasks that are performed during the process, and mining frequent itemsets is one of the most important tasks for transactional databases. These databases hold data at a very large scale, and mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is said to be efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, in this thesis we propose a system that mines frequent itemsets in a way that is optimized in terms of memory and time, using cloud computing to parallelize the process and to provide the application as a service. The complete framework uses a proven, efficient algorithm called FIN, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conducted experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment, on a real data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
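For contrast with FIN's Nodeset and POC-tree machinery, the deliberately naive counter below shows what "frequent itemset" means on a toy transaction database; it is a brute-force stand-in for illustration, not an implementation of FIN.

```python
# Naive frequent-itemset counting over a toy transaction database:
# enumerate size-1 and size-2 itemsets and keep those meeting min_support.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]
min_support = 3  # absolute support threshold

counts = Counter()
for t in transactions:
    for k in (1, 2):
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1

frequent = {i: c for i, c in counts.items() if c >= min_support}
print(frequent)
```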
Databases toward Disseminated Use - Nikkei News Telecom -
NASA Astrophysics Data System (ADS)
Kasiwagi, Akira
The need for “searchers” - adept hands in the art of information retrieval - is increasing nowadays. Searchers have become necessary as a result of the upbeat online database market. The number of database users is rising steeply, and there is an urgent need to develop potential users of general information, such as newspaper articles. Simple commands, easy operation, and low prices hold the key to the general popularization of databases, and the issue lies in how the industry will go about achieving this task. Nihon Keizai Shimbun has been exploring a wide range of possibilities with Nikkei News Telecom. Although only two years have passed since its start, the results of Nikkei’s efforts are summarized below.
Narayanan, Shrikanth; Toutios, Asterios; Ramanarayanan, Vikram; Lammert, Adam; Kim, Jangwon; Lee, Sungbok; Nayak, Krishna; Kim, Yoon-Chul; Zhu, Yinghua; Goldstein, Louis; Byrd, Dani; Bresch, Erik; Ghosh, Prasanta; Katsamanis, Athanasios; Proctor, Michael
2014-01-01
USC-TIMIT is an extensive database of multimodal speech production data, developed to complement existing resources available to the speech research community and with the intention of being continuously refined and augmented. The database currently includes real-time magnetic resonance imaging data from five male and five female speakers of American English. Electromagnetic articulography data have also been collected from four of these speakers. The two modalities were recorded in two independent sessions while the subjects produced the same 460-sentence corpus used previously in the MOCHA-TIMIT database. In both cases the audio signal was recorded and synchronized with the articulatory data. The database and companion software are freely available to the research community. PMID:25190403
Database usage and performance for the Fermilab Run II experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonham, D.; Box, D.; Gallas, E.
2004-12-01
The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.
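One of the Oracle features mentioned, range partitioning, can be sketched as follows; the schema, run-number ranges, and connection string are hypothetical, and cx_Oracle is assumed as the client library.

```python
# Hypothetical range-partitioned calibration table: rows for each run range
# live in their own partition, which helps prune scans for run-keyed queries.
import cx_Oracle

conn = cx_Oracle.connect("user/password@dbhost/orcl")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE calib_values (
        run_number NUMBER, channel NUMBER, value FLOAT)
    PARTITION BY RANGE (run_number) (
        PARTITION runs_1 VALUES LESS THAN (100000),
        PARTITION runs_2 VALUES LESS THAN (200000),
        PARTITION runs_max VALUES LESS THAN (MAXVALUE))
""")
conn.commit()
```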
Subject searching of monographs online in the medical literature.
Brahmi, F A
1988-01-01
Searching by subject for monographic information online in the medical literature is a challenging task. The NLM database of choice is CATLINE. Other NLM databases of interest are BIOETHICSLINE, CANCERLIT, HEALTH, POPLINE, and TOXLINE. Ten BRS databases are also discussed. Of these, Books in Print, Bookinfo, and OCLC are explored further. The databases are compared as to the total number of records and the number and percentage of monographs. Three topics were searched on CROSS to compare hits on BBIP, BOOK, and OCLC. The same searches were run on CATLINE. The parameters of time coverage and language were equalized, and the resulting citations were compared and analyzed for duplication and uniqueness. With the input of CATLINE tapes into OCLC, OCLC has become the database of choice for searching by subject for medical monographs.
[Genetic mutation databases: stakes and perspectives for orphan genetic diseases].
Humbertclaude, V; Tuffery-Giraud, S; Bareil, C; Thèze, C; Paulet, D; Desmet, F-O; Hamroun, D; Baux, D; Girardet, A; Collod-Béroud, G; Khau Van Kien, P; Roux, A-F; des Georges, M; Béroud, C; Claustres, M
2010-10-01
New technologies, which constantly become available for mutation detection and gene analysis, have contributed to an exponential rate of discovery of disease genes and variation in the human genome. The task of collecting and documenting this enormous amount of data in genetic databases represents a major challenge for the future of biological and medical science. The Locus Specific Databases (LSDBs) are so far the most efficient mutation databases. This review presents the main types of databases available for the analysis of mutations responsible for genetic disorders, as well as open perspectives for new therapeutic research or challenges for future medicine. Accurate and exhaustive collection of variations in human genomes will be crucial for research and personalized delivery of healthcare. Copyright © 2009 Elsevier Masson SAS. All rights reserved.
A UML Profile for Developing Databases that Conform to the Third Manifesto
NASA Astrophysics Data System (ADS)
Eessaar, Erki
The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM, and developers need tools that support the development of databases using these systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles of SQL database design. After that, we extended and improved the profile. We implemented the profile by using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.
Stöckel, Tino; Wang, Jinsung
2011-11-01
Interlimb transfer of motor learning, indicating an improvement in performance with one limb following training with the other, often occurs asymmetrically (i.e., from the non-dominant to the dominant limb or vice versa, but not both). In the present study, we examined whether interlimb transfer of the same motor task could occur asymmetrically and in opposite directions (i.e., from right to left leg vs. left to right leg) depending on individuals' conception of the task. Two experimental conditions were tested: in a dynamic control condition, learning was facilitated by providing subjects with a type of information that forced them to focus on dynamic features of the task (force impulse); in a spatial control condition, it was facilitated with information that forced them to focus on visuomotor features of the same task (distance). Both conditions employed the same leg extension task. In addition, a fully-crossed transfer paradigm was used in which one group of subjects initially practiced with the right leg and was tested with the left leg for a transfer test, while the other group used the two legs in the opposite order. The results showed that the direction of interlimb transfer varied depending on the condition, such that the right and the left leg benefited from initial training with the opposite leg only in the spatial and the dynamic condition, respectively. Our finding suggests that manipulating the conception of a leg extension task has a substantial influence on the pattern of interlimb transfer, in such a way that the direction of transfer can even be opposite depending on whether the task is conceived as a dynamic or spatial control task. Copyright © 2011 Elsevier Inc. All rights reserved.
2005-01-01
C. Hughes, Spacecraft Attitude Dynamics, New York, NY: Wiley, 1994. [8] H. K. Khalil, “Adaptive Output Feedback Control of Nonlinear Systems...Closed-Loop Manipulator Control Using Quaternion Feedback,” IEEE Trans. Robotics and Automation, Vol. 4, No. 4, pp. 434-440, (1988). [23] E...full-state feedback quaternion-based controller developed in [5] and focuses on the design of a general sub-task controller. This sub-task controller
2014-10-02
MPD. This manufacturer documentation contains maintenance tasks with specification of intervals and required man-hours that are to be carried out...failures, without consideration of false alarms and missed failures (see also section 4.1.3). The task redundancy rate is the percentage of preventive...Prognostics and Health Management (PHM), return on investment (ROI), remaining useful life (RUL), task code group (TCG), Service Bulletin (SB), Extensible Markup Language (XML)
Berger, Marc L; Mamdani, Muhammad; Atkins, David; Johnson, Michael L
2009-01-01
Health insurers, physicians, and patients worldwide need information on the comparative effectiveness and safety of prescription drugs in routine care. Nonrandomized studies of treatment effects using secondary databases may supplement the evidence base from randomized clinical trials and prospective observational studies. Recognizing the challenges of conducting valid retrospective epidemiologic and health services research studies, a Task Force was formed to develop a guidance document on state-of-the-art approaches to framing research questions and reporting findings for these studies. The Task Force was commissioned and a Chair was selected by the International Society for Pharmacoeconomics and Outcomes Research Board of Directors in October 2007. This Report, the first of three in this issue of the journal, addresses framing the research question and reporting and interpreting findings. The Task Force Report proposes four primary characteristics for defining the research question: relevance, specificity, novelty, and feasibility. Recommendations include: a priori specification of the research question; transparency of prespecified analytical plans, with justifications for any subsequent changes and reporting of the results of prespecified plans as well as results from significant modifications; structured abstracts that report findings with scientific neutrality; and reasoned interpretations of findings to help inform policy decisions. Comparative effectiveness research in the form of nonrandomized studies using secondary databases can be designed with rigorous elements and conducted with sophisticated statistical methods to improve causal inference of treatment effects. Standardized reporting and careful interpretation of results can aid policy and decision making.
ERIC Educational Resources Information Center
Lipp, Ellen
2017-01-01
This pilot study examined multilingual university students' willingness to engage in voluntary extensive reading (ER) of books after they received training. The research questions were whether training appeared to promote self-efficacy, motivation for the task, use of metacognitive strategies, and independent reading. University freshmen in an ESL…
Self-Determination Theory and Day and Bamford's Principles for Extensive Reading
ERIC Educational Resources Information Center
Türkdogan, Gönül; Sivell, John
2016-01-01
Day and Bamford's ten principles for promoting second-language (L2) extensive reading (ER) have been commended for their highly applicable practicality. However, for various reasons, assuring successful ER instruction can remain a challenging task. This surprising contrast may in part be clarified by examining the relationship between Day and…
Making Evaluation Work for You: Ideas for Deriving Multiple Benefits from Evaluation
ERIC Educational Resources Information Center
Jayaratne, K. S. U.
2016-01-01
Increased demand for accountability has forced Extension educators to evaluate their programs and document program impacts. Due to this situation, some Extension educators may view evaluation simply as the task, imposed on them by administrators, of collecting outcome and impact data for accountability. They do not perceive evaluation as a useful…
Mokhtarinia, Hamid Reza; Sanjari, Mohammad Ali; Chehrehrazi, Mahshid; Kahrizi, Sedigheh; Parnianpour, Mohamad
2016-02-01
Multiple joint interactions are critical for producing stable coordinated movements and can be influenced by low back pain and task conditions. Inter-segmental coordination pattern and variability were assessed in subjects with and without chronic nonspecific low back pain (CNSLBP). Kinematic data were collected from 22 CNSLBP and 22 healthy volunteers during repeated trunk flexion-extension under various conditions of symmetry, velocity, and loading, each at two levels. Sagittal plane angular data were time normalized and used to calculate the continuous relative phase for each data point. Mean absolute relative phase (MARP) and deviation phase (DP) were derived to quantify lumbar-pelvis and pelvis-thigh coordination patterns and variability. Statistical analysis revealed a more in-phase coordination pattern in CNSLBP (p=0.005). There was less adaptation in the DP for the CNSLBP group, as shown by interactions of Group by Load (p=0.008) and Group by Symmetry by Velocity (p=0.03) for the DP of pelvis-thigh and lumbar-pelvis couplings, respectively. Asymmetric (p<0.001) and loaded (p=0.04) conditions caused less in-phase coordination. Coordination variability was higher during asymmetric and low-velocity conditions (p<0.001). In conclusion, coordination pattern and variability can be influenced by trunk flexion-extension conditions. CNSLBP subjects demonstrated less adaptability of movement pattern to the demands of the flexion-extension task. Copyright © 2015 Elsevier B.V. All rights reserved.
Pathology data integration with eXtensible Markup Language.
Berman, Jules J
2005-02-01
It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
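A small sketch of the idea, using Python's standard library and an invented tag vocabulary (not any standard pathology markup): once data elements are annotated, the report answers database-style queries with ordinary XML tools while the narrative text survives as mixed content.

```python
# Annotate pathology data elements in XML, then query them like a database.
# The tag names and codes below are illustrative assumptions only.
import xml.etree.ElementTree as ET

report = ET.fromstring("""
<pathology_report>
  <patient id="P-0042"/>
  <specimen site="colon" procedure="biopsy"/>
  <diagnosis icd_o="8140/3">Adenocarcinoma,
    <grade>moderately differentiated</grade></diagnosis>
</pathology_report>
""")

# The same document now supports structured retrieval with standard tools.
for dx in report.iter("diagnosis"):
    print(dx.get("icd_o"), dx.findtext("grade"))
```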
JNDMS Task Authorization 2 Report
2013-10-01
uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take...Information Technology Infrastructure (ITI), Java 2 Enterprise Edition (J2EE), Java Archive (JAR; an archive file format defined by Java standards), Java Database Connectivity (JDBC), JNDMS Data Warehouse (JDW), Joint Network Defence and Management System (JNDMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsh, Amber; Harsch, Tim; Pitt, Julie
2007-08-31
The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update, and delete data from the underlying Oracle database, download data from NCBI's Genbank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface and the various tools available on the project web site (image.llnl.gov) that provide a search interface to the database.
Database Management in Design Optimization.
1983-10-30
processing program(s) engaged in the task of preparing input data for the (finite-element) analysis and optimization phases; primary storage: the main... and extraction of data from the database for further processing. It can be divided into two phases: a) the process of selection and identification of... user wishes to stop the reading or the writing process. The meaning of END depends on the method specified for retrieving data: a) row-wise - then
Mass Storage Performance Information System
NASA Technical Reports Server (NTRS)
Scheuermann, Peter
2000-01-01
The purpose of this task is to develop a data warehouse to enable system administrators and their managers to gather information by querying the data logs of the MDSDS. Currently detailed logs capture the activity of the MDSDS internal to the different systems. The elements to be included in the data warehouse are requirements analysis, data cleansing, database design, database population, hardware/software acquisition, data transformation, query and report generation, and data mining.
ELISA-BASE: An Integrated Bioinformatics Tool for Analyzing and Tracking ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Collett, James L.; Seurynck-Servoss, Shannon L.
ELISA-BASE is an open-source database for capturing, organizing and analyzing protein enzyme-linked immunosorbent assay (ELISA) microarray data. ELISA-BASE is an extension of the BioArray Software Environment (BASE) database system, which was developed for DNA microarrays. In order to make BASE suitable for protein microarray experiments, we developed several plugins for importing and analyzing quantitative ELISA microarray data. Most notably, our Protein Microarray Analysis Tool (ProMAT) for processing quantitative ELISA data is now available as a plugin to the database.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly used relational database model has become a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
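For orientation, a minimal sketch of writing one processed read into Cassandra with the DataStax Python driver follows; the keyspace, table, and data are hypothetical and assume a local single-node cluster, not the paper's actual schema or benchmark setup.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])      # local single-node cluster assumed
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, read_id text, sequence text,
        PRIMARY KEY (sample_id, read_id)
    )
""")
# one write; wide partitions keyed by sample make per-sample retrieval cheap
session.execute(
    "INSERT INTO genomics.reads (sample_id, read_id, sequence) VALUES (%s, %s, %s)",
    ("S1", "r0001", "ACGTACGT"),
)
```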
A Human Factors Analysis of EVA Time Requirements
NASA Technical Reports Server (NTRS)
Pate, Dennis W.
1997-01-01
Human Factors Engineering (HFE) is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. During the summer of 1995, a human factors motion and time study was initiated with the goals of developing a database of EVA task times and developing a method of utilizing the database to predict how long an EVA should take. Initial development relied on the EVA activities performed during the STS-61 (Hubble) mission. The first step of the study was to become familiar with EVAs, the previous task-time studies, and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task-time modifiers was developed. Data were collected from videotaped footage of two entire STS-61 EVA missions and portions of several others, each with two EVA astronauts. Feedback from the analysis of the data was used to further refine the primitives and modifiers used. The project was continued during the summer of 1996, during which data on human errors were also collected and analyzed. Additional data from the STS-71 mission were also collected. Analysis of variance techniques for categorical data were used to determine which factors may affect the primitive times and how much of an effect they have. Probability distributions for the various tasks were also generated. Further analysis of the modifiers and interactions is planned.
BioFrameNet: A FrameNet Extension to the Domain of Molecular Biology
ERIC Educational Resources Information Center
Dolbey, Andrew Eric
2009-01-01
In this study I introduce BioFrameNet, an extension of the Berkeley FrameNet lexical database to the domain of molecular biology. I examine the syntactic and semantic combinatorial possibilities exhibited in the lexical items used in this domain in order to get a better understanding of the grammatical properties of the language used in scientific…
Physical Abilities and Military Task Performance: A Replication and Extension
2009-06-09
exertion lasted 3 s. Trapezius lift: the subject stood with feet at shoulder width, grasping handles that were 38.5 cm apart to mimic the grip used in... maintained even with encouragement. Trapezius lift: the subject stood erect with his feet shoulder-width apart. He held a 20.9-kg load with his arms... static trunk extension; dynamic and static arm flexion; bench press, trapezius lift, leg extension; dynamic and static trunk flexion; right and left
On scheduling task systems with variable service times
NASA Astrophysics Data System (ADS)
Maset, Richard G.; Banawan, Sayed A.
1993-08-01
Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
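A minimal sketch of the deterministic baseline, critical path list scheduling, is given below; the task graph, service times, and two-processor setup are illustrative assumptions, and the adaptive and static variants evaluated in the paper are not reproduced here.

```python
import heapq
from functools import lru_cache

tasks = {"A": 3, "B": 2, "C": 4, "D": 1}            # deterministic service times
succ = {"A": ["C"], "B": ["C", "D"], "C": [], "D": []}

@lru_cache(maxsize=None)
def cp_length(t):
    """Critical-path length: longest total service time from t to a sink."""
    return tasks[t] + max((cp_length(s) for s in succ[t]), default=0)

preds = {t: sum(t in s for s in succ.values()) for t in tasks}
free_at = [0, 0]                                     # two processors
finish = {}
ready = [(-cp_length(t), t) for t in tasks if preds[t] == 0]
heapq.heapify(ready)
while ready:
    _, t = heapq.heappop(ready)                      # highest CP priority first
    p = min(range(2), key=lambda i: free_at[i])      # earliest-free processor
    start = max(free_at[p],
                max((finish[q] for q in tasks if t in succ[q]), default=0))
    finish[t] = start + tasks[t]
    free_at[p] = finish[t]
    for s in succ[t]:                                # release ready successors
        preds[s] -= 1
        if preds[s] == 0:
            heapq.heappush(ready, (-cp_length(s), s))
print(finish)                                        # completion time per task
```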
Let your fingers do the walking: The project's most invaluable tool
NASA Technical Reports Server (NTRS)
Zirk, Deborah A.
1993-01-01
The barrage of information pertaining to the software being developed for a project can be overwhelming. Current status information, as well as the statistics and history of software releases, should be 'at the fingertips' of project management and key technical personnel. This paper discusses the development, configuration, capabilities, and operation of a relational database, the System Engineering Database (SEDB), which was designed to assist management in monitoring the tasks performed by the Network Control Center (NCC) Project. This database has proven to be an invaluable project tool and is utilized daily to support all project personnel.
Simulation of Constrained Musculoskeletal Systems in Task Space.
Stanev, Dimitar; Moustakas, Konstantinos
2018-02-01
This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.
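For orientation, the standard unconstrained operational-space relations that task-space formulations build on are sketched below in the usual Khatib-style notation; this is textbook background, not the authors' constrained derivation.

```latex
% Joint-space dynamics: M(q)\ddot{q} + f(q,\dot{q}) = \tau, task Jacobian J.
\Lambda = \left(J M^{-1} J^{T}\right)^{-1}
    \qquad \text{(task-space inertia)}
\bar{J} = M^{-1} J^{T} \Lambda
    \qquad \text{(dynamically consistent generalized inverse)}
\tau = J^{T} F + \left(I - J^{T} \bar{J}^{T}\right) \tau_{0}
    \qquad \text{(task force plus null-space posture torques)}
```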
MimoSA: a system for minimotif annotation
2010-01-01
Background Minimotifs are short peptide sequences within one protein, which are recognized by other proteins or molecules. While there are now several minimotif databases, they are incomplete. There are reports of many minimotifs in the primary literature, which have yet to be annotated, while entirely novel minimotifs continue to be published on a weekly basis. Our recently proposed function and sequence syntax for minimotifs enables us to build a general tool that will facilitate structured annotation and management of minimotif data from the biomedical literature. Results We have built the MimoSA application for minimotif annotation. The application supports management of the Minimotif Miner database, literature tracking, and annotation of new minimotifs. MimoSA enables the visualization, organization, selection and editing functions of minimotifs and their attributes in the MnM database. For the literature components, MimoSA provides paper status tracking and scoring of papers for annotation through a freely available machine learning approach, which is based on word correlation. The paper scoring algorithm is also available as a separate program, TextMine. Form-driven annotation of minimotif attributes enables entry of new minimotifs into the MnM database. Several supporting features increase the efficiency of annotation. The layered architecture of MimoSA allows for extensibility by separating the functions of paper scoring, minimotif visualization, and database management. MimoSA is readily adaptable to other annotation efforts that manually curate literature into a MySQL database. Conclusions MimoSA is an extensible application that facilitates minimotif annotation and integrates with the Minimotif Miner database. We have built MimoSA as an application that integrates dynamic abstract scoring with a high performance relational model of minimotif syntax. MimoSA's TextMine, an efficient paper-scoring algorithm, can be used to dynamically rank papers with respect to context. PMID:20565705
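As a toy illustration of correlation-based paper scoring (the general idea behind TextMine, not its actual algorithm), the following sketch ranks candidate abstracts by cosine similarity of word counts against already-annotated text; all data here are invented.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

annotated = Counter("minimotif binding domain peptide motif".split())
candidates = {
    "paper1": Counter("short peptide motif binds sh3 domain".split()),
    "paper2": Counter("population genetics of island birds".split()),
}
# rank papers by similarity to already-annotated vocabulary
for pid, words in sorted(candidates.items(),
                         key=lambda kv: cosine(annotated, kv[1]), reverse=True):
    print(pid, round(cosine(annotated, words), 3))
```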
Stereotype threat as a trigger of mind-wandering in older adults.
Jordano, Megan L; Touron, Dayna R
2017-05-01
Older adults (OAs) report less overall mind-wandering than younger adults (YAs) but more task-related interference (TRI; mind-wandering about the task). The current study examined TRI while manipulating older adults' performance-related concerns. We compared groups for which memory-related stereotype threat (ST) was activated or relieved to a control group. Participants completed an operation span task containing mind-wandering probes. ST-activated OAs reported more TRI than ST-relieved OAs and had worse performance on the operation span task. This study illustrates that environmental context triggers current concerns and determines, in part, the frequency and content of mind-wandering. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Task 1.6 -- Mixed waste. Topical report, April 1994--September 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rindt, J.R.; Jones, F.A.
1996-01-01
For fifty years, the United States was involved in a nuclear arms race of immense proportions. During the majority of this period, the push was always to design new weapons, produce more weapons, and increase the size of the arsenal, maintaining an advantage over the opposition in order to protect US interests. Now that the Cold War is over, the US is faced with the imposing tasks of dismantling, cleaning up, and remediating the wide variety of problems created by this arms race. The ability to understand the problems encountered when dealing with radioactive waste, both from a scientific standpoint and from a legislative standpoint, requires knowledge of treatment and disposal subject areas. This required the accumulation of applicable information. A literature database was developed; site visits were made; and contact relationships were established. Informational databases from government agencies involved in environmental remediation were ordered or purchased, and previously established private sector relationships were used to develop an information base. An appendix contains 482 bibliographic citations that have been integrated into a Microsoft Access® database.
Liu, Pan; Pell, Marc D
2012-12-01
To establish a valid database of vocal emotional stimuli in Mandarin Chinese, a set of Chinese pseudosentences (i.e., semantically meaningless sentences that resembled real Chinese) was produced by four native Mandarin speakers to express seven emotional meanings: anger, disgust, fear, sadness, happiness, pleasant surprise, and neutrality. These expressions were identified by a group of native Mandarin listeners in a seven-alternative forced choice task, and items reaching a recognition rate of at least three times chance performance in the seven-choice task (i.e., at least 3/7, or about 43% correct) were selected as a valid database and then subjected to acoustic analysis. The results demonstrated expected variations in both perceptual and acoustic patterns of the seven vocal emotions in Mandarin. For instance, fear, anger, sadness, and neutrality were associated with relatively high recognition, whereas happiness, disgust, and pleasant surprise were recognized less accurately. Acoustically, anger and pleasant surprise exhibited relatively high mean f0 values and large variation in f0 and amplitude; in contrast, sadness, disgust, fear, and neutrality exhibited relatively low mean f0 values and small amplitude variations, and happiness exhibited a moderate mean f0 value and f0 variation. Emotional expressions varied systematically in speech rate and harmonics-to-noise ratio values as well. This validated database is available to the research community and will contribute to future studies of emotional prosody for a number of purposes. To access the database, please contact pan.liu@mail.mcgill.ca.
Emilyn Sheffield; Leslie Furr; Charles Nelson
1992-01-01
Filevision IV is a multilayer imaging and database management system that combines drawing, filing and extensive report-writing capabilities (Filevision IV, 1988). Filevision IV users access data by attaching graphics to text-oriented database records. Tourist attractions, support services, and geographic features can be located on a base map of an area or region....
2011-09-01
rate Python’s maturity as “High.” Python is nine years old and has been continuously developed and enhanced since then. During fiscal year 2010... We rate Python’s developer toolkit availability/extensibility as “Yes.” Python runs on a SQL database and is compatible with Oracle database...
Cognitive Control and Language across the Life Span: Does Labeling Improve Reactive Control?
ERIC Educational Resources Information Center
Lucenet, Joanna; Blaye, Agnès; Chevalier, Nicolas; Kray, Jutta
2014-01-01
How does cognitive control change with age, and what are the processes underlying these changes? This question has been extensively studied using versions of the task-switching paradigm, which allow participants to actively prepare for the upcoming task (Kray, Eber, & Karbach, 2008). Little is known, however, about age-related changes in this…
Examining Lateralized Lexical Ambiguity Processing Using Dichotic and Cross-Modal Tasks
ERIC Educational Resources Information Center
Atchley, Ruth Ann; Grimshaw, Gina; Schuster, Jonathan; Gibson, Linzi
2011-01-01
The individual roles played by the cerebral hemispheres during the process of language comprehension have been extensively studied in tasks that require individuals to read text (for review see Jung-Beeman, 2005). However, it is not clear whether or not some aspects of the theorized laterality models of semantic comprehension are a result of the…
ERIC Educational Resources Information Center
Dymond, Simon; Bailey, Rebecca; Willner, Paul; Parry, Rhonwen
2010-01-01
Individuals with intellectual and developmental disabilities often have difficulties foregoing short-term loss for long-term gain. The Iowa Gambling Task (IGT) has been extensively adopted as a laboratory measure of this ability. In the present study, we undertook the first investigation with people with intellectual disabilities using a…
Does Time-on-Task Estimation Matter? Implications for the Validity of Learning Analytics Findings
ERIC Educational Resources Information Center
Kovanovic, Vitomir; Gaševic, Dragan; Dawson, Shane; Joksimovic, Srecko; Baker, Ryan S.; Hatala, Marek
2015-01-01
With widespread adoption of Learning Management Systems (LMS) and other learning technology, large amounts of data--commonly known as trace data--are readily accessible to researchers. Trace data has been extensively used to calculate time that students spend on different learning activities--typically referred to as time-on-task. These measures…
User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model
ERIC Educational Resources Information Center
Lee, Doo Young; Lehto, Mark R.
2013-01-01
The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given the extrinsic task goal of using it for procedural learning. Our conceptual framework included two…
ERIC Educational Resources Information Center
Inoue, Chihiro
2016-01-01
The constructs of complexity, accuracy and fluency (CAF) have been used extensively to investigate learner performance on second language tasks. However, a serious concern is that the variables used to measure these constructs are sometimes used conventionally without any empirical justification. It is crucial for researchers to understand how…
Written Corrective Feedback in IELTS Writing Task 2: Teachers' Priorities, Practices, and Beliefs
ERIC Educational Resources Information Center
Pearson, William S.
2018-01-01
Teacher corrective feedback is widely recognised as integral in supporting developing L2 writers. The potentially high pressure IELTS test preparation classroom presents a context where feedback has not yet been extensively studied. Consequently, teachers' approaches to corrective feedback on Writing Task 2, the essay component of IELTS Writing,…
ERIC Educational Resources Information Center
Hagen, Anastasia S.; And Others
This study, fourth in a series examining factors related to involvement in academic tasks, considers the ways in which cognitive, affective, and motivational variables associated with involvement change over various phases of completing an actual academic task (studying for a final examination). The phases of studying were: (1) just about to begin…
Metacognition of multitasking: How well do we predict the costs of divided attention?
Finley, Jason R; Benjamin, Aaron S; McCarley, Jason S
2014-06-01
Risky multitasking, such as texting while driving, may occur because people misestimate the costs of divided attention. In two experiments, participants performed a computerized visual-manual tracking task in which they attempted to keep a mouse cursor within a small target that moved erratically around a circular track. They then separately performed an auditory n-back task. After practicing both tasks separately, participants received feedback on their single-task tracking performance and predicted their dual-task tracking performance before finally performing the 2 tasks simultaneously. Most participants correctly predicted reductions in tracking performance under dual-task conditions, with a majority overestimating the costs of dual-tasking. However, the between-subjects correlation between predicted and actual performance decrements was near 0. This combination of results suggests that people do anticipate costs of multitasking, but have little metacognitive insight on the extent to which they are personally vulnerable to the risks of divided attention, relative to other people. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Del Casale, Antonio; Kotzalidis, Georgios D; Rapinesi, Chiara; Sorice, Serena; Girardi, Nicoletta; Ferracuti, Stefano; Girardi, Paolo
2016-01-01
The nature of the alteration of the response to cognitive tasks in first-episode psychosis (FEP) still awaits clarification. We used activation likelihood estimation, an increasingly used method in evaluating normal and pathological brain function, to identify activation changes in functional magnetic resonance imaging (fMRI) studies of FEP during attentional and memory tasks. We included 11 peer-reviewed fMRI studies assessing FEP patients versus healthy controls (HCs) during performance of attentional and memory tasks. Our database comprised 290 patients with FEP, matched with 316 HCs. Between-group analyses showed that HCs, compared to FEP patients, exhibited hyperactivation of the right middle frontal gyrus (Brodmann area [BA] 9), right inferior parietal lobule (BA 40), and right insula (BA 13) during attentional task performances and hyperactivation of the left insula (BA 13) during memory task performances. Right frontal, parietal, and insular dysfunction during attentional task performance and left insular dysfunction during memory task performance are significant neural functional FEP correlates. © 2016 S. Karger AG, Basel.
Investigation on the improvement and transfer of dual-task coordination skills.
Strobach, Tilo; Frensch, Peter A; Soutschek, Alexander; Schubert, Torsten
2012-11-01
Recent research has demonstrated that dual-task performance in situations with two simultaneously presented tasks can be substantially improved with extensive practice. This improvement was related to the acquisition of task coordination skills. Earlier studies provided evidence that these skills result from hybrid practice, including dual and single tasks, but not from single-task practice. It is an open question, however, whether task coordination skills are independent from the specific practice situation and are transferable to new situations or whether they are non-transferable and task-specific. The present study, therefore, tested skill transfer in (1) a dual-task situation with identical tasks in practice and transfer, (2) a dual-task situation with two tasks changed from practice to transfer, and (3) a task switching situation with two sequentially presented tasks. Our findings are largely consistent with the assumption that task coordination skills are non-transferable and task-specific. We cannot, however, definitively reject the assumption of transferable skills when measuring error rates in the dual-task situation with two changed tasks after practice. In the task switching situation, single-task and hybrid practice both led to a transfer effect on mixing costs.
Task Prioritization in Dual-Tasking: Instructions versus Preferences
Jansen, Reinier J.; van Egmond, René; de Ridder, Huib
2016-01-01
The role of task prioritization in performance tradeoffs during multi-tasking has received widespread attention. However, little is known on whether people have preferences regarding tasks, and if so, whether these preferences conflict with priority instructions. Three experiments were conducted with a high-speed driving game and an auditory memory task. In Experiment 1, participants did not receive priority instructions. Participants performed different sequences of single-task and dual-task conditions. Task performance was evaluated according to participants’ retrospective accounts on preferences. These preferences were reformulated as priority instructions in Experiments 2 and 3. The results showed that people differ in their preferences regarding task prioritization in an experimental setting, which can be overruled by priority instructions, but only after increased dual-task exposure. Additional measures of mental effort showed that performance tradeoffs had an impact on mental effort. The interpretation of these findings was used to explore an extension of Threaded Cognition Theory with Hockey’s Compensatory Control Model. PMID:27391779
Skewed task conflicts in teams: What happens when a few members see more conflict than the rest?
Sinha, Ruchi; Janardhanan, Niranjan S; Greer, Lindred L; Conlon, Donald E; Edwards, Jeffery R
2016-07-01
Task conflict has been the subject of a long-standing debate in the literature: when does task conflict help or hurt team performance? We propose that this debate can be resolved by taking a more precise view of how task conflicts are perceived in teams. Specifically, we propose that task conflict is most likely to live up to its purported benefits for team performance when a few team members perceive a high level of task disagreement while a majority of others perceive low levels of task disagreement, that is, when task conflict is positively skewed. In our first study of student teams engaged in a business decision game, we find support for the positive relationship between skewed task conflict and team performance. In our second field study of teams in a financial corporation, we find that the relationship between positively skewed task conflict and supervisor ratings of team performance is mediated by reflective communication within the team. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Lenz, Bernard N.
1997-01-01
An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. This database contains data dating back to 1984, including data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report examines the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.
An online database of nuclear electromagnetic moments
NASA Astrophysics Data System (ADS)
Mertzimekis, T. J.; Stamou, K.; Psaltis, A.
2016-01-01
Measurements of nuclear magnetic dipole and electric quadrupole moments are considered quite important for the understanding of nuclear structure both near and far from the valley of stability. The recent advent of radioactive beams has resulted in a plethora of new, continuously flowing, experimental data on nuclear structure - including nuclear moments - which complicates information management. A new, dedicated, public and user-friendly online database (http://magneticmoments.info) has been created, comprising experimental data on nuclear electromagnetic moments. The database supersedes existing printed compilations, includes non-evaluated data series and relevant metadata, and places strong emphasis on bimonthly updates. The scope, features and extensions of the database are reported.
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
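A toy sketch of the modified-FMEA prioritization step follows: each potential error is scored for occurrence, consequence, and (non-)detection and ranked by the product, as in a conventional risk priority number; the error names and scores below are invented, not the study's data.

```python
errors = [
    # (description, P(occurrence), severity, P(non-detection)) -- illustrative 1-10 scores
    ("needle over-insertion",       4, 9, 3),
    ("wrong insertion angle",       5, 6, 2),
    ("insufficient abdominal lift", 3, 7, 4),
]

# rank candidate errors by their risk priority number (RPN)
ranked = sorted(errors, key=lambda e: e[1] * e[2] * e[3], reverse=True)
for desc, occ, sev, det in ranked:
    print(f"{desc}: RPN={occ * sev * det}")
```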
Device Oriented Project Controller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalesio, Leo; Kraimer, Martin
2013-11-20
This proposal is directed at the issue of developing control systems for very large HEP projects. A de-facto standard in accelerator control is the Experimental Physics and Industrial Control System (EPICS), which has been applied successfully to many physics projects. EPICS is a channel based system that requires that each channel of each device be configured and controlled. In Phase I, the feasibility of a device oriented extension to the distributed channel database was demonstrated by prototyping a device-aware version of an EPICS I/O controller that functions with the current version of the channel access communication protocol. Extensions have been made to the grammar to define the database. Only a multi-stage position controller with limit switches was developed in the demonstration, but the grammar should support a full range of functional record types. In Phase II, a full set of record types will be developed to support all existing record types, a set of process control functions for closed loop control, and support for experimental beam line control. A tool to configure these records will be developed. A communication protocol will be developed or extensions will be made to Channel Access to support introspection of components of a device. Performance benchmarks will be made on both the communication protocol and the database. After these records and performance tests are under way, a second version of the grammar will be undertaken.
NASA Technical Reports Server (NTRS)
Vincent, R. K.; Thomas, G. S.; Nalepka, R. F.
1974-01-01
The importance of specific spectral regions to signature extension is explored. In the recent past, the signature extension task was focused on the development of new techniques. Tested techniques are now used to investigate this spectral aspect of the large area survey. Sets of channels were sought which, for a given technique, were the least affected by several sources of variation over four data sets and yet provided good object class separation on each individual data set. Using sets of channels determined as part of this study, signature extension was accomplished between data sets collected over a six-day period and over a range of about 400 kilometers.
The National Nonindigenous Aquatic Species Database
Neilson, Matthew E.; Fuller, Pamela L.
2012-01-01
The U.S. Geological Survey (USGS) Nonindigenous Aquatic Species (NAS) Program maintains a database that monitors, records, and analyzes sightings of nonindigenous aquatic plant and animal species throughout the United States. The program is based at the USGS Wetland and Aquatic Research Center in Gainesville, Florida. The initiative to maintain scientific information on nationwide occurrences of nonindigenous aquatic species began with the Aquatic Nuisance Species Task Force, created by Congress in 1990 to provide timely information to natural resource managers. Since then, the NAS database has been a clearinghouse of information for confirmed sightings of nonindigenous, also known as nonnative, aquatic species throughout the Nation. The database is used to produce email alerts, maps, summary graphs, publications, and other information products to support natural resource managers.
Critical Infrastructure: The National Asset Database
2006-09-14
...that, in its current form, it is being used inappropriately as the basis upon which federal resources, including infrastructure protection grants, are... National Asset Database has been used to support federal grant-making decisions, according to a DHS official, it does not drive those decisions. In July
The Creative Task Creator: a tool for the generation of customized, Web-based creativity tasks.
Pretz, Jean E; Link, John A
2008-11-01
This article presents a Web-based tool for the creation of divergent-thinking and open-ended creativity tasks. A Java program generates HTML forms with PHP scripting that run an Alternate Uses Task and/or open-ended response items. Researchers may specify their own instructions, objects, and time limits, or use default settings. Participants can also be prompted to select their best responses to the Alternate Uses Task (Silvia et al., 2008). Minimal programming knowledge is required. The program runs on any server, and responses are recorded in a standard MySQL database. Responses can be scored using the consensual assessment technique (Amabile, 1996) or Torrance's (1998) traditional scoring method. Adoption of this Web-based tool should facilitate creativity research across cultures and access to eminent creators. The Creative Task Creator may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
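A minimal sketch of the kind of response table such a tool might write is shown below; sqlite3 stands in for MySQL here, and the schema and column names are hypothetical rather than the Creative Task Creator's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE responses (
        participant_id TEXT, item TEXT, response TEXT,
        is_best INTEGER DEFAULT 0   -- participant-selected best response
    )
""")
# one Alternate Uses Task response for a hypothetical object prompt
db.execute("INSERT INTO responses VALUES (?, ?, ?, ?)",
           ("P01", "brick", "use as a paperweight", 1))
for row in db.execute("SELECT * FROM responses"):
    print(row)
```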
Connecting Our Nation’s Crisis Information Management Systems
2008-12-01
Voice Alert is a communications solution that uses a combination of database and GIS mapping technologies to deliver outbound notifications. EOC...needing to be accessed through an extension is necessary. With many businesses, hotels, and other locations requiring an extension to reach...built around five major management activities of an incident: Command, Operations, Planning, Logistics, and Finance/Administration. The new
Models, Tools, and Databases for Land and Waste Management Research
These publicly available resources can be used for such tasks as simulating biodegradation or remediation of contaminants such as hydrocarbons, measuring sediment accumulation at Superfund sites, or assessing toxicity and risk.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
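TDL's own syntax is not reproduced here; as a generic, language-neutral illustration of the hierarchical decomposition and exception handling it supports, consider this Python sketch, in which all task names are hypothetical.

```python
class TaskFailed(Exception):
    pass

def unlock_arm():   print("arm unlocked")
def extend_arm():   print("arm extended")
def power_sensor(): print("sensor powered")
def retract_arm():  print("arm retracted")   # recovery action

def deploy_instrument():
    """A goal task decomposes into ordered subtasks."""
    for subtask in (unlock_arm, extend_arm, power_sensor):
        try:
            subtask()
        except TaskFailed:
            retract_arm()    # exception handling: recover, then propagate
            raise

deploy_instrument()
```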
CAMEO Chemicals is an extensive chemical database, available for download, with critical response information for thousands of chemicals, and a tool that tells you what reactions might occur if chemicals were mixed together.
Summaries of Minnehaha Creek Watershed District Plans/Studies/Reports
2004-01-30
+ Management of all wetland functional assessment data in a Microsoft Access© database + Development of a GIS wetland data management system + Recommendations... General Task B: Design GIS-Based Decision Making Model (Scenario-Based Model of Landuse/Hydro Data, $125,000); Task C: Water Quality Monitoring... + Landuse and land cover data + Watershed GIS data layers + Flood Insurance Rate Maps + Proposed project locations + Stream miles, reaches, and conditions
Geologic and aeromagnetic maps of the Fossil Ridge area and vicinity, Gunnison County, Colorado
DeWitt, Ed; Zech, R.S.; Chase, C.G.; Zartman, R.E.; Kucks, R.P.; Bartelson, Bruce; Rosenlund, G.C.; Earley, Drummond
2002-01-01
This data set includes a GIS geologic map database of an Early Proterozoic metavolcanic and metasedimentary terrane extensively intruded by Early and Middle Proterozoic granitic plutons. Laramide to Tertiary deformation and intrusion of felsic plutons have created numerous small mineral deposits that are described in the tables and are shown on the figures in the accompanying text pamphlet. Also included in the pamphlet are numerous chemical analyses of igneous and meta-igneous bodies of all ages in tables and in summary geochemical diagrams. The text pamphlet also contains a detailed description of map units and discussions of the aeromagnetic survey, igneous and metamorphic rocks, and mineral deposits. The printed map sheet and browse graphic pdf file include the aeromagnetic map of the study area, as well as figures and photographs. Purpose: This GIS geologic map database is provided to facilitate the presentation and analysis of earth-science data for this region of Colorado. This digital map database may be displayed at any scale or projection. However, the geologic data in this coverage are not intended for use at a scale other than 1:30,000. Supplemental useful data accompanying the database are extensive geochemical and mineral deposits data, as well as an aeromagnetic map.
NASA Technical Reports Server (NTRS)
Shafer, Jaclyn; Watson, Leela R.
2015-01-01
NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting that the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allow the AMU more control over and access to the model output archive. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.
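As an illustration of the kind of point verification statistics involved (bias and root-mean-square error of forecasts against observations), a generic sketch follows; the numbers are invented and this is not the AMU's actual verification code.

```python
import numpy as np

forecast = np.array([28.1, 29.4, 30.2, 27.8])   # model 2-m temperature (C)
observed = np.array([27.5, 29.9, 31.0, 27.2])   # station observations (C)

error = forecast - observed
bias = error.mean()                             # systematic over/under-forecast
rmse = np.sqrt((error ** 2).mean())             # overall error magnitude
print(f"bias={bias:.2f} C  rmse={rmse:.2f} C")
```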
ERIC Educational Resources Information Center
Ben-David, Boaz M.; Icht, Michal
2017-01-01
Background: Oral-diadochokinesis (oral-DDK) tasks are extensively used in the evaluation of motor speech abilities. Currently, validated normative data for older adults (aged 65 years and older) are missing in Hebrew. The effect of task stimuli (non-word versus real-word repetition) is also unclear in the population of older adult Hebrew…
Exploring Natural Pedagogy in Play with Preschoolers: Cues Parents Use and Relations among Them
ERIC Educational Resources Information Center
Sage, Kara; Baldwin, Dare
2012-01-01
Recent developmental work demonstrates a range of effects of pedagogical cues on childhood learning. The present work investigates natural pedagogy in informal parent-child play. Preschool-aged children participated in free play and a toy task with a parent in addition to a toy task with an experimenter. Sessions were extensively coded for use of…
ERIC Educational Resources Information Center
Oxman, Victor; Stupel, Moshe
2018-01-01
A geometrical task is presented with multiple solutions using different methods, in order to show the connection between various branches of mathematics and to highlight the importance of providing the students with an extensive 'mathematical toolbox'. Investigation of the property that appears in the task was carried out using a computerized tool.
Increasing On-Task Behavior in the Classroom: Extension of Self-Monitoring Strategies
ERIC Educational Resources Information Center
Amato-Zech, Natalie A.; Hoff, Kathryn E.; Doepke, Karla J.
2006-01-01
We examined the effectiveness of a tactile self-monitoring prompt to increase on-task behaviors among 3 elementary-aged students in a special education classroom. Students were taught to self-monitor their attention by using the MotivAider (MotivAider, 2000), an electronic beeper that vibrates to provide a tactile cue to self-monitor. An ABAB…
ERIC Educational Resources Information Center
Bridgman, Anne
2017-01-01
Parenting is one of the most emotionally powerful, demanding, and consequential tasks of adulthood. Previously, the task of parenting was shared with extended family and community members. Today, with less extensive networks of experience and support, parents are frequently not as well prepared. Research has identified the elements of competent…
ERIC Educational Resources Information Center
Quinlan, Philip T.; van der Maas, Han L. J.; Jansen, Brenda R. J.; Booij, Olaf; Rendell, Mark
2007-01-01
The present paper re-appraises connectionist attempts to explain how human cognitive development appears to progress through a series of sequential stages. Models of performance on the Piagetian balance scale task are the focus of attention. Limitations of these models are discussed and replications and extensions to the work are provided via the…
ERIC Educational Resources Information Center
Dunbar, Mary Elizabeth
This research was conducted to determine the relationship between New York State Cooperative Extension 4-H Division Leaders' propensity toward delegation of work responsibility and (1) their degree of involvement in the performance of leader identification and selection tasks, (2) assignment of major responsibility for these tasks, and (3) other selected…
Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze
2013-04-01
Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The (Null) Effect of Affective Touch on Betrayal Aversion, Altruism, and Risk Taking
Koppel, Lina; Andersson, David; Morrison, India; Västfjäll, Daniel; Tinghög, Gustav
2017-01-01
Pleasant touch is thought to increase the release of oxytocin. Oxytocin, in turn, has been extensively studied with regard to its effects on trust and prosocial behavior, but results remain inconsistent. The purpose of this study was to investigate the effect of touch on economic decision making. Participants (n = 120) were stroked on their left arm using a soft brush (touch condition) or not at all (control condition; varied within subjects), while they performed a series of decision tasks assessing betrayal aversion (the Betrayal Aversion Elicitation Task), altruism (donating money to a charitable organization), and risk taking (the Balloon Analog Risk Task). We found no significant effect of touch on any of the outcome measures, neither within nor between subjects. Furthermore, effects were not moderated by gender or attachment. However, attachment avoidance had a significant effect on altruism in that those who were high in avoidance donated less money. Our findings contribute to the understanding of affective touch—and, by extension, oxytocin—in social behavior and decision making by showing that touch does not directly influence performance in tasks involving risk and prosocial decisions. Specifically, our work casts further doubt on the validity of oxytocin research in humans. PMID:29311867
Indexing of Patents of Pharmaceutical Composition in Online Databases
NASA Astrophysics Data System (ADS)
Online searching of patents of pharmaceutical composition is generally considered to be very difficult. This is because patent databases include extensive technical as well as legal information, so they are unlikely to have an index specific to pharmaceutical composition; even when such an index exists, the scope and coverage of indexing are ambiguous. This paper discusses how patents of pharmaceutical composition are indexed in online databases such as WPI, CA, CLAIMS, USP and PATOLIS. Online searching of patents of pharmaceutical composition is also discussed in some detail.
Penningroth, Suzanna L; Scott, Walter D; Freuen, Margaret
2011-03-01
Few studies have addressed social motivation in prospective memory (PM). In a pilot study and two main studies, we examined whether social PM tasks possess a motivational advantage over nonsocial PM tasks. In the pilot study and Study 1, participants listed their real-life important and less important PM tasks. Independent raters categorized the PM tasks as social or nonsocial. Results from both studies showed a higher proportion of tasks rated as social when important tasks were requested than when less important tasks were requested. In Study 1, participants also reported whether they had remembered to perform each PM task. Reported performance rates were higher for tasks rated as social than for those rated as nonsocial. Finally, in Study 2, participants rated the importance of two hypothetical PM tasks, one social and one nonsocial. The social PM task was rated higher in importance. Overall, these findings suggest that social PM tasks are viewed as more important than nonsocial PM tasks and they are more likely to be performed. We propose that consideration of the social relevance of PM will lead to a more complete and ecologically valid theoretical description of PM performance. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
Ferris, Lauren A; Denney, Linda M; Maletsky, Lorin P
2013-02-01
Functional activities in daily life can require squatting and shifting body weight during transverse plane rotations. Stability of the knee can be challenging for people with a total knee replacement (TKR) due to reduced proprioception, nonconforming articular geometry, reduced muscle strength, and soft tissue weakness. The objective of this study was to identify strategies utilized by individuals with TKR in double-stance transferring load during rotation and flexion. Twenty-three subjects were recruited for this study: 11 TKR subjects (age: 65 ± 6 years; BMI 27.4 ± 4.1) and 12 healthy subjects (age: 63 ± 7; BMI 24.6 ± 3.8). Each subject completed a novel crossover button push task where rotation, flexion, and extension of the knee were utilized. Each subject performed two crossover reaching tasks where the subject used the opposite hand to cross over their body and press a button next to either their shoulder (high) or knee (low), then switched hands and rotated to press the opposite button, either low or high. The two tasks related to the order they pressed the buttons while crossing over, either low-to-high (L2H) or high-to-low (H2L). Force platforms measured ground reaction forces under each foot, which were then converted to lead force ratios (LFRs) based on the total force. Knee flexion angles were also measured. No statistical differences were found in the LFRs during the H2L and L2H tasks for the different groups, although differences in the variation of the loading within subjects were noted. A significant difference was found between healthy and unaffected knee angles and a strong trend between healthy and affected subjects' knee angles in both H2L and L2H tasks. Large variations in the LFR at mid-task in the TKR subjects suggested possible difficulties in maintaining positional stability during these tasks. The TKR subjects maintained more of an extended knee, consistent with the quadriceps-avoidance strategy seen by other researchers in different tasks. These outcomes suggest that individuals with a TKR utilize strategies, such as keeping an extended knee, to achieve rotary tasks during knee flexion and extension. Repeated compensatory movements could result in forces that may cause difficulty over time in the hip joints or low back. Early identification of these strategies could improve TKR success and the return to activities of daily living that involve flexion and rotation.
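The abstract does not state the LFR formula; a natural reading of "lead force ratios based on the total force" is the lead-limb share of the total vertical load, sketched here as an assumption:

```latex
% Assumed definition (not stated in the abstract): lead-limb share
% of the total vertical ground reaction force.
\mathrm{LFR} = \frac{F_{\mathrm{lead}}}{F_{\mathrm{lead}} + F_{\mathrm{trail}}}
```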
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Thad Cochran National Warmwater Aquaculture Center
2012 U.S. Catfish Database, Delta Research and Extension Center, Thad Cochran National Warmwater Aquaculture Center.
Douzery, Emmanuel J P; Scornavacca, Celine; Romiguier, Jonathan; Belkhir, Khalid; Galtier, Nicolas; Delsuc, Frédéric; Ranwez, Vincent
2014-07-01
Comparative genomic studies extensively rely on alignments of orthologous sequences. Yet, selecting, gathering, and aligning orthologous exons and protein-coding sequences (CDS) that are relevant for a given evolutionary analysis can be a difficult and time-consuming task. In this context, we developed OrthoMaM, a database of ORTHOlogous MAmmalian Markers describing the evolutionary dynamics of orthologous genes in mammalian genomes using a phylogenetic framework. Since its first release in 2007, OrthoMaM has regularly evolved, not only to include newly available genomes but also to incorporate up-to-date software in its analytic pipeline. This eighth release integrates the 40 complete mammalian genomes available in Ensembl v73 and provides alignments, phylogenies, evolutionary descriptor information, and functional annotations for 13,404 single-copy orthologous CDS and 6,953 long exons. The graphical interface allows users to easily explore OrthoMaM and identify markers with specific characteristics (e.g., taxa availability, alignment size, %G+C, evolutionary rate, chromosome location). It hence provides an efficient solution to sample preprocessed markers adapted to user-specific needs. OrthoMaM has proven to be a valuable resource for researchers interested in mammalian phylogenomics and evolutionary genomics, and has served as a source of benchmark empirical data sets in several methodological studies. OrthoMaM is available for browsing, query and complete or filtered downloads at http://www.orthomam.univ-montp2.fr/. © The Author 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs.
Ma, Kede; Liu, Wentao; Liu, Tongliang; Wang, Zhou; Tao, Dacheng
2017-05-26
Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models, which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (whose dimension is the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIP) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved, as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL Inferred Quality (dilIQ) index achieves an additional performance gain.
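As a hedged illustration of the pairwise RankNet objective used to learn from DIPs, the sketch below assumes a linear scorer on precomputed image features; the feature values are invented, and the per-pair uncertainty level enters as a soft target probability p_target.

    import numpy as np

    def ranknet_loss(s_better, s_worse, p_target=1.0):
        # Cross-entropy between the target preference and sigmoid(s_i - s_j).
        p = 1.0 / (1.0 + np.exp(-(s_better - s_worse)))
        return -(p_target * np.log(p) + (1.0 - p_target) * np.log(1.0 - p))

    # One gradient step on a single DIP with a linear scorer s = w . x.
    w = np.zeros(4)
    x_better = np.array([0.9, 0.1, 0.3, 0.5])
    x_worse = np.array([0.2, 0.8, 0.3, 0.1])
    p = 1.0 / (1.0 + np.exp(-(w @ x_better - w @ x_worse)))
    grad = (p - 1.0) * (x_better - x_worse)  # d(loss)/dw for p_target = 1
    w -= 0.1 * grad  # the better-quality image now scores higher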
Polyamines in foods: development of a food database
Ali, Mohamed Atiya; Poortvliet, Eric; Strömberg, Roger; Yngve, Agneta
2011-01-01
Background Knowing the levels of polyamines (putrescine, spermidine, and spermine) in different foods is of interest due to the association of these bioactive nutrients with health and disease. There is a lack of relevant information on their contents in foods. Objective To develop a food polyamine database from published data by which polyamine intake and food contribution to this intake can be estimated, and to determine the levels of polyamines in Swedish dairy products. Design Extensive literature search and laboratory analysis of selected Swedish dairy products. Polyamine contents in foods were collected using an extensive literature search of databases. Polyamines in different types of Swedish dairy products (milk with different fat percentages, yogurt, cheeses, and sour milk) were determined using high performance liquid chromatography (HPLC) equipped with a UV detector. Results Fruits and cheese were the richest sources of putrescine, while vegetables and meat products were found to be rich in spermidine and spermine, respectively. The content of polyamines in cheese varied considerably between studies. Among the analyzed Swedish dairy products, matured cheese had the highest total polyamine contents, with values of 52.3, 1.2, and 2.6 mg/kg for putrescine, spermidine, and spermine, respectively. Low-fat milk had higher putrescine and spermidine contents (1.2 and 1.0 mg/kg, respectively) than the other types of milk. Conclusions The database aids other researchers in their quest for information regarding polyamine intake from foods. Connecting the polyamine contents in food with the Swedish Food Database allows for estimation of polyamine contents per portion. PMID:21249159
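Connecting content values to portions is simple arithmetic; the sketch below uses the matured-cheese values reported above with an assumed 30 g portion (the portion size is an assumption, not a value from the study).

    # Estimate polyamine intake per portion from content values in mg/kg.
    content_mg_per_kg = {"putrescine": 52.3, "spermidine": 1.2, "spermine": 2.6}
    portion_g = 30.0  # assumed portion of matured cheese
    intake_mg = {k: v * portion_g / 1000.0 for k, v in content_mg_per_kg.items()}
    print(intake_mg)  # {'putrescine': 1.569, 'spermidine': 0.036, 'spermine': 0.078}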
A cross-cultural comparison of children's imitative flexibility.
Clegg, Jennifer M; Legare, Cristine H
2016-09-01
Recent research with Western populations has demonstrated that children use imitation flexibly to engage in both instrumental and conventional learning. Evidence for children's imitative flexibility in non-Western populations is limited, however, and has only assessed imitation of instrumental tasks. This study (N = 142, 6- to 8-year-olds) demonstrates both cultural continuity and cultural variation in imitative flexibility. Children engaged in higher imitative fidelity for conventional tasks than for instrumental tasks in both an industrialized, Western culture (United States) and a subsistence-based, non-Western culture (Vanuatu). Children in Vanuatu engaged in higher imitative fidelity on instrumental tasks than children in the United States, a potential consequence of cultural variation in child socialization for conformity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
BioCreative V CDR task corpus: a resource for chemical disease relation extraction.
Li, Jiao; Sun, Yueping; Johnson, Robin J; Sciaky, Daniela; Wei, Chih-Hsuan; Leaman, Robert; Davis, Allan Peter; Mattingly, Carolyn J; Wiegers, Thomas C; Lu, Zhiyong
2016-01-01
Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus, called BC5CDR, during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators, followed by a consensus annotation: the average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for diseases and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.
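Since the corpus reports IAA as a Jaccard similarity coefficient, a minimal sketch of that computation over two annotators' (span, concept) sets may be useful; the spans and MeSH identifiers below are purely illustrative.

    # IAA as Jaccard similarity over sets of (character-span, concept-ID) pairs.
    def jaccard_iaa(annotations_a, annotations_b):
        a, b = set(annotations_a), set(annotations_b)
        return len(a & b) / len(a | b) if (a | b) else 1.0

    ann1 = {((12, 21), "D003920"), ((40, 48), "D006973")}
    ann2 = {((12, 21), "D003920"), ((60, 66), "D009369")}
    print(round(jaccard_iaa(ann1, ann2), 2))  # 0.33 (1 shared of 3 distinct)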
Investigations into mirror fabrication metrology analysis
NASA Technical Reports Server (NTRS)
Dimmock, John O.
1994-01-01
This final report describes the work performed under this delivery order from June 1993 through August 1994. The scope of work included three distinct tasks in support of the AXAF-I program. The objective of the first task was to investigate the grinding and polishing characteristics of the Zerodur material by fabricating several samples. The second task was to continue the development of the integrated optical performance modeling software for AXAF-I. The purpose of the third and final task was to develop and update the database of AXAF technical documents for easy and rapid access. The MSFC optical and metrology shops were relocated from the B-wing of Building 4487 to Room BC 144 of Building 4466 at the beginning of this contract. This included dismantling, packing, and moving the equipment from its old location, and then reassembling it at the new location. A total of 65 Zerodur samples measuring 1 inch x 2 inches x 6 inches were ground and polished to a surface figure of lambda/10 p-v and a surface finish of 5 Å rms for coating tests. A number of special-purpose tools and metal mirrors were also fabricated to support various AXAF-I development activities. In the metrology area, the ZYGO Mark 4 interferometer was relocated and upgraded with a faster and more powerful processor. Surface metrology work was also performed on the coating samples and other optics using the ZYGO interferometer and WYKO profilometer. A number of new features have been added to the GRAZTRACE program to enhance its analysis and modeling capabilities. New commands have been added to the command-mode GRAZTRACE program to give the user better control over program execution and data manipulation. Some commands and parameter entries have been reorganized into a uniform format. The command-mode version of the convolution program CONVOLVE has been developed. An on-line help system and a user's manual have also been developed for the benefit of the users. The database of AXAF technical documents continues to progress. The titles, company names, dates, and locations of over 390 documents have been entered in this database. The database provides both a search-and-retrieval function and a data-adding function, which allow a user to quickly search the data files for documents or add new information. A detailed user's guide has also been prepared; it includes a document classification guide, a list of abbreviations, and a list of acronyms used in compiling this database of AXAF-I technical documents.
PathwayAccess: CellDesigner plugins for pathway databases.
Van Hemert, John L; Dickerson, Julie A
2010-09-15
CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.
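As a schematic of the plugin pattern described above, the sketch below defines a connector interface in Python; the real PathwayAccess plugins are Java classes built on each database's own API, so every name here is illustrative rather than part of the actual suite.

    from abc import ABC, abstractmethod

    class PathwayConnector(ABC):
        # Each concrete plugin (e.g., a Reactome- or BioCyc-backed connector)
        # would implement these operations against its database's own API.
        @abstractmethod
        def download_pathway(self, pathway_id):
            """Fetch a pathway and return species/reaction records for the editor."""

        @abstractmethod
        def upload_pathway(self, model):
            """Write a curated model back to the source database."""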
System Engineering for the NNSA Knowledge Base
NASA Astrophysics Data System (ADS)
Young, C.; Ballard, S.; Hipp, J.
2006-05-01
To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher-level products (e.g., 2D and 3D travel-time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and its effectiveness is ultimately determined by the quality of the data content, but access to that content is completely controlled by the information system in which the content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough, and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks and performed the rest manually. The ever-growing volume of KB data to be merged, as well as the need for more control over the merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several applications for specific tasks (e.g., an event/origin merger and a waveform merger). Our package now includes applications for nearly all of the KB merging tasks, but development continues, with an emphasis on improving user interfaces by adding GUIs and on increasing performance. Not all types of data products are well suited to storage in and access from a relational database, because of their basic underlying structure as well as the performance requirements for their use. In some cases such products already have a standard format and corresponding software library in the existing monitoring system, but for others this is not the case. For some of the latter, SNL has developed a large and complex C++ library for storing and accessing a wide variety of interpolatable geophysical data. The library was first developed to support kriging of empirical data to provide value corrections and uncertainty estimates for underlying base models. Operational performance constraints led to the addition of an optimal tessellation capability to deliver kriging results using the much faster natural-neighbor interpolation method. To provide better predictions in areas without well-recorded seismicity, the library was further enhanced to support complex polygon-delimited regional models as a base model for a single station/phase. Our latest work has focused on improving system performance and flexibility by running the library as a server application on a dedicated multi-processor server.
This work was supported by the United States Department of Energy under Contract DE-AC04-94AL85000. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy.
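A toy sketch of the duplicate-removal step described above may make the merging idea concrete; the record fields (event_id, origin_id, lat, lon) are hypothetical, not the actual KB schema, and a real merger must reconcile conflicting rows rather than simply keep the first copy.

    # Merge per-source record sets, dropping redundant rows by natural key.
    def merge_sources(*sources):
        merged = {}
        for records in sources:
            for rec in records:
                key = (rec["event_id"], rec["origin_id"])
                merged.setdefault(key, rec)  # keep first copy, drop duplicates
        return list(merged.values())

    lab_a = [{"event_id": 1, "origin_id": 10, "lat": 37.1, "lon": -116.0}]
    lab_b = [{"event_id": 1, "origin_id": 10, "lat": 37.1, "lon": -116.0},
             {"event_id": 2, "origin_id": 11, "lat": 42.5, "lon": 74.6}]
    print(len(merge_sources(lab_a, lab_b)))  # 2 (the shared row is stored once)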
Modular space station phase B extension period executive summary
NASA Technical Reports Server (NTRS)
Tischler, A. A.; Gould, C. L.
1972-01-01
A narrative summary is presented of the technical, programmatic, and planning information developed during the space station definition study extension period. The modular space station is emphasized, but tasks pertaining to shuttle sortie missions and information management advanced development are included. A series of program options considering technical, schedule, and programmatic alternatives to the baseline program is defined and evaluated.
SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps
NASA Astrophysics Data System (ADS)
Xu, Xiwei; Zhang, Changhai
2013-12-01
Some current sequential pattern mining algorithms generate too many candidate sequences, which increases the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Pattern Mining based on Bitmap Representation) to solve the problem of mining sequential patterns in large databases. Our method differs from previous work on mining sequential patterns: the database of sequential patterns is represented by bitmaps, and a simplified bitmap structure is presented first. The algorithm generates candidate sequences by SE (Sequence Extension) and IE (Item Extension), and then obtains all frequent sequences by comparing the original bitmap with the extended-item bitmap. This method simplifies the problem of mining sequential patterns and avoids the high processing cost of support counting. Both theory and experiments indicate that SPMBR performs well on large transaction databases, that much less memory is required for storing temporal data during mining, and that all sequential patterns can be mined feasibly.
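The following minimal sketch shows the core bitmap trick: with one bit per transaction, support counting reduces to AND-ing bitmaps and counting set bits. It deliberately simplifies SPMBR's SE/IE extensions, which additionally track positions within sequences.

    # Each item maps to a bitmap with bit i set iff the item occurs in txn i.
    bitmaps = {
        "a": 0b10110,  # occurs in transactions 1, 2, 4
        "b": 0b10101,  # occurs in transactions 0, 2, 4
        "c": 0b00110,
    }

    def support(items, n_txns=5):
        bm = ~0
        for it in items:
            bm &= bitmaps[it]          # co-occurrence is a bitwise AND
        return bin(bm & ((1 << n_txns) - 1)).count("1")

    print(support(["a", "b"]))  # 2 -> both items occur in transactions 2 and 4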
Assessment program for Kentucky traffic records.
DOT National Transportation Integrated Search
2015-02-01
During 2013, the Kentucky Transportation Center identified 117 potential performance metrics for the ten databases in the Kentucky Traffic Records System. This report summarizes the findings of three main tasks completed in 2014: (1) assessment o...
GIS for evaluating socioeconomic data of small communities in Oklahoma
DOT National Transportation Integrated Search
2001-01-01
This document summarizes the overall tasks, functionality, and limitations related to the delivered product of this project. Specifically, we present a Geographic Information Systems (GIS)-based database and analysis package for evaluating socioecono...
Updating road databases from shape-files using aerial images
NASA Astrophysics Data System (ADS)
Häufel, Gisela; Bulatov, Dimitri; Pohl, Melanie
2015-10-01
Road databases are an important part of geo data infrastructure. Knowledge about their characteristics and course is essential for urban planning, navigation, and evacuation tasks. Starting from OpenStreetMap (OSM) shape-file data for street networks, we introduce an algorithm that enriches these road maps with new maps derived from other airborne sensor technology; in our case, the results of our context-based urban terrain reconstruction process. We aim to enhance the use of road databases by computing additional junctions, narrow passages, and other items that may emerge due to changes in the terrain. This is relevant for various military and civil applications.
Annual Progress Report for July 1, 1980 through June 30, 1981,
1981-08-01
Table-of-contents and bibliography fragments: 14.4 Directory of Computer-Readable Bibliographic Databases; 14.5 University of Illinois Online Search Service; "Measures of Human Performance in Fault Diagnosis Tasks," M.S.I.E. Thesis (July 1981); 13.35 J. R. Morehead, "Models of Human Behavior in Online Searching" (1981, to appear); Journal Articles: 14.7 A. E. Williams, "Databases and Online Statistics for 1979," Bul. Amer. Soc. for Information Science 7(2).
Astronomical Archive at Tartu Observatory
NASA Astrophysics Data System (ADS)
Annuk, K.
2007-10-01
Archiving astronomical data is an important task not only at large observatories but also at small ones. Here we describe the astronomical archive at Tartu Observatory. The archive consists of old photographic plate images, photographic spectrograms, CCD direct images, and CCD spectroscopic data. The photographic plate digitizing project was started in 2005. An on-line database (based on MySQL) was created; it includes CCD data as well as photographic data. A PHP-MySQL interface was written for access to all data.
Reiner, Bruce
2015-06-01
One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care, specific both to the context of the task being performed and to the needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements impedes historical imaging data review. Clinical data retrieval is even more problematic due to the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to addressing these data deficiencies is to create a multi-disciplinary patient referenceable database consisting of high-priority, actionable data within the cumulative patient healthcare record, in which predefined criteria are used to categorize and classify imaging and clinical data in accordance with anatomy, technology, pathology, and time. This referenceable database can be populated through a combination of manual and automated methods, with an additional data verification step introduced for quality control. Once created, these referenceable databases can be filtered at the point of care to provide context- and user-specific data tailored to the task being performed and individual end-user requirements.
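A hedged sketch of point-of-care filtering along the four classification axes named above (anatomy, technology, pathology, time); all record fields and values here are hypothetical, not a real schema.

    from datetime import date

    records = [
        {"anatomy": "chest", "technology": "CT", "pathology": "nodule",
         "date": date(2014, 3, 2), "ref": "CT chest 2014-03-02"},
        {"anatomy": "head", "technology": "MRI", "pathology": "infarct",
         "date": date(2013, 7, 9), "ref": "MRI brain 2013-07-09"},
    ]

    def point_of_care_filter(records, anatomy=None, technology=None,
                             pathology=None, since=None):
        # Yield references matching every criterion the caller supplies.
        for r in records:
            if anatomy and r["anatomy"] != anatomy:
                continue
            if technology and r["technology"] != technology:
                continue
            if pathology and r["pathology"] != pathology:
                continue
            if since and r["date"] < since:
                continue
            yield r["ref"]

    print(list(point_of_care_filter(records, anatomy="chest")))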
Jekova, Irena; Bortolan, Giovanni
2015-01-01
Traditional means for identity validation (PIN codes, passwords) and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on correlation of present-to-previous limb ECG leads: I (r I), II (r II), the first principal ECG component calculated from them (r PCA), and linear and nonlinear combinations of r I, r II, and r PCA. For the verification task, a one-to-one scenario is applied and threshold values for r I, r II, and r PCA and their combinations are derived. The identification task assumes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) was considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions was taken into account. The results on the ECG-ILSA database were satisfactory for healthy people, with no significant decrease for non-healthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%. PMID:26568954
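A small sketch of the correlation-based matching described above, with NumPy's Pearson correlation standing in for the paper's lead-specific correlations; the threshold value is illustrative, not one of the derived thresholds.

    import numpy as np

    def corr(x, y):
        # Pearson correlation between a present and a previously stored lead.
        return float(np.corrcoef(x, y)[0, 1])

    def verify(lead_now, lead_prev, threshold=0.95):
        # One-to-one scenario: accept the claimed identity above a threshold.
        return corr(lead_now, lead_prev) >= threshold

    def identify(lead_now, database):
        # One-to-many scenario: database maps subject_id -> stored lead;
        # return the subject with maximal correlation.
        return max(database, key=lambda sid: corr(lead_now, database[sid]))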
Redundancy checking algorithms based on parallel novel extension rule
NASA Astrophysics Data System (ADS)
Liu, Lei; Yang, Yang; Li, Guangli; Wang, Qi; Lü, Shuai
2017-05-01
Redundancy checking (RC) is a key knowledge reduction technology. The extension rule (ER) is a reasoning method first presented in 2003; the novel extension rule (NER), presented in 2009, is an improved ER-based reasoning method. In this paper, we first analyse the characteristics of the extension rule and then present a simple algorithm for redundancy checking based on the extension rule (RCER). In addition, we introduce MIMF, a type of heuristic strategy. Using this rule and strategy, we design and implement the RCHER algorithm, which relies on MIMF. Next we design and implement RCNER (redundancy checking based on NER). Parallel computing greatly accelerates the NER algorithm, which has weak dependence among tasks when executed. Considering this, we present PNER (parallel NER) and apply it to redundancy checking and necessity checking. Furthermore, we design and implement the RCPNER (redundancy checking based on PNER) and NCPPNER (necessary clause partition based on PNER) algorithms. The experimental results show that MIMF significantly accelerates the RCER algorithm on large-scale, highly redundant formulae. Comparing PNER with NER and RCPNER with RCNER, the average speedup can reach the number of task decompositions. Comparing NCPPNER with the RCNER-based algorithm for separating redundant formulae, the speedup increases steadily as the scale of the formulae grows. Finally, we describe the challenges the extension rule will face and suggest possible solutions.
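For readers unfamiliar with the underlying notion, the brute-force sketch below implements the textbook redundancy test: a clause C is redundant in a CNF formula F iff F \ {C} entails C, i.e. (F \ {C}) with C negated is unsatisfiable. ER/NER/PNER perform this reasoning far more efficiently; this is only a semantic reference point.

    from itertools import product

    def satisfiable(clauses, variables):
        # Truth-table check; clauses are lists of (variable, polarity) literals.
        for bits in product([False, True], repeat=len(variables)):
            assign = dict(zip(variables, bits))
            if all(any(assign[v] == pos for v, pos in cl) for cl in clauses):
                return True
        return False

    def is_redundant(formula, clause):
        rest = [c for c in formula if c != clause]
        negated = [[(v, not pos)] for v, pos in clause]  # not-C as unit clauses
        variables = sorted({v for c in formula for v, _ in c})
        return not satisfiable(rest + negated, variables)

    # F = {a or b, a, b}; the clause (a or b) is entailed by the unit clause a.
    F = [[("a", True), ("b", True)], [("a", True)], [("b", True)]]
    print(is_redundant(F, F[0]))  # True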
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
Object-relational database management systems are an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
Object-relational database management systems are an integrated, hybrid, cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
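A minimal, assumption-level sketch of keyword search spanning both context and content: flatten a hierarchical document into (path, text) rows, then match a keyword against either field. This illustrates the idea only and is not the NETMARK/Oracle 8i implementation.

    import sqlite3
    import xml.etree.ElementTree as ET

    doc = "<report><title>Wind tunnel test</title><body>Mach 0.8 run</body></report>"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE nodes (context TEXT, content TEXT)")

    def flatten(elem, path=""):
        # Store each node's hierarchical path (context) and text (content).
        path = f"{path}/{elem.tag}"
        if elem.text and elem.text.strip():
            conn.execute("INSERT INTO nodes VALUES (?, ?)", (path, elem.text.strip()))
        for child in elem:
            flatten(child, path)

    flatten(ET.fromstring(doc))
    rows = conn.execute(
        "SELECT context, content FROM nodes "
        "WHERE context LIKE ? OR content LIKE ?", ("%title%", "%title%")).fetchall()
    print(rows)  # [('/report/title', 'Wind tunnel test')] -- a context hit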
FCDD: A Database for Fruit Crops Diseases.
Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal
2014-01-01
Fruit Crops Diseases Database (FCDD) requires a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles details on 162 fruit crop diseases, including disease type, causal organism, images, symptoms, and control measures. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images, and their 20 possible sequences. This information has been manually extracted and verified from numerous sources, including other electronic databases, textbooks, and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the most common bioresources: fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML, and Java. FCDD is freely available at http://www.fruitcropsdd.com/
Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.
Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu
2018-05-07
In silico drug discovery has proven to be a solidly established key component of early drug discovery. However, this task is hampered by limitations in the quantity and quality of compound databases for screening. To overcome these obstacles, freely accessible database resources of compounds have bloomed in recent years. Nevertheless, choosing appropriate tools to treat these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review on this issue. The advantages and drawbacks of chemical databases are analyzed and summarized, based on the six categories of freely accessible chemical databases collected from the literature in this review. Suggestions are provided on how and under which conditions the use of these databases is reasonable. Tools and procedures for building 3D-structure chemical libraries are also introduced. In this review, we describe the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases appears to be an attractive resource for drug design that can alleviate experimental pressure. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that produces the usual fast movement to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
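As an illustration of the task-dependent sensitivity idea, the hedged sketch below computes normalized finite-difference sensitivities S = (p/y)(dy/dp) for a toy surrogate model under a fast and a slow "task"; the surrogate and its parameter names are invented for illustration and bear no relation to the actual eighth-order model.

    import math

    def sensitivity(model, params, name, task, rel_step=1e-3):
        # Normalized finite-difference sensitivity of output y to parameter p.
        y0 = model(params, task)
        bumped = dict(params, **{name: params[name] * (1 + rel_step)})
        dy_dp = (model(bumped, task) - y0) / (params[name] * rel_step)
        return dy_dp * params[name] / y0

    def model(p, t):
        # Toy first-order response standing in for the real muscle dynamics.
        return p["F"] / p["b"] * (1.0 - math.exp(-p["b"] * t / p["m"]))

    p = {"F": 100.0, "b": 5.0, "m": 2.0}
    for t in (0.05, 2.0):  # fast vs. slow movement "tasks"
        print(t, {k: round(sensitivity(model, p, k, t), 3) for k in p})
    # The ranking of parameter sensitivities changes between the two tasks.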
Task Assignment Heuristics for Parallel and Distributed CFD Applications
NASA Technical Reports Server (NTRS)
Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak
2003-01-01
This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, smallest task first (STF) and largest task first (LTF), are presented first; they are then systematically extended to account for inter-grid communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
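The sketch below shows the generic largest-task-first heuristic named above, greedily assigning the heaviest remaining task to the least-loaded processor; it is an assumption-level illustration that ignores the paper's inter-grid communication costs (STF is the same loop with ascending order).

    import heapq

    def ltf_assign(task_weights, n_procs):
        # Min-heap of (current load, processor id, assigned tasks).
        loads = [(0.0, p, []) for p in range(n_procs)]
        heapq.heapify(loads)
        for w in sorted(task_weights, reverse=True):  # largest task first
            load, p, tasks = heapq.heappop(loads)     # least-loaded processor
            tasks.append(w)
            heapq.heappush(loads, (load + w, p, tasks))
        return sorted(loads, key=lambda x: x[1])

    for load, p, tasks in ltf_assign([8, 7, 6, 5, 4], 2):
        print(f"proc {p}: load={load} tasks={tasks}")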
Salahuddin, Lizawati; Ismail, Zuraini
2015-11-01
This paper provides a systematic review of the safe use of health information technology (IT). The first objective is to identify the antecedents of safe health IT use by conducting a systematic literature review (SLR). The second objective is to classify the identified antecedents based on the work system in the Systems Engineering Initiative for Patient Safety (SEIPS) model and an extension of the DeLone and McLean (D&M) information system (IS) success model. An SLR of peer-reviewed scholarly publications between January 2000 and July 2014 was conducted, and was carried out and reported based on the preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement. Related articles were identified by searching the Science Direct, Medline, EMBASE, and CINAHL databases. Data extracted from the included studies were analysed based on the work system in the SEIPS model and the extended D&M IS success model. 55 articles delineating antecedents that influence the safe use of health IT were included in the review. The antecedents were identified and classified into five key categories: (1) person, (2) technology, (3) tasks, (4) organization, and (5) environment. Specifically, person is characterized by competence, while technology is associated with system quality, information quality, and service quality. Tasks are characterized by task-related stressors. Organization relates to training, organizational resources, and teamwork. Lastly, environment is characterized by physical layout and noise. This review provides evidence that the antecedents of safe health IT use originate from both social and technical aspects. However, inappropriate health IT usage potentially increases the incidence of errors and produces new safety risks. The review cautions that future implementation and adoption of health IT should carefully consider the complex interactions between social and technical elements in healthcare settings. Copyright © 2015. Published by Elsevier Ireland Ltd.
The science of teamwork: Progress, reflections, and the road ahead.
Salas, Eduardo; Reyes, Denise L; McDaniel, Susan H
2018-01-01
We need teams in nearly every aspect of our lives (e.g., hospitals, schools, flight decks, nuclear power plants, oil rigs, the military, and corporate offices). Nearly a century of psychological science has uncovered extensive knowledge about team-related processes and outcomes. In this article, we draw from the reviews and articles of this special issue to identify 10 key reflections that have arisen in the team literature, briefly summarized here. Team researchers have developed many theories surrounding the multilayered aspects of teams, such that we now have a solid theoretical basis for teams. We have recognized that the collective is often stronger than the individual, initiating the shift from individual tasks to team tasks. All teams are not created equal, so it is important to consider the context to understand relevant team dynamics and outcomes, but sometimes teams performing in different contexts are more similar than not. It is critical to have teamwork-supportive organizational conditions and environments where psychological safety can flourish and be a mechanism to resolve conflicts, ensure safety, mitigate errors, learn, and improve performance. Helpful teamwork competencies that can increase effectiveness across teams and tasks have also been identified (e.g., coordination, communication, and adaptability). Even if a team is made up of experts, it can still fail if the members do not know how to cooperate, coordinate, and communicate well together. To ensure the improvement and maintenance of effective team functioning, the organization must implement team development interventions and evaluate relevant team outcomes with robust diagnostic measurement. We conclude with 3 main directions for scientists to expand upon in the future: (a) address issues with technology to make further improvements in team assessment, (b) learn more about multiteam systems, and (c) bridge the gap between theory and practice. In summary, the science of teams has made substantial progress but still has plenty of room for advancement. (PsycINFO Database Record (c) 2018 APA, all rights reserved).