40 CFR 98.96 - Data reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... could expand the existing data set to include new gases, tools, or processes not included in the existing data set (i.e., gases, tools, or processes for which no data are currently available). (6) The...
Hodgetts, Sandra; Park, Elly
2017-03-01
Despite recognized benefits, current clinical practice rarely includes direct input from children and youth with autism spectrum disorder (ASD) in setting rehabilitation goals. This study reviews tools and evidence-based strategies to assist with autonomous goal setting for children and youth with ASD. This study included two components: (1) a scoping review of existing tools and strategies to assist with autonomous goal setting in individuals with ASD and (2) a chart review of inter-disciplinary service plan goals for children and youth with ASD. Eleven data sources, evaluating five different tools to assist with autonomous goal setting for children and youth with ASD, were found. Three themes emerged from the integration of the scoping review and chart review, which are discussed in the paper: (1) generalizability of findings, (2) adaptations to support participation and (3) practice implications. Children and youth with ASD can participate in setting rehabilitation goals, but few tools to support their participation have been evaluated, and those tools that do exist do not align well with current service foci. Visual aids appear to be one effective support, but further research on effective strategies for meaningful engagement in autonomous goal setting for children and youth with ASD is warranted. Implications for rehabilitation: Persons with ASD are less self-determined than their peers. Input into one's own rehabilitation goals and priorities is an important component of self-determination. Few tools exist to help engage children and youth with ASD in setting their own rehabilitation goals. An increased focus on identifying, developing and evaluating effective tools and strategies to facilitate engagement of children and youth with ASD in setting their own rehabilitation goals is warranted.
Sustainability Tools Inventory - Initial Gaps Analysis
This report identifies a suite of tools that address a comprehensive set of community sustainability concerns. The objective is to discover whether "gaps" exist in the tool suite’s analytic capabilities. These tools address activities that significantly influence resource consumption, waste generation, and hazard generation including air pollution and greenhouse gases. In addition, the tools have been evaluated using four screening criteria: relevance to community decision making, tools in an appropriate developmental stage, tools that may be transferrable to situations useful for communities, and tools requiring skill levels appropriate to communities. This document provides an initial gap analysis in the area of community sustainability decision support tools. It provides a reference to communities for existing decision support tools, and a set of gaps for those wishing to develop additional needed tools to help communities to achieve sustainability. It contributes to SHC 1.61.4.
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
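As a rough illustration of the two estimation strategies compared above, the sketch below computes the traditional 'n-1' estimate from a list of cluster sizes and fits a simple regression on the five inputs named in the abstract; the derivation-set numbers and the new study's inputs are invented placeholders, not values from the published model or web tool.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def n_minus_1_estimate(cluster_sizes):
    """Traditional 'n-1' estimate: (clustered cases - number of clusters) / all cases."""
    sizes = np.asarray(cluster_sizes)
    clustered = sizes[sizes > 1]
    return (clustered.sum() - len(clustered)) / sizes.sum()

# Hypothetical derivation set: one row per simulation, with the five inputs from the
# abstract (incidence, sampling coverage, study duration, clustered proportion of
# observed cases, proportion of observed clusters) and the known true recent
# transmission proportion as the target.
X_derivation = np.array([
    [100, 0.5, 3, 0.40, 0.30],
    [250, 0.8, 5, 0.55, 0.45],
    [ 50, 0.3, 2, 0.25, 0.20],
    [150, 0.9, 4, 0.60, 0.50],
    [200, 0.6, 3, 0.45, 0.35],
])
y_true = np.array([0.48, 0.58, 0.35, 0.62, 0.50])

model = LinearRegression().fit(X_derivation, y_true)

# A new study: cluster sizes observed under incomplete coverage, plus its five inputs.
cluster_sizes = [1, 1, 1, 2, 2, 3, 1, 4, 1, 1]
study = np.array([[120, 0.6, 3, 0.65, 0.40]])

print("'n-1' estimate:     ", round(n_minus_1_estimate(cluster_sizes), 2))
print("regression estimate:", round(float(model.predict(study)[0]), 2))
```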
Tools for Supporting Distributed Agile Project Planning
NASA Astrophysics Data System (ADS)
Wang, Xin; Maurer, Frank; Morgan, Robert; Oliveira, Josyleuda
Agile project planning plays an important part in agile software development. In distributed settings, project planning is severely impacted by the lack of face-to-face communication and the inability to share paper index cards amongst all meeting participants. To address these issues, several distributed agile planning tools were developed. The tools vary in features, functions and running platforms. In this chapter, we first summarize the requirements for distributed agile planning. Then we give an overview of existing agile planning tools. We also evaluate existing tools based on tool requirements. Finally, we present some practical advice for both designers and users of distributed agile planning tools.
Leveraging Python Interoperability Tools to Improve Sapphire's Usability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gezahegne, A; Love, N S
2007-12-10
The Sapphire project at the Center for Applied Scientific Computing (CASC) develops and applies an extensive set of data mining algorithms for the analysis of large data sets. Sapphire's algorithms are currently available as a set of C++ libraries. However many users prefer higher level scripting languages such as Python for their ease of use and flexibility. In this report, we evaluate four interoperability tools for the purpose of wrapping Sapphire's core functionality with Python. Exposing Sapphire's functionality through a Python interface would increase its usability and connect its algorithms to existing Python tools.
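One common route for exposing C++ libraries to Python is a thin foreign-function wrapper. The sketch below uses ctypes against a hypothetical shared library and entry point (libsapphire.so, kmeans_cluster); the names and signature are assumptions for illustration only and do not reflect Sapphire's actual interface or the four interoperability tools evaluated in the report.

```python
# Illustrative only: wraps a hypothetical C++ entry point exposed with extern "C".
import ctypes
import numpy as np

lib = ctypes.CDLL("./libsapphire.so")          # hypothetical shared library name
lib.kmeans_cluster.argtypes = [
    ctypes.POINTER(ctypes.c_double),           # flattened data matrix
    ctypes.c_int, ctypes.c_int,                # rows, cols
    ctypes.c_int,                              # number of clusters
    ctypes.POINTER(ctypes.c_int),              # output labels (one per row)
]
lib.kmeans_cluster.restype = ctypes.c_int      # status code

def kmeans_cluster(data, k):
    """Call the (hypothetical) C++ clustering routine from Python."""
    data = np.ascontiguousarray(data, dtype=np.float64)
    labels = np.zeros(data.shape[0], dtype=np.int32)
    status = lib.kmeans_cluster(
        data.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        data.shape[0], data.shape[1], k,
        labels.ctypes.data_as(ctypes.POINTER(ctypes.c_int)),
    )
    if status != 0:
        raise RuntimeError("clustering failed")
    return labels
```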
2011-05-10
... concert with existing surveillance applications, or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility... doi:10.1371/journal.pone... health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular...
Updates to the CMAQ Post Processing and Evaluation Tools for 2016
In the spring of 2016, the evaluation tools distributed with the CMAQ model code were updated and new tools were added to the existing set of tools. Observation data files, compatible with the AMET software, were also made available on the CMAS website for the first time with the...
Coproducing Aboriginal patient journey mapping tools for improved quality and coordination of care.
Kelly, Janet; Dwyer, Judith; Mackean, Tamara; O'Donnell, Kim; Willis, Eileen
2016-12-08
This paper describes the rationale and process for developing a set of Aboriginal patient journey mapping tools with Aboriginal patients, health professionals, support workers, educators and researchers in the Managing Two Worlds Together project between 2008 and 2015. Aboriginal patients and their families from rural and remote areas, and healthcare providers in urban, rural and remote settings, shared their perceptions of the barriers and enablers to quality care in interviews and focus groups, and individual patient journey case studies were documented. Data were thematically analysed. In the absence of suitable existing tools, a new analytical framework and mapping approach was developed. The utility of the tools in other settings was then tested with health professionals, and the tools were further modified for use in quality improvement in health and education settings in South Australia and the Northern Territory. A central set of patient journey mapping tools with flexible adaptations, a workbook, and five sets of case studies describing how staff adapted and used the tools at different sites are available for wider use.
Hydropower Biological Evaluation Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
This software is a set of analytical tools to evaluate the physical and biological performance of existing, refurbished, or newly installed conventional hydro-turbines nationwide where fish passage is a regulatory concern. The current version is based on information collected by the Sensor Fish. Future versions will include other technologies. The tool set includes data acquisition, data processing, and biological response tools with applications to various turbine designs and other passage alternatives. The associated database is centralized and can be accessed remotely. We have demonstrated its use for various applications, including both turbines and spillways.
An Exploration of the Effectiveness of an Audit Simulation Tool in a Classroom Setting
ERIC Educational Resources Information Center
Zelin, Robert C., II
2010-01-01
The purpose of this study was to examine the effectiveness of using an audit simulation product in a classroom setting. Many students and professionals feel that a disconnect exists between learning auditing in the classroom and practicing auditing in the workplace. It was hoped that the introduction of an audit simulation tool would help to…
Green Infrastructure Models and Tools
The objective of this project is to modify and refine existing models and develop new tools to support decision making for the complete green infrastructure (GI) project lifecycle, including the planning and implementation of stormwater control in urban and agricultural settings,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawal, Adekola; Schmal, Pieter; Ramos, Alfredo
PSE, in the first phase of the CCSI commercialization project, set out to identify market opportunities for the CCSI tools combined with existing gPROMS platform capabilities and develop a clear technical plan for the proposed commercialization activities.
Dwivedi, Bhakti; Kowalski, Jeanne
2018-01-01
While many methods exist for integrating multi-omics data or defining gene sets, there is no single tool that defines gene sets based on merging of multiple omics data sets. We present shinyGISPA, an open-source application with a user-friendly web-based interface to define genes according to their similarity in several molecular changes that are driving a disease phenotype. This tool was developed to make a previously published method, Gene Integrated Set Profile Analysis (GISPA), more usable for researchers with limited computer-programming skills. The GISPA method allows the identification of multiple gene sets that may play a role in the characterization, clinical application, or functional relevance of a disease phenotype. The tool provides an automated workflow that is highly scalable and adaptable to applications that go beyond genomic data merging analysis. It is available at http://shinygispa.winship.emory.edu/shinyGISPA/.
Simple tools for assembling and searching high-density picolitre pyrophosphate sequence data.
Parker, Nicolas J; Parker, Andrew G
2008-04-18
The advent of pyrophosphate sequencing makes large volumes of sequencing data available at a lower cost than previously possible. However, the short read lengths are difficult to assemble and the large dataset is difficult to handle. During the sequencing of a virus from the tsetse fly, Glossina pallidipes, we needed tools to quickly search a set of reads for near-exact text matches. A set of tools is provided to search a large data set of pyrophosphate sequence reads under a "live" CD version of Linux on a standard PC that can be used by anyone without prior knowledge of Linux and without having to install a Linux setup on the computer. The tools permit short lengths of de novo assembly, checking of existing assembled sequences, selection and display of reads from the data set and gathering counts of sequences in the reads. Demonstrations are given of the use of the tools to help with checking an assembly against the fragment data set; investigating homopolymer lengths, repeat regions and polymorphisms; and resolving inserted bases caused by incomplete chain extension. The additional information contained in a pyrophosphate sequencing data set beyond a basic assembly is difficult to access due to a lack of tools. The set of simple tools presented here would allow anyone with basic computer skills and a standard PC to access this information.
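A minimal sketch of the kind of near-exact read search described above: each read is scanned for windows matching a query with a bounded number of substitutions. The read identifiers, sequences and mismatch threshold are invented, and real pyrosequencing reads would also require handling of indels and homopolymer artifacts.

```python
def near_exact_hits(reads, query, max_mismatches=1):
    """Scan each read for windows matching `query` with at most
    `max_mismatches` substitutions (no indels)."""
    hits = []
    q = len(query)
    for read_id, seq in reads:
        for start in range(len(seq) - q + 1):
            window = seq[start:start + q]
            mismatches = sum(a != b for a, b in zip(window, query))
            if mismatches <= max_mismatches:
                hits.append((read_id, start, mismatches))
    return hits

# Toy read set (identifiers and sequences are invented).
reads = [("read_001", "ACGTTTTTGCAT"), ("read_002", "ACGATTTTGCAT")]
print(near_exact_hits(reads, "TTTTGC", max_mismatches=1))
```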
Bolduc, Benjamin; Youens-Clark, Ken; Roux, Simon; Hurwitz, Bonnie L; Sullivan, Matthew B
2017-01-01
Microbes affect nutrient and energy transformations throughout the world's ecosystems, yet they do so under viral constraints. In complex communities, viral metagenome (virome) sequencing is transforming our ability to quantify viral diversity and impacts. Although some bottlenecks, for example, few reference genomes and nonquantitative viromics, have been overcome, the void of centralized data sets and specialized tools now prevents viromics from being broadly applied to answer fundamental ecological questions. Here we present iVirus, a community resource that leverages the CyVerse cyberinfrastructure to provide access to viromic tools and data sets. The iVirus Data Commons contains both raw and processed data from 1866 samples and 73 projects derived from global ocean expeditions, as well as existing and legacy public repositories. Through the CyVerse Discovery Environment, users can interrogate these data sets using existing analytical tools (software applications known as 'Apps') for assembly, open reading frame prediction and annotation, as well as several new Apps specifically developed for analyzing viromes. Because Apps are web based and powered by CyVerse supercomputing resources, they enable scalable analyses for a broad user base. Finally, a use-case scenario documents how to apply these advances toward new data. This growing iVirus resource should help researchers utilize viromics as yet another tool to elucidate viral roles in nature.
Rabin, Borsika A.; Gaglio, Bridget; Sanders, Tristan; Nekhlyudov, Larissa; Dearing, James W.; Bull, Sheana; Glasgow, Russell E.; Marcus, Alfred
2013-01-01
Cancer prognosis is of keen interest for cancer patients, their caregivers and providers. Prognostic tools have been developed to guide patient-physician communication and decision-making. Given the proliferation of prognostic tools, it is timely to review existing online cancer prognostic tools and discuss implications for their use in clinical settings. Using a systematic approach, we searched the Internet and Medline and consulted with experts to identify existing online prognostic tools. Each was reviewed for content and format. Twenty-two prognostic tools addressing 89 different cancers were identified. Tools primarily focused on prostate (n=11), colorectal (n=10), breast (n=8), and melanoma (n=6), though at least one tool was identified for most malignancies. The input variables for the tools included cancer characteristics (n=22), patient characteristics (n=18), and comorbidities (n=9). Effect of therapy on prognosis was included in 15 tools. The most common predicted outcome was cancer-specific survival/mortality (n=17). Only a few tools (n=4) suggested patients as potential target users. A comprehensive repository of online prognostic tools was created to understand the state-of-the-art in prognostic tool availability and characteristics. Use of these tools may support communication and understanding about cancer prognosis. Dissemination, testing, and refinement of existing tools, and development of new tools under different conditions, are needed. PMID:23956026
Clinical code set engineering for reusing EHR data for research: A review.
Williams, Richard; Kontopantelis, Evangelos; Buchan, Iain; Peek, Niels
2017-06-01
The construction of reliable, reusable clinical code sets is essential when re-using Electronic Health Record (EHR) data for research. Yet code set definitions are rarely transparent and their sharing is almost non-existent. There is a lack of methodological standards for the management (construction, sharing, revision and reuse) of clinical code sets, which needs to be addressed to ensure the reliability and credibility of studies which use code sets. Our objective was to review methodological literature on the management of sets of clinical codes used in research on clinical databases and to provide a list of best practice recommendations for future studies and software tools. We performed an exhaustive search for methodological papers about clinical code set engineering for re-using EHR data in research. This was supplemented with papers identified by snowball sampling. In addition, a list of e-phenotyping systems was constructed by merging references from several systematic reviews on this topic, and the processes adopted by those systems for code set management were reviewed. Thirty methodological papers were reviewed. Common approaches included: creating an initial list of synonyms for the condition of interest (n=20); making use of the hierarchical nature of coding terminologies during searching (n=23); reviewing sets with clinician input (n=20); and reusing and updating an existing code set (n=20). Several open source software tools (n=3) were discovered. There is a need for software tools that enable users to easily and quickly create, revise, extend, review and share code sets, and we provide a list of recommendations for their design and implementation. Research re-using EHR data could be improved through the further development, more widespread use and routine reporting of the methods by which clinical codes were selected.
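A minimal sketch of two of the common approaches reported above (seeding a code set from a synonym list, then expanding it through the terminology hierarchy before clinician review), using an invented toy terminology rather than real Read or SNOMED CT codes.

```python
# Toy terminology: code -> (description, parent code).  Codes are invented.
TERMINOLOGY = {
    "C10":   ("Diabetes mellitus", None),
    "C10E":  ("Type 1 diabetes mellitus", "C10"),
    "C10F":  ("Type 2 diabetes mellitus", "C10"),
    "C10F1": ("Type 2 diabetes mellitus with renal complications", "C10F"),
    "G20":   ("Essential hypertension", None),
}

def build_code_set(synonyms):
    """Seed the set by synonym match, then pull in descendants through the
    hierarchy; the output is a draft list intended for clinician review."""
    seeds = {code for code, (desc, _) in TERMINOLOGY.items()
             if any(s.lower() in desc.lower() for s in synonyms)}
    children = {}
    for code, (_, parent) in TERMINOLOGY.items():
        children.setdefault(parent, []).append(code)
    code_set, stack = set(), list(seeds)
    while stack:
        code = stack.pop()
        if code not in code_set:
            code_set.add(code)
            stack.extend(children.get(code, []))
    return sorted(code_set)

print(build_code_set(["diabetes"]))  # draft set, pending clinician review
```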
Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial
2013-01-01
Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies and that many methods exist that provide different results, we aim to construct a morphometric tool allowing us to measure different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, which is an essential property to properly characterize the size and the shape of vascular and lymphatic vessels. The method is fast and accurate, improving existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered a single object. PMID:24489494
Development of a Fiber Laser Welding Capability for the W76, MC4702 Firing Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samayoa, Jose
2010-05-12
Development work to implement a new welding system for a Firing Set is presented. The new system is significant because it represents the first use of fiber laser welding technology at the KCP. The work used Six-Sigma tools for weld characterization and to define process performance. Determinations of workable weld parameters and comparison to existing equipment were completed. Replication of existing waveforms was done utilizing an Arbitrary Pulse Generator (APG), which was used to modulate the fiber laser’s exclusive continuous wave (CW) output. Fiber laser weld process capability for a Firing Set is demonstrated.
Improving STEM Program Quality in Out-of-School-Time: Tool Development and Validation
ERIC Educational Resources Information Center
Shah, Ashima Mathur; Wylie, Caroline; Gitomer, Drew; Noam, Gil
2018-01-01
In and out-of-school time (OST) experiences are viewed as complementary in contributing to students' interest, engagement, and performance in science, technology, engineering, and mathematics (STEM). While tools exist to measure quality in general afterschool settings and others to measure structured science classroom experiences, there is a need…
Wirtz, A L; Glass, N; Pham, K; Perrin, N; Rubenstein, L S; Singh, S; Vu, A
2016-01-01
Conflict affected refugees and internally displaced persons (IDPs) are at increased vulnerability to gender-based violence (GBV). Health, psychosocial, and protection services have been implemented in humanitarian settings, but GBV remains under-reported and available services under-utilized. To improve access to existing GBV services and facilitate reporting, the ASIST-GBV screening tool was developed and tested for use in humanitarian settings. This process was completed in four phases: 1) systematic literature review, 2) qualitative research that included individual interviews and focus groups with GBV survivors and service providers, respectively, 3) pilot testing of the developed screening tool, and 4) 3-month implementation testing of the screening tool. Research was conducted among female refugees, aged ≥15 years in Ethiopia, and female IDPs, aged ≥18 years in Colombia. The systematic review and meta-analysis identified a range of GBV experiences and estimated a 21.4 % prevalence of sexual violence (95 % CI:14.9-28.7) among conflict-affected populations. No existing screening tools for GBV in humanitarian settings were identified. Qualitative research with GBV survivors in Ethiopia and Colombia found multiple forms of GBV experienced by refugees and IDPs that occurred during conflict, in transit, and in displaced settings. Identified forms of violence were combined into seven key items on the screening tool: threats of violence, physical violence, forced sex, sexual exploitation, forced pregnancy, forced abortion, and early or forced marriage. Cognitive testing further refined the tool. Pilot testing in both sites demonstrated preliminary feasibility where 64.8 % of participants in Ethiopia and 44.9 % of participants in Colombia were identified with recent (last 12 months) cases of GBV. Implementation testing of the screening tool, conducted as a routine service in camp/district hospitals, allowed for identification of GBV cases and referrals to services. In this phase, 50.6 % of participants in Ethiopia and 63.4 % in Colombia screened positive for recent experiences of GBV. Psychometric testing demonstrated appropriate internal consistency of the tool (Cronbach's α = 0.77) and item response theory demonstrated appropriate discrimination and difficulty of the tool. The ASIST-GBV screening tool has demonstrated utility and validity for use in confidential identification and referral of refugees and IDPs who experience GBV.
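For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha from a made-up matrix of yes/no responses to seven screening items; it is illustrative only and does not use the study's data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(item_scores, dtype=float)   # shape: respondents x items
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented 0/1 responses to seven screening items (rows = respondents).
responses = np.array([
    [1, 1, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 0, 1],
])
print(round(cronbach_alpha(responses), 2))
```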
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deshmukh, Ranjit; Wu, Grace
The MapRE (Multi-criteria Analysis for Planning Renewable Energy) GIS (Geographic Information Systems) Tools are a set of ArcGIS tools to a) conduct site suitability analysis for wind and solar resources using inclusion and exclusion criteria, and create resource maps; b) create project opportunity areas and compute various attributes such as cost, distances to existing and planned infrastructure, and environmental impact factors; and c) calculate and update various attributes for already processed renewable energy zones. In addition, MapRE data sets are geospatial data of renewable energy project opportunity areas and zones with pre-calculated attributes for several countries. These tools and data are available at mapre.lbl.gov.
ExAtlas: An interactive online tool for meta-analysis of gene expression data.
Sharov, Alexei A; Schlessinger, David; Ko, Minoru S H
2015-12-01
We have developed ExAtlas, an on-line software tool for meta-analysis and visualization of gene expression data. In contrast to existing software tools, ExAtlas compares multi-component data sets and generates results for all combinations (e.g. all gene expression profiles versus all Gene Ontology annotations). ExAtlas handles both users' own data and data extracted semi-automatically from the public repository (GEO/NCBI database). ExAtlas provides a variety of tools for meta-analyses: (1) standard meta-analysis (fixed effects, random effects, z-score, and Fisher's methods); (2) analyses of global correlations between gene expression data sets; (3) gene set enrichment; (4) gene set overlap; (5) gene association by expression profile; (6) gene specificity; and (7) statistical analysis (ANOVA, pairwise comparison, and PCA). ExAtlas produces graphical outputs, including heatmaps, scatter-plots, bar-charts, and three-dimensional images. Some of the most widely used public data sets (e.g. GNF/BioGPS, Gene Ontology, KEGG, GAD phenotypes, BrainScan, ENCODE ChIP-seq, and protein-protein interaction) are pre-loaded and can be used for functional annotations.
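A minimal sketch of two of the standard meta-analysis methods listed above (fixed-effects inverse-variance pooling and Fisher's combined probability test), with invented per-study effects, variances and p-values; this is not ExAtlas code.

```python
import math
from scipy.stats import chi2

def fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effects) pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

def fishers_method(p_values):
    """Fisher's combined probability test across independent studies."""
    statistic = -2.0 * sum(math.log(p) for p in p_values)
    return chi2.sf(statistic, 2 * len(p_values))   # combined p-value

# Invented per-study log fold changes, variances, and p-values.
effects, variances = [0.8, 1.1, 0.6], [0.04, 0.09, 0.05]
print(fixed_effect(effects, variances))
print(fishers_method([0.01, 0.20, 0.03]))
```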
Open Source Modeling and Optimization Tools for Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peles, S.
The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.
Modeling Tools for Propulsion Analysis and Computational Fluid Dynamics on the Internet
NASA Technical Reports Server (NTRS)
Muss, J. A.; Johnson, C. W.; Gotchy, M. B.
2000-01-01
The existing RocketWeb(TradeMark) Internet Analysis System (http://www.johnsonrockets.com/rocketweb) provides an integrated set of advanced analysis tools that can be securely accessed over the Internet. Since these tools consist of both batch and interactive analysis codes, the system includes convenient methods for creating input files and evaluating the resulting data. The RocketWeb(TradeMark) system also contains many features that permit data sharing which, when further developed, will facilitate real-time, geographically diverse, collaborative engineering within a designated work group. Adding work group management functionality while simultaneously extending and integrating the system's set of design and analysis tools will create a system providing rigorous, controlled design development, reducing design cycle time and cost.
Vandenberg, Ann E; Vaughan, Camille P; Stevens, Melissa; Hastings, Susan N; Powers, James; Markland, Alayne; Hwang, Ula; Hung, William; Echt, Katharina V
2017-02-01
Clinical decision support (CDS) may improve prescribing for older adults in the Emergency Department (ED) if adopted by providers. Existing prescribing order entry processes were mapped at an initial Veterans Administration Medical Center site, demonstrating cognitive burden, effort and safety concerns. Geriatric order sets incorporating 2012 Beers guidelines and including geriatric prescribing advice and prepopulated order options were developed. Geriatric order sets were implemented at two sites as part of the multicomponent 'Enhancing Quality of Prescribing Practices for Older Veterans Discharged from the Emergency Department' quality improvement initiative. Facilitators and barriers to order sets use at the two sites were evaluated. Phone interviews were conducted with two provider groups (n = 20), those 'EQUiPPED' with the interventions (n = 10, 5 at each site) and Comparison providers who were only exposed to order sets through a clickable option on the ED order menu within the patient's medical record (n = 10, 5 at each site). All providers were asked about order set 'use' and 'usefulness'. Users (n = 11) were asked about 'usability'. Order set adopters described 'usefulness' in terms of 'safety' and 'efficiency', whereas order set consultants and order set non-users described 'usefulness' in terms of 'information' or 'training'. Provider 'autonomy', 'comfort' level with existing tools, and 'learning curve' were stated as barriers to use. Quantifying efficiency advantages and communicating safety benefit over preexisting practices and tools may improve adoption of CDS in ED and in other settings of care.
Developing Healthcare Data Analytics APPs with Open Data Science Tools.
Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong
2017-01-01
Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many tools require researchers to develop programs in programming languages like Python or R, a skill set that many researchers in the healthcare data analytics area have not grasped. To make data science more approachable, we explored existing tools and developed a practice that can help data scientists convert existing analytics pipelines to user-friendly analytics APPs with rich interactions and features of real-time analysis. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebook to perform analysis tasks or reproduce research results much more easily.
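A minimal sketch of the practice described above, assuming a Jupyter Notebook environment with ipywidgets installed: a single analytics step is exposed as an interactive control so collaborators can re-run it in real time. The cohort data and column names are invented.

```python
# Run inside a Jupyter Notebook.  Data and column names are invented.
import pandas as pd
from ipywidgets import interact, FloatSlider

cohort = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "hba1c":      [5.9, 7.2, 8.4, 6.3, 9.1],
})

def flag_high_risk(threshold):
    """One analytics step exposed as an interactive control."""
    flagged = cohort[cohort["hba1c"] >= threshold]
    print(f"{len(flagged)} patients at or above HbA1c {threshold}")
    return flagged

# Re-runs the step in real time as the slider moves.
interact(flag_high_risk, threshold=FloatSlider(min=5.0, max=10.0, step=0.1, value=7.0))
```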
Visual business ecosystem intelligence: lessons from the field.
Basole, Rahul C
2014-01-01
Macroscopic insight into business ecosystems is becoming increasingly important. With the emergence of new digital business data, opportunities exist to develop rich, interactive visual-analytics tools. Georgia Institute of Technology researchers have been developing and implementing visual business ecosystem intelligence tools in corporate settings. This article discusses the challenges they faced, the lessons learned, and opportunities for future research.
Automation of Ocean Product Metrics
2008-09-30
Presented in: Ocean Sciences 2008 Conf., 5 Mar 2008. Shriver, J., J. D. Dykes, and J. Fabre: Automation of Operational Ocean Product Metrics. Presented in 2008 EGU General Assembly, 14 April 2008. ... processing (multiple data cuts per day) and multiple-nested models. Routines for generating automated evaluations of model forecast statistics will be developed and pre-existing tools will be collected to create a generalized tool set, which will include user-interface tools to the metrics data.
Taminau, Jonatan; Meganck, Stijn; Lazar, Cosmin; Steenhoff, David; Coletta, Alain; Molter, Colin; Duque, Robin; de Schaetzen, Virginie; Weiss Solís, David Y; Bersini, Hugues; Nowé, Ann
2012-12-24
With abundant microarray gene expression data sets available through public repositories, new possibilities lie in combining multiple existing data sets. In this new context, analysis itself is no longer the problem, but retrieving and consistently integrating all this data before delivering it to the wide variety of existing analysis tools becomes the new bottleneck. We present the newly released inSilicoMerging R/Bioconductor package which, together with the earlier released inSilicoDb R/Bioconductor package, allows consistent retrieval, integration and analysis of publicly available microarray gene expression data sets. Inside the inSilicoMerging package, a set of five visual and six quantitative validation measures is available as well. By providing (i) access to uniformly curated and preprocessed data, (ii) a collection of techniques to remove the batch effects between data sets from different sources, and (iii) several validation tools enabling the inspection of the integration process, these packages enable researchers to fully explore the potential of combining gene expression data for downstream analysis. The power of using both packages is demonstrated by programmatically retrieving and integrating gene expression studies from the InSilico DB repository [https://insilicodb.org/app/].
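The inSilicoMerging package itself is R/Bioconductor; as a language-neutral illustration of the simplest kind of batch-effect removal it supports conceptually, the Python sketch below mean-centers each gene within each study. The expression values and batch labels are invented, and this is not the package's own algorithm or API.

```python
import pandas as pd

# Invented expression matrix: rows = samples, columns = genes, plus a label
# recording which study (batch) each sample came from.
expression = pd.DataFrame(
    {"GENE1": [5.1, 5.3, 7.9, 8.2], "GENE2": [2.0, 2.2, 4.1, 4.0]},
    index=["s1", "s2", "s3", "s4"],
)
batch = pd.Series(["studyA", "studyA", "studyB", "studyB"], index=expression.index)

# Per-batch, gene-wise mean-centering: removes additive offsets between studies.
centered = expression.groupby(batch).transform(lambda x: x - x.mean())
print(centered)
```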
Facebook: A Potentially Valuable Educational Tool?
ERIC Educational Resources Information Center
Voivonta, Theodora; Avraamidou, Lucy
2018-01-01
This paper is concerned with the educational value of Facebook and specifically how it can be used in formal educational settings. As such, it provides a review of existing literature of how Facebook is used in higher education paying emphasis on the scope of its use and the outcomes achieved. As evident in existing literature, Facebook has been…
NASA Astrophysics Data System (ADS)
Donato, M. B.; Milasi, M.; Vitanza, C.
2010-09-01
An existence result for a Walrasian equilibrium in an integrated model of exchange, consumption and production is obtained. The equilibrium model is characterized in terms of a suitable generalized quasi-variational inequality, so the existence result follows from an original technique that draws on tools of convex and set-valued analysis.
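For readers unfamiliar with the notion, a standard textbook form of a generalized quasi-variational inequality is sketched below; the paper's exact formulation of the equilibrium problem may differ in its choice of spaces and operators.

```latex
% Generalized quasi-variational inequality (textbook form):
% K is a set-valued constraint map and F a set-valued operator.
\[
  \text{find } x^{*} \in K(x^{*}) \text{ and } u^{*} \in F(x^{*})
  \quad \text{such that} \quad
  \langle u^{*},\, y - x^{*} \rangle \;\ge\; 0
  \qquad \forall\, y \in K(x^{*}).
\]
```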
Methodology for Planning Technical Education: With a Case Study of Polytechnics in Bangladesh.
ERIC Educational Resources Information Center
Ritzen, Jozef M.; Balderston, Judith B.
A product of research first begun by one of the authors in Bangladesh, this book develops a comprehensive set of methods for planning technical education. Wherever possible, the authors draw on existing tools, fitting them to the specific context of technical education. When faced with planning problems for which existing methods are ill suited…
The ADE scorecards: a tool for adverse drug event detection in electronic health records.
Chazard, Emmanuel; Băceanu, Adrian; Ferret, Laurie; Ficheur, Grégoire
2011-01-01
Although several methods exist for detecting adverse drug events (ADEs) in past hospitalizations, no tool yet exists to display those ADEs to physicians. This article presents the ADE Scorecards, a Web tool for screening past hospitalizations extracted from Electronic Health Records (EHR) using a set of ADE detection rules, presently rules discovered by data mining. The tool enables physicians to (1) get contextualized statistics about the ADEs that happen in their medical department, (2) see the rules that are useful in their department, i.e., the rules that could have prevented those ADEs, and (3) review in detail the ADE cases, through a comprehensive interface displaying the diagnoses, procedures, lab results, administered drugs and anonymized records. The article demonstrates the tool through a use case.
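A minimal sketch of rule-based screening of past hospitalizations in the spirit described above; the single rule shown (a vitamin K antagonist together with an INR above 5 flagging haemorrhage risk) is a generic textbook example, not one of the article's mined rules, and the stay records are invented.

```python
# Each "rule" pairs a condition over a hospital stay with the ADE it flags.
RULES = [
    {
        "name": "VKA with INR > 5",
        "outcome": "haemorrhage risk",
        "condition": lambda stay: "vitamin K antagonist" in stay["drugs"]
                                  and stay["labs"].get("INR", 0) > 5,
    },
]

def screen(stays, rules=RULES):
    """Return (stay id, rule name, outcome) for every rule that fires."""
    alerts = []
    for stay in stays:
        for rule in rules:
            if rule["condition"](stay):
                alerts.append((stay["id"], rule["name"], rule["outcome"]))
    return alerts

stays = [  # invented, anonymised-style records
    {"id": "H001", "drugs": {"vitamin K antagonist"}, "labs": {"INR": 6.2}},
    {"id": "H002", "drugs": {"amoxicillin"}, "labs": {"INR": 1.1}},
]
print(screen(stays))   # per-department statistics would aggregate these alerts
```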
Review of Smartphone Applications for the Treatment of Eating Disorders
Juarascio, Adrienne S.; Manasse, Stephanie M.; Goldstein, Stephanie P.; Forman, Evan M.; Butryn, Meghan L.
2016-01-01
mHealth tools may be a feasible modality for delivering evidence-based treatments and principles (EBPs), and may enhance treatment for eating disorders (EDs). However, research on the efficacy of mHealth tools for EDs and the extent to which they include EBPs is lacking. The current study sought to (i) review existing apps for EDs, (ii) determine the extent to which available treatment apps utilize EBPs, and (iii) assess the degree to which existing smartphone apps utilize recent advances in smartphone technology. Overall, existing ED intervention apps contained minimal EBPs and failed to incorporate smartphone capabilities. For smartphone apps to be a feasible and effective ED treatment modality, it may be useful for creators to begin utilizing the abilities that set smartphones apart from in-person treatment while incorporating EBPs. Before mHealth tools are incorporated into treatments for EDs, their feasibility, acceptability, and efficacy must be evaluated. PMID:25303148
CAS-viewer: web-based tool for splicing-guided integrative analysis of multi-omics cancer data.
Han, Seonggyun; Kim, Dongwook; Kim, Youngjun; Choi, Kanghoon; Miller, Jason E; Kim, Dokyoon; Lee, Younghee
2018-04-20
The Cancer Genome Atlas (TCGA) project is a public resource that provides transcriptomic, DNA sequence, methylation, and clinical data for 33 cancer types. Transforming the large, complex TCGA cancer genome data into integrated knowledge can help promote cancer research. Alternative splicing (AS) is a key regulatory mechanism of genes in human cancer development and in the interaction with epigenetic factors. Therefore, AS-guided integration of existing TCGA data sets will make it easier to gain insight into the genetic architecture of cancer risk and related outcomes. Tools already exist for analyzing and visualizing alternative mRNA splicing patterns in large-scale RNA-seq experiments. However, these existing web-based tools are limited to analyzing one type of TCGA data set at a time, such as transcriptomic information only. We implemented CAS-viewer (integrative analysis of Cancer genome data based on Alternative Splicing), a web-based tool leveraging multi-cancer omics data from TCGA. It illustrates alternative mRNA splicing patterns along with methylation, miRNAs, and SNPs, and then provides an analysis tool to link differential transcript expression ratios to methylation, miRNA, and splicing regulatory elements for 33 cancer types. Moreover, one can analyze AS patterns with clinical data to identify potential transcripts associated with different survival outcomes for each cancer. CAS-viewer is a web-based application for transcript isoform-driven integration of multi-omics data in multiple cancer types and will aid in the visualization and possible discovery of biomarkers for cancer by integrating multi-omics data from TCGA.
New Tools For Understanding Microbial Diversity Using High-throughput Sequence Data
NASA Astrophysics Data System (ADS)
Knight, R.; Hamady, M.; Liu, Z.; Lozupone, C.
2007-12-01
High-throughput sequencing techniques such as 454 are straining the limits of tools traditionally used to build trees, choose OTUs, and perform other essential sequencing tasks. We have developed a workflow for phylogenetic analysis of large-scale sequence data sets that combines existing tools, such as the Arb phylogeny package and the NAST multiple sequence alignment tool, with new methods for choosing and clustering OTUs and for performing phylogenetic community analysis with UniFrac. This talk discusses the cyberinfrastructure we are developing to support the human microbiome project, and the application of these workflows to analyze very large data sets that contrast the gut microbiota with a range of physical environments. These tools will ultimately help to define core and peripheral microbiomes in a range of environments, and will allow us to understand the physical and biotic factors that contribute most to differences in microbial diversity.
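As a conceptual illustration of one step in such a workflow, the sketch below performs greedy OTU picking at a fixed similarity threshold, using difflib's ratio as a stand-in for alignment-based percent identity; the sequences are invented and production pipelines would use the dedicated tools named above.

```python
from difflib import SequenceMatcher

def greedy_otu_clusters(sequences, threshold=0.97):
    """Greedy OTU picking: each sequence joins the first cluster whose seed
    it matches at or above `threshold` similarity, else it seeds a new cluster."""
    seeds, clusters = [], []
    for seq in sequences:
        for i, seed in enumerate(seeds):
            if SequenceMatcher(None, seq, seed).ratio() >= threshold:
                clusters[i].append(seq)
                break
        else:
            seeds.append(seq)
            clusters.append([seq])
    return clusters

reads = [
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGA",   # one base different
    "TTTTGGGGCCCCAAAATTTTGGGGCCCCAAAATTTTGGGG",
]
for i, cluster in enumerate(greedy_otu_clusters(reads)):
    print(f"OTU {i}: {len(cluster)} sequence(s)")
```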
Metadata Authoring with Versatility and Extensibility
NASA Technical Reports Server (NTRS)
Pollack, Janine; Olsen, Lola
2004-01-01
NASA's Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 13,800 data set descriptions in Directory Interchange Format (DIF) and 700 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information and direct links to the data, thus allowing researchers to discover data pertaining to a geographic location of interest, then quickly acquire those data. The GCMD strives to be the preferred data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are attracting widespread usage; however, a need for tools that are portable, customizable and versatile still exists. With tool usage directly influencing metadata population, it has become apparent that new tools are needed to fill these voids. As a result, the GCMD has released a new authoring tool allowing for both web-based and stand-alone authoring of descriptions. Furthermore, this tool incorporates the ability to plug-and-play the metadata format of choice, offering users options of DIF, SERF, FGDC, ISO or any other defined standard. Allowing data holders to work with their preferred format, as well as an option of a stand-alone application or web-based environment, docBUILDER will assist the scientific community in efficiently creating quality data and services metadata.
Chapter 8: Web-based Tools - CARNIVORE
NASA Astrophysics Data System (ADS)
Graham, M. J.
Registries are an integral part of the VO infrastructure, yet the greatest exposure that most users will ever need to have to one is discovering resources through a registry portal. Some users, however, will have resources of their own that they need to register and will go to an existing registry to do so, but a small number will want to set up their own registry. They may have too many resources to register with an existing registry; they may want more control over their resource metadata than an existing registry will afford; or they may want to set up a specialized registry, e.g. a subject-specific one. CARNIVORE is designed to offer those who want their own registry the functionality they require in an off-the-shelf implementation. This chapter describes how to set up your own registry using CARNIVORE.
ERIC Educational Resources Information Center
Mesa, Jennifer Cheryl
2010-01-01
Although young children are major audiences of science museums, limited evidence exists documenting changes in children's knowledge in these settings due in part to the limited number of valid and reliable assessment tools available for use with this population. The purposes of this study were to develop and validate a concept mapping assessment…
Livet, Melanie; Fixsen, Amanda
2018-01-01
With mental health services shifting to community-based settings, community mental health (CMH) organizations are under increasing pressure to deliver effective services. Despite availability of evidence-based interventions, there is a gap between effective mental health practices and the care that is routinely delivered. Bridging this gap requires availability of easily tailorable implementation support tools to assist providers in implementing evidence-based intervention with quality, thereby increasing the likelihood of achieving the desired client outcomes. This study documents the process and lessons learned from exploring the feasibility of adapting such a technology-based tool, Centervention, as the example innovation, for use in CMH settings. Mixed-methods data on core features, innovation-provider fit, and organizational capacity were collected from 44 CMH providers. Lessons learned included the need to augment delivery through technology with more personal interactions, the importance of customizing and integrating the tool with existing technologies, and the need to incorporate a number of strategies to assist with adoption and use of Centervention-like tools in CMH contexts. This study adds to the current body of literature on the adaptation process for technology-based tools and provides information that can guide additional innovations for CMH settings.
The LANDFIRE Total Fuel Change Tool (ToFuΔ) user’s guide
Smail, Tobin; Martin, Charley; Napoli, Jim
2011-01-01
LANDFIRE fuel data were originally developed from coarse-scale existing vegetation type, existing vegetation cover, existing vegetation height, and biophysical setting layers. Fire and fuel specialists from across the country provided input to the original LANDFIRE National (LF_1.0.0) fuel layers to help calibrate fuel characteristics on a more localized scale. The LANDFIRE Total Fuel Change Tool (ToFuΔ) was developed from this calibration process. Vegetation is subject to constant change – and fuels are therefore also dynamic, necessitating a systematic method for reflecting changes spatially so that fire behavior can be accurately assessed. ToFuΔ allows local experts to quickly produce maps that spatially display any proposed fuel characteristic changes. ToFuΔ works through a Microsoft Access database to produce spatial results in ArcMap based on rule sets devised by the user that take into account the existing vegetation type (EVT), existing vegetation cover (EVC), existing vegetation height (EVH), and biophysical setting (BpS) from the LANDFIRE grid data. There are also options within ToFuΔ to add discrete variables in grid format through use of the wildcard option and for subdividing specific areas for different fuel characteristic assignments through the BpS grid. The ToFuΔ user determines the size of the area for assessment by defining a Management Unit, or “MU.” User-defined rule sets made up of EVT, EVC, EVH, and BpS layers, as well as any wildcard selections, are used to change or refine fuel characteristics within the MU. Once these changes have been made to the fuel characteristics, new grids are created for fire behavior analysis or planning. These grids represent the most common ToFuΔ output. ToFuΔ is currently under development and will continue to be updated in the future. The current beta version (0.12), released in March 2011, is compatible with Windows 7 and will be the last release until the fall of 2011.
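A minimal sketch of the kind of rule set the tool applies, assuming the LANDFIRE grids have been loaded as arrays: cells meeting user-defined EVT, EVC, EVH and BpS criteria are reassigned a new fuel model value. All codes and values below are invented, and ToFuΔ itself works through Access and ArcMap rather than Python.

```python
import numpy as np

# Tiny stand-ins for the LANDFIRE grids (all values are invented codes).
evt  = np.array([[2011, 2011], [2053, 2011]])   # existing vegetation type
evc  = np.array([[  65,   40], [  70,   80]])   # canopy cover (%)
evh  = np.array([[  12,    5], [  15,   20]])   # height class
bps  = np.array([[ 710,  710], [ 710,  720]])   # biophysical setting
fbfm = np.array([[ 165,  142], [ 122,  165]])   # current fuel model grid

# One user rule: within EVT 2011, cover >= 60 %, height class >= 10, and BpS 710,
# reassign the fuel model to a hypothetical value 188.
rule_mask = (evt == 2011) & (evc >= 60) & (evh >= 10) & (bps == 710)
new_fbfm = np.where(rule_mask, 188, fbfm)
print(new_fbfm)   # grid ready for fire-behavior analysis or planning
```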
Tsou, Christina; Haynes, Emma; Warner, Wayne D; Gray, Gordon; Thompson, Sandra C
2015-04-23
The need for better partnerships between Aboriginal organisations and mainstream agencies demands attention on process and relational elements of these partnerships, and improving partnership functioning through transformative or iterative evaluation procedures. This paper presents the findings of a literature review which examines the usefulness of existing partnership tools to the Australian Aboriginal-mainstream partnership (AMP) context. Three sets of best practice principles for successful AMP were selected based on the authors' knowledge and experience. Items in each set of principles were separated into process and relational elements and used to guide the analysis of partnership assessment tools. The review and analysis of partnership assessment tools were conducted in three distinct but related parts: part 1 - identify and select reviews of partnership tools; part 2 - identify and select partnership self-assessment tools; part 3 - analyse the selected tools using AMP principles. The focus on relational and process elements in the partnership tools reviewed is consistent with the focus of Australian AMP principles by reconciliation advocates; however, historical context, lived experience, cultural context and approaches of Australian Aboriginal people represent key deficiencies in the tools reviewed. The overall assessment indicated that the New York Partnership Self-Assessment Tool and the VicHealth Partnership Analysis Tools reflect the greatest number of AMP principles, followed by the Nuffield Partnership Assessment Tool. The New York PSAT has the strongest alignment with the relational elements, while the VicHealth and Nuffield tools showed greatest alignment with the process elements in the chosen AMP principles. Partnership tools offer opportunities for providing evidence-based support to partnership development. The multiplicity of tools in existence and the reported uniqueness of each partnership mean that the development of a generic partnership analysis for AMP may not be a viable option for future effort.
Exploratory Causal Analysis in Bivariate Time Series Data
NASA Astrophysics Data System (ADS)
McCracken, James M.
Many scientific disciplines rely on observational data of systems for which it is difficult (or impossible) to implement controlled experiments, and data analysis techniques are required for identifying causal information and relationships directly from observational data. This need has led to the development of many different time series causality approaches and tools including transfer entropy, convergent cross-mapping (CCM), and Granger causality statistics. In this thesis, the existing time series causality method of CCM is extended by introducing a new method called pairwise asymmetric inference (PAI). It is found that CCM may provide counter-intuitive causal inferences for simple dynamics with strong intuitive notions of causality, and the CCM causal inference can be a function of physical parameters that are seemingly unrelated to the existence of a driving relationship in the system. For example, a CCM causal inference might alternate between "voltage drives current" and "current drives voltage" as the frequency of the voltage signal is changed in a series circuit with a single resistor and inductor. PAI is introduced to address both of these limitations. Many of the current approaches in the time series causality literature are not computationally straightforward to apply, do not follow directly from assumptions of probabilistic causality, depend on assumed models for the time series generating process, or rely on embedding procedures. A new approach, called causal leaning, is introduced in this work to avoid these issues. The leaning is found to provide causal inferences that agree with intuition for both simple systems and more complicated empirical examples, including space weather data sets. The leaning may provide a clearer interpretation of the results than those from existing time series causality tools. A practicing analyst can explore the literature to find many proposals for identifying drivers and causal connections in time series data sets, but little research exists on how these tools compare to each other in practice. This work introduces and defines exploratory causal analysis (ECA) to address this issue, along with the concept of data causality in the taxonomy of causal studies introduced in this work. The motivation is to provide a framework for exploring potential causal structures in time series data sets. ECA is used on several synthetic and empirical data sets, and it is found that all of the tested time series causality tools agree with each other (and intuitive notions of causality) for many simple systems but can provide conflicting causal inferences for more complicated systems. It is proposed that such disagreements between different time series causality tools during ECA might provide deeper insight into the data than could be found otherwise.
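The thesis's PAI and leaning measures are not reproduced here; as a minimal point of comparison, the sketch below runs a Granger-style check on synthetic data in which x drives y, comparing the lag-1 prediction error of y with and without x's past.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                      # synthetic system in which x drives y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def lagged_rss(target, predictors):
    """Residual sum of squares of an ordinary least-squares fit of
    target[t] on the listed predictors at lag 1 (plus an intercept)."""
    X = np.column_stack([p[:-1] for p in predictors] + [np.ones(len(target) - 1)])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    residuals = target[1:] - X @ beta
    return float(residuals @ residuals)

rss_restricted = lagged_rss(y, [y])        # y's own past only
rss_full = lagged_rss(y, [y, x])           # add x's past
print("RSS drop when adding x's past:", rss_restricted - rss_full)
# A large drop is Granger-style evidence that x helps predict (drives) y;
# swapping the roles of x and y gives the reverse direction for comparison.
```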
Yucca Mountain licensing support network archive assistant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Bauer, Travis L.; Verzi, Stephen J.
2008-03-01
This report describes the Licensing Support Network (LSN) Assistant--a set of tools for categorizing e-mail messages and documents, and investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool for recategorizing manually labeled e-mail messages and documents and the LSN Realtime Assistant (LSNRA) tool for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool. There are two main components of the LSNAA tool. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the actual user interface, which primarily interacts with the Categorization Database, providing a way for finding and correcting categorization errors in the database. A procedure for applying the LSNAA tool and an example use case of the LSNAA tool applied to a set of e-mail messages are provided. Performance results of the categorization model designed for this example use case are presented.
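The report does not reproduce the internals of the Sandia Categorization Framework; purely as an illustration of the kind of supervised document categorization such a framework performs, the sketch below trains a TF-IDF plus logistic-regression pipeline with scikit-learn on toy, hypothetical labelled messages.

    # Generic supervised e-mail/document categorization sketch (toy data; not
    # the actual Sandia Categorization Framework).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled_docs = ["quarterly licensing review schedule",      # hypothetical messages
                    "borehole geology sampling results",
                    "licensing correspondence with counsel",
                    "thermal modeling of the repository"]
    labels = ["licensing", "technical", "licensing", "technical"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(labeled_docs, labels)
    print(model.predict(["draft licensing review memo"]))       # proposed category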
Hallett, Allen M.; Parker, Nathan; Kudia, Ousswa; Kao, Dennis; Modelska, Maria; Rifai, Hanadi; O’Connor, Daniel P.
2015-01-01
Objectives. We developed the policy indicator checklist (PIC) to identify and measure policies for calorie-dense foods and sugar-sweetened beverages to determine how policies are clustered across multiple settings. Methods. In 2012 and 2013 we used existing literature, policy documents, government recommendations, and instruments to identify key policies. We then developed the PIC to examine the policy environments across 3 settings (communities, schools, and early care and education centers) in 8 communities participating in the Childhood Obesity Research Demonstration Project. Results. Principal components analysis revealed 5 components related to calorie-dense food policies and 4 components related to sugar-sweetened beverage policies. Communities with higher youth and racial/ethnic minority populations tended to have fewer and weaker policy environments concerning calorie-dense foods and healthy foods and beverages. Conclusions. The PIC was a helpful tool to identify policies that promote healthy food environments across multiple settings and to measure and compare the overall policy environments across communities. There is need for improved coordination across settings, particularly in areas with greater concentration of youths and racial/ethnic minority populations. Policies to support healthy eating are not equally distributed across communities, and disparities continue to exist in nutrition policies. PMID:25790397
Lee, Rebecca E; Hallett, Allen M; Parker, Nathan; Kudia, Ousswa; Kao, Dennis; Modelska, Maria; Rifai, Hanadi; O'Connor, Daniel P
2015-05-01
We developed the policy indicator checklist (PIC) to identify and measure policies for calorie-dense foods and sugar-sweetened beverages to determine how policies are clustered across multiple settings. In 2012 and 2013 we used existing literature, policy documents, government recommendations, and instruments to identify key policies. We then developed the PIC to examine the policy environments across 3 settings (communities, schools, and early care and education centers) in 8 communities participating in the Childhood Obesity Research Demonstration Project. Principal components analysis revealed 5 components related to calorie-dense food policies and 4 components related to sugar-sweetened beverage policies. Communities with higher youth and racial/ethnic minority populations tended to have fewer and weaker policy environments concerning calorie-dense foods and healthy foods and beverages. The PIC was a helpful tool to identify policies that promote healthy food environments across multiple settings and to measure and compare the overall policy environments across communities. There is need for improved coordination across settings, particularly in areas with greater concentration of youths and racial/ethnic minority populations. Policies to support healthy eating are not equally distributed across communities, and disparities continue to exist in nutrition policies.
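The abstract reports a principal components analysis of policy items; as a hedged illustration only, the sketch below shows how such an analysis might be run on a small, entirely hypothetical binary policy-indicator matrix (rows are communities or settings, columns are policy items) using scikit-learn.

    # PCA on a toy policy-indicator matrix; 1 = policy present, 0 = absent.
    import numpy as np
    from sklearn.decomposition import PCA

    indicators = np.array([
        [1, 1, 0, 1, 0, 1],
        [0, 1, 0, 0, 0, 1],
        [1, 0, 1, 1, 1, 0],
        [1, 1, 1, 1, 0, 1],
    ])
    pca = PCA(n_components=2)
    scores = pca.fit_transform(indicators)
    print(pca.explained_variance_ratio_)   # variance captured per component
    print(scores)                          # per-community component scores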
XML schemas for common bioinformatic data types and their application in workflow systems
Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert
2006-01-01
Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823
Collins, Sarah; Hurley, Ann C; Chang, Frank Y; Illa, Anisha R; Benoit, Angela; Laperle, Sarah; Dykes, Patricia C
2014-01-01
Maintaining continuity of care (CoC) in the inpatient setting is dependent on aligning goals and tasks with the plan of care (POC) during multidisciplinary rounds (MDRs). A number of locally developed rounding tools exist, yet there is a lack of standard content and functional specifications for electronic tools to support MDRs within and across settings. To identify content and functional requirements for an MDR tool to support CoC. We collected discrete clinical data elements (CDEs) discussed during rounds for 128 acute and critical care patients. To capture CDEs, we developed and validated an iPad-based observational tool based on informatics CoC standards. We observed 19 days of rounds and conducted eight group and individual interviews. Descriptive and bivariate statistics and network visualization were conducted to understand associations between CDEs discussed during rounds with a particular focus on the POC. Qualitative data were thematically analyzed. All analyses were triangulated. We identified the need for universal and configurable MDR tool views across settings and users and the provision of messaging capability. Eleven empirically derived universal CDEs were identified, including four POC CDEs: problems, plan, goals, and short-term concerns. Configurable POC CDEs were: rationale, tasks/'to dos', pending results and procedures, discharge planning, patient preferences, need for urgent review, prognosis, and advice/guidance. Some requirements differed between settings; yet, there was overlap between POC CDEs. We recommend an initial list of 11 universal CDEs for continuity in MDRs across settings and 27 CDEs that can be configured to meet setting-specific needs.
Coeli M. Hoover
2010-01-01
Although long-term research is a critical tool for answering forest management questions, managers must often make decisions before results from such experiments are available. One way to meet those information needs is to reanalyze existing long-term data sets to address current research questions; the Forest Service Experimental Forests and Ranges (EFRs) network...
Mentoring: the retention factor in the acute care setting.
Funderburk, Amy E
2008-01-01
The most difficult time to retain staff nurses can be the first year after hire. Because of the high costs of recruitment and orientation, retention of these new employees is essential. Mentoring is a viable retention tool for the new employee and for existing experienced nurses. Mentoring also provides professional growth benefits that appeal to existing employees and increase their job enjoyment and satisfaction.
Dependency visualization for complex system understanding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, J. Allison Cory
1994-09-01
With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs. The list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem. The ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed-graph-based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
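As a small, self-contained illustration of the directed-graph foundation the dissertation builds on (not its actual visualization constructs), the sketch below models component dependencies with networkx, reports mutually dependent "spaghetti" clusters as strongly connected components, and derives a dependencies-first ordering from the condensed graph; the component names are hypothetical.

    # Component dependencies as a directed graph; SCCs expose dependency cycles.
    import networkx as nx

    deps = [("ui", "core"), ("core", "db"), ("db", "core"), ("report", "core")]
    g = nx.DiGraph(deps)

    for scc in nx.strongly_connected_components(g):
        if len(scc) > 1:                              # mutually dependent components
            print("cycle cluster:", sorted(scc))

    cond = nx.condensation(g)                         # collapse each SCC to one node
    order = [sorted(cond.nodes[n]["members"])
             for n in reversed(list(nx.topological_sort(cond)))]
    print("build order (dependencies first):", order)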
Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
SIEBER, CHRISTIAN
Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone. Included were three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.
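The published DAS Tool algorithm is more involved than can be shown here; purely as a loose, hypothetical illustration of the dereplicate-and-score idea, the sketch below scores candidate bins by single-copy-gene content and greedily keeps the best non-overlapping bins. The scoring function, bin names and contig identifiers are all made up.

    # Toy bin dereplication: score bins from several binners, keep the best
    # non-overlapping ones (NOT the published DAS Tool scoring function).
    def score(b):
        return len(b["unique_scg"]) - 2 * b["duplicated_scg"]

    candidates = [   # hypothetical bins produced by different binning tools
        {"name": "binnerA_1", "contigs": {"c1", "c2", "c3"},
         "unique_scg": set("abcde"), "duplicated_scg": 0},
        {"name": "binnerB_7", "contigs": {"c2", "c3"},
         "unique_scg": set("abc"), "duplicated_scg": 1},
    ]

    selected, used = [], set()
    for b in sorted(candidates, key=score, reverse=True):
        if not (b["contigs"] & used):    # skip bins that reuse assigned contigs
            selected.append(b["name"])
            used |= b["contigs"]
    print(selected)                      # ['binnerA_1']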
Scalable Visual Analytics of Massive Textual Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.
2007-04-01
This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.
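The paper's parallel engine itself is not shown here; as a minimal sketch of the data-parallel pattern it describes (splitting a corpus across workers and merging partial results), the example below counts terms per document with Python's multiprocessing on a toy corpus.

    # Data-parallel term counting over a toy corpus; partial counts from worker
    # processes are merged in the parent.  Illustrative sketch only.
    from collections import Counter
    from multiprocessing import Pool

    def term_counts(doc):
        return Counter(doc.lower().split())

    if __name__ == "__main__":
        docs = ["visual analytics of large text collections",
                "scalable text processing for analysts"] * 1000
        with Pool() as pool:
            partials = pool.map(term_counts, docs, chunksize=100)
        total = sum(partials, Counter())
        print(total.most_common(3))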
Modeling and Simulation Tools for Heavy Lift Airships
NASA Technical Reports Server (NTRS)
Hochstetler, Ron; Chachad, Girish; Hardy, Gordon; Blanken, Matthew; Melton, John
2016-01-01
For conventional fixed wing and rotary wing aircraft a variety of modeling and simulation tools have been developed to provide designers the means to thoroughly investigate proposed designs and operational concepts. However, lighter-than-air (LTA) airships, hybrid air vehicles, and aerostats have some important aspects that are different from heavier-than-air (HTA) vehicles. In order to account for these differences, modifications are required to the standard design tools to fully characterize the LTA vehicle design and performance parameters. To address these LTA design and operational factors, LTA development organizations have created unique proprietary modeling tools, often at their own expense. An expansion of this limited LTA tool set could be accomplished by leveraging existing modeling and simulation capabilities available in the National laboratories and public research centers. Development of an expanded set of publicly available LTA modeling and simulation tools for LTA developers would mitigate the reliance on proprietary LTA design tools in use today. A set of well researched, open source, high fidelity LTA design modeling and simulation tools would advance LTA vehicle development and also provide the analytical basis for accurate LTA operational cost assessments. This paper will present the modeling and analysis tool capabilities required for LTA vehicle design, analysis of operations, and full life-cycle support. The tools currently available will be surveyed to identify the gaps between their capabilities and the LTA industry's needs. Options for development of new modeling and analysis capabilities to supplement contemporary tools will also be presented.
Leveraging Existing Mission Tools in a Re-Usable, Component-Based Software Environment
NASA Technical Reports Server (NTRS)
Greene, Kevin; Grenander, Sven; Kurien, James; O'Reilly, Taifun
2006-01-01
Emerging methods in component-based software development offer significant advantages but may seem incompatible with existing mission operations applications. In this paper we relate our positive experiences integrating existing mission applications into component-based tools we are delivering to three missions. In most operations environments, a number of software applications have been integrated together to form the mission operations software. In contrast, with component-based software development, chunks of related functionality and data structures, referred to as components, can be individually delivered, integrated and re-used. With the advent of powerful tools for managing component-based development, complex software systems can potentially see significant benefits in ease of integration, testability and reusability from these techniques. These benefits motivate us to ask how component-based development techniques can be relevant in a mission operations environment, where there is significant investment in software tools that are not component-based and may not be written in languages for which component-based tools even exist. Trusted and complex software tools for sequencing, validation, navigation, and other vital functions cannot simply be re-written or abandoned in order to gain the advantages offered by emerging component-based software techniques. Thus some middle ground must be found. We have faced exactly this issue, and have found several solutions. Ensemble is an open platform for development, integration, and deployment of mission operations software that we are developing. Ensemble itself is an extension of an open source, component-based software development platform called Eclipse. Due to the advantages of component-based development, we have been able to very rapidly develop mission operations tools for three surface missions by mixing and matching from a common set of mission operation components. We have also had to determine how to integrate existing mission applications for sequence development, sequence validation, high-level activity planning, and other functions into a component-based environment. For each of these, we used a somewhat different technique based upon the structure and usage of the existing application.
An Interactive, Web-Based Approach to Metadata Authoring
NASA Technical Reports Server (NTRS)
Pollack, Janine; Wharton, Stephen W. (Technical Monitor)
2001-01-01
NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically update when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools
Parker, Dianne; Wensing, Michel; Esmail, Aneez; Valderas, Jose M
2015-09-01
There is little guidance available to healthcare practitioners about what tools they might use to assess the patient safety culture. To identify useful tools for assessing patient safety culture in primary care organizations in Europe; to identify those aspects of performance that should be assessed when investigating the relationship between safety culture and performance in primary care. Two consensus-based studies were carried out, in which subject matter experts and primary healthcare professionals from several EU states rated (a) the applicability to their healthcare system of several existing safety culture assessment tools and (b) the appropriateness and usefulness of a range of potential indicators of a positive patient safety culture to primary care settings. The safety culture tools were field-tested in four countries to ascertain any challenges and issues arising when used in primary care. The two existing tools that received the most favourable ratings were the Manchester patient safety framework (MaPsAF primary care version) and the Agency for healthcare research and quality survey (medical office version). Several potential safety culture process indicators were identified. The one that emerged as offering the best combination of appropriateness and usefulness related to the collection of data on adverse patient events. Two tools, one quantitative and one qualitative, were identified as applicable and useful in assessing patient safety culture in primary care settings in Europe. Safety culture indicators in primary care should focus on the processes rather than the outcomes of care.
Parker, Dianne; Wensing, Michel; Esmail, Aneez; Valderas, Jose M
2015-01-01
ABSTRACT Background: There is little guidance available to healthcare practitioners about what tools they might use to assess the patient safety culture. Objective: To identify useful tools for assessing patient safety culture in primary care organizations in Europe; to identify those aspects of performance that should be assessed when investigating the relationship between safety culture and performance in primary care. Methods: Two consensus-based studies were carried out, in which subject matter experts and primary healthcare professionals from several EU states rated (a) the applicability to their healthcare system of several existing safety culture assessment tools and (b) the appropriateness and usefulness of a range of potential indicators of a positive patient safety culture to primary care settings. The safety culture tools were field-tested in four countries to ascertain any challenges and issues arising when used in primary care. Results: The two existing tools that received the most favourable ratings were the Manchester patient safety framework (MaPsAF primary care version) and the Agency for healthcare research and quality survey (medical office version). Several potential safety culture process indicators were identified. The one that emerged as offering the best combination of appropriateness and usefulness related to the collection of data on adverse patient events. Conclusion: Two tools, one quantitative and one qualitative, were identified as applicable and useful in assessing patient safety culture in primary care settings in Europe. Safety culture indicators in primary care should focus on the processes rather than the outcomes of care. PMID:26339832
Lennox, Laura; Doyle, Cathal; Reed, Julie E
2017-01-01
Objectives Although improvement initiatives show benefits to patient care, they often fail to sustain. Models and frameworks exist to address this challenge, but issues with design, clarity and usability have been barriers to use in healthcare settings. This work aimed to collaborate with stakeholders to develop a sustainability tool relevant to people in healthcare settings and practical for use in improvement initiatives. Design Tool development was conducted in six stages. A scoping literature review, group discussions and a stakeholder engagement event explored literature findings and their resonance with stakeholders in healthcare settings. Interviews, small-scale trialling and piloting explored the design and tested the practicality of the tool in improvement initiatives. Setting National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). Participants CLAHRC NWL improvement initiative teams and staff. Results The iterative design process and engagement of stakeholders informed the articulation of the sustainability factors identified from the literature and guided tool design for practical application. Key iterations of factors and tool design are discussed. From the development process, the Long Term Success Tool (LTST) has been designed. The Tool supports those implementing improvements to reflect on 12 sustainability factors to identify risks to increase chances of achieving sustainability over time. The Tool is designed to provide a platform for improvement teams to share their own views on sustainability as well as learn about the different views held within their team to prompt discussion and actions. Conclusion The development of the LTST has reinforced the importance of working with stakeholders to design strategies which respond to their needs and preferences and can practically be implemented in real-world settings. Further research is required to study the use and effectiveness of the tool in practice and assess engagement with the method over time. PMID:28947436
Yang, Qian; Wang, Shuyuan; Dai, Enyu; Zhou, Shunheng; Liu, Dianming; Liu, Haizhou; Meng, Qianqian; Jiang, Bin; Jiang, Wei
2017-08-16
Pathway enrichment analysis has been widely used to identify cancer risk pathways, and contributes to elucidating the mechanism of tumorigenesis. However, most of the existing approaches use outdated pathway information and neglect the complex gene interactions in pathways. Here, we first briefly reviewed the existing widely used pathway enrichment analysis approaches, and then we proposed a novel topology-based pathway enrichment analysis (TPEA) method, which integrated topological properties and global upstream/downstream positions of genes in pathways. We compared TPEA with four widely used pathway enrichment analysis tools, including database for annotation, visualization and integrated discovery (DAVID), gene set enrichment analysis (GSEA), centrality-based pathway enrichment (CePa) and signaling pathway impact analysis (SPIA), through analyzing six gene expression profiles of three tumor types (colorectal cancer, thyroid cancer and endometrial cancer). As a result, we identified several well-known cancer risk pathways that could not be obtained by the existing tools, and the results of TPEA were more stable than that of the other tools in analyzing different data sets of the same cancer. Ultimately, we developed an R package to implement TPEA, which can update KEGG pathway information online and is available at the Comprehensive R Archive Network (CRAN): https://cran.r-project.org/web/packages/TPEA/.
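For orientation, the baseline that topology-aware methods such as TPEA extend is simple over-representation analysis; the sketch below runs the standard hypergeometric test with SciPy on made-up counts and does not reproduce TPEA's topology weighting.

    # Classic over-representation test for one pathway (toy counts).
    from scipy.stats import hypergeom

    N = 20000   # background genes
    K = 150     # genes annotated to the pathway
    n = 400     # differentially expressed (DE) genes
    k = 12      # DE genes falling in the pathway

    p_value = hypergeom.sf(k - 1, N, K, n)   # P(X >= k)
    print(f"enrichment p-value: {p_value:.3g}")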
BeadArray Expression Analysis Using Bioconductor
Ritchie, Matthew E.; Dunning, Mark J.; Smith, Mike L.; Shi, Wei; Lynch, Andy G.
2011-01-01
Illumina whole-genome expression BeadArrays are a popular choice in gene profiling studies. Aside from the vendor-provided software tools for analyzing BeadArray expression data (GenomeStudio/BeadStudio), there exists a comprehensive set of open-source analysis tools in the Bioconductor project, many of which have been tailored to exploit the unique properties of this platform. In this article, we explore a number of these software packages and demonstrate how to perform a complete analysis of BeadArray data in various formats. The key steps of importing data, performing quality assessments, preprocessing, and annotation in the common setting of assessing differential expression in designed experiments will be covered. PMID:22144879
GenomicTools: a computational platform for developing high-throughput analytics in genomics.
Tsirigos, Aristotelis; Haiminen, Niina; Bilal, Erhan; Utro, Filippo
2012-01-15
Recent advances in sequencing technology have resulted in the dramatic increase of sequencing data, which, in turn, requires efficient management of computational resources, such as computing time, memory requirements as well as prototyping of computational pipelines. We present GenomicTools, a flexible computational platform, comprising both a command-line set of tools and a C++ API, for the analysis and manipulation of high-throughput sequencing data such as DNA-seq, RNA-seq, ChIP-seq and MethylC-seq. GenomicTools implements a variety of mathematical operations between sets of genomic regions thereby enabling the prototyping of computational pipelines that can address a wide spectrum of tasks ranging from pre-processing and quality control to meta-analyses. Additionally, the GenomicTools platform is designed to analyze large datasets of any size by minimizing memory requirements. In practical applications, where comparable, GenomicTools outperforms existing tools in terms of both time and memory usage. The GenomicTools platform (version 2.0.0) was implemented in C++. The source code, documentation, user manual, example datasets and scripts are available online at http://code.google.com/p/ibm-cbc-genomic-tools.
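GenomicTools itself is a C++ command-line tool set and API; the pure-Python sketch below only illustrates the kind of region-set operation it provides, here a simple overlap join that reports overlapping pairs between two lists of half-open regions. The coordinates are toy values.

    # All overlapping pairs between two region sets; regions are half-open
    # (chrom, start, end) tuples.  Simple illustrative implementation.
    def overlaps(a, b):
        by_chrom = {}
        for r in b:
            by_chrom.setdefault(r[0], []).append(r)
        for regions in by_chrom.values():
            regions.sort()                      # sort by start within each chromosome
        for ca, sa, ea in a:
            for cb, sb, eb in by_chrom.get(ca, []):
                if sb >= ea:
                    break                       # later b regions start even further right
                if eb > sa:
                    yield (ca, sa, ea), (cb, sb, eb)

    peaks = [("chr1", 100, 200), ("chr1", 500, 650)]
    genes = [("chr1", 150, 400), ("chr2", 10, 90)]
    print(list(overlaps(peaks, genes)))         # one overlapping pair on chr1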
Defining a caring hospital by using currently implemented survey tools.
Jennings, Nancy
2010-09-01
Health care organizations are constantly striving to provide more cost-effective and higher quality treatment within a caring environment. However, balancing the demands of regulatory agencies with the holistic needs of the patient is challenging. A further challenge is how to identify those hospitals that provide an exceptional caring environment for their patients. By using survey tools that are already being administered in hospital settings, the opportunity exists to analyze the results obtained from these tools to define a hospital as a caring organization without the added burden of separate data collection.
Odaga, John; Henriksson, Dorcus K; Nkolo, Charles; Tibeihaho, Hector; Musabe, Richard; Katusiime, Margaret; Sinabulya, Zaccheus; Mucunguzi, Stephen; Mbonye, Anthony K; Valadez, Joseph J
2016-01-01
Local health system managers in low- and middle-income countries have the responsibility to set health priorities and allocate resources accordingly. Although tools exist to aid this process, they are not widely applied for various reasons including non-availability, poor knowledge of the tools, and poor adaptability into the local context. In Uganda, delivery of basic services is devolved to the District Local Governments through the District Health Teams (DHTs). The Community and District Empowerment for Scale-up (CODES) project aims to provide a set of management tools that aid contextualised priority setting, fund allocation, and problem-solving in a systematic way to improve effective coverage and quality of child survival interventions. Although the various tools have previously been used at the national level, the project aims to combine them in an integral way for implementation at the district level. These tools include Lot Quality Assurance Sampling (LQAS) surveys to generate local evidence, Bottleneck analysis and Causal analysis as analytical tools, Continuous Quality Improvement, and Community Dialogues based on Citizen Report Cards and U reports. The tools enable identification of gaps, prioritisation of possible solutions, and allocation of resources accordingly. This paper presents some of the tools used by the project in five districts in Uganda during the proof-of-concept phase of the project. All five districts were trained and participated in LQAS surveys and readily adopted the tools for priority setting and resource allocation. All districts developed health operational work plans, which were based on the evidence and each of the districts implemented more than three of the priority activities which were included in their work plans. In the five districts, the CODES project demonstrated that DHTs can adopt and integrate these tools in the planning process by systematically identifying gaps and setting priority interventions for child survival.
Odaga, John; Henriksson, Dorcus K.; Nkolo, Charles; Tibeihaho, Hector; Musabe, Richard; Katusiime, Margaret; Sinabulya, Zaccheus; Mucunguzi, Stephen; Mbonye, Anthony K.; Valadez, Joseph J.
2016-01-01
Background Local health system managers in low- and middle-income countries have the responsibility to set health priorities and allocate resources accordingly. Although tools exist to aid this process, they are not widely applied for various reasons including non-availability, poor knowledge of the tools, and poor adaptability into the local context. In Uganda, delivery of basic services is devolved to the District Local Governments through the District Health Teams (DHTs). The Community and District Empowerment for Scale-up (CODES) project aims to provide a set of management tools that aid contextualised priority setting, fund allocation, and problem-solving in a systematic way to improve effective coverage and quality of child survival interventions. Design Although the various tools have previously been used at the national level, the project aims to combine them in an integral way for implementation at the district level. These tools include Lot Quality Assurance Sampling (LQAS) surveys to generate local evidence, Bottleneck analysis and Causal analysis as analytical tools, Continuous Quality Improvement, and Community Dialogues based on Citizen Report Cards and U reports. The tools enable identification of gaps, prioritisation of possible solutions, and allocation of resources accordingly. This paper presents some of the tools used by the project in five districts in Uganda during the proof-of-concept phase of the project. Results All five districts were trained and participated in LQAS surveys and readily adopted the tools for priority setting and resource allocation. All districts developed health operational work plans, which were based on the evidence and each of the districts implemented more than three of the priority activities which were included in their work plans. Conclusions In the five districts, the CODES project demonstrated that DHTs can adopt and integrate these tools in the planning process by systematically identifying gaps and setting priority interventions for child survival. PMID:27225791
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.
1993-01-01
A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of Emeraude environment over the project time frame is summarized, and several related areas for future research are summarized.
Collins, Sarah; Hurley, Ann C; Chang, Frank Y; Illa, Anisha R; Benoit, Angela; Laperle, Sarah; Dykes, Patricia C
2014-01-01
Background Maintaining continuity of care (CoC) in the inpatient setting is dependent on aligning goals and tasks with the plan of care (POC) during multidisciplinary rounds (MDRs). A number of locally developed rounding tools exist, yet there is a lack of standard content and functional specifications for electronic tools to support MDRs within and across settings. Objective To identify content and functional requirements for an MDR tool to support CoC. Materials and methods We collected discrete clinical data elements (CDEs) discussed during rounds for 128 acute and critical care patients. To capture CDEs, we developed and validated an iPad-based observational tool based on informatics CoC standards. We observed 19 days of rounds and conducted eight group and individual interviews. Descriptive and bivariate statistics and network visualization were conducted to understand associations between CDEs discussed during rounds with a particular focus on the POC. Qualitative data were thematically analyzed. All analyses were triangulated. Results We identified the need for universal and configurable MDR tool views across settings and users and the provision of messaging capability. Eleven empirically derived universal CDEs were identified, including four POC CDEs: problems, plan, goals, and short-term concerns. Configurable POC CDEs were: rationale, tasks/‘to dos’, pending results and procedures, discharge planning, patient preferences, need for urgent review, prognosis, and advice/guidance. Discussion Some requirements differed between settings; yet, there was overlap between POC CDEs. Conclusions We recommend an initial list of 11 universal CDEs for continuity in MDRs across settings and 27 CDEs that can be configured to meet setting-specific needs. PMID:24081019
MAPPER: A personal computer map projection tool
NASA Technical Reports Server (NTRS)
Bailey, Steven A.
1993-01-01
MAPPER is a set of software tools designed to let users create and manipulate map projections on a personal computer (PC). The capability exists to generate five popular map projections. These include azimuthal, cylindrical, Mercator, Lambert, and sinusoidal projections. Data for projections are contained in five coordinate databases at various resolutions. MAPPER is managed by a system of pull-down windows. This interface allows the user to intuitively create, view and export maps to other platforms.
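MAPPER's own projection code is not reproduced here; as a sketch of the kind of transformation such a tool implements, the example below applies the standard spherical Mercator forward projection to a single longitude/latitude pair.

    # Standard spherical Mercator forward projection (illustrative sketch).
    import math

    def mercator(lon_deg, lat_deg, radius_km=6371.0):
        """Return (x, y) map coordinates in kilometres on a spherical Earth."""
        lon, lat = math.radians(lon_deg), math.radians(lat_deg)
        x = radius_km * lon
        y = radius_km * math.log(math.tan(math.pi / 4 + lat / 2))
        return x, y

    print(mercator(-77.0, 38.9))   # roughly (-8560, 4700) km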
A toolbox and a record for scientific model development
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1994-01-01
Scientific computation can benefit from software tools that facilitate construction of computational models, control the application of models, and aid in revising models to handle new situations. Existing environments for scientific programming provide only limited means of handling these tasks. This paper describes a two-pronged approach for handling these tasks: (1) designing a 'Model Development Toolbox' that includes a basic set of model-constructing operations; and (2) designing a 'Model Development Record' that is automatically generated during model construction. The record is subsequently exploited by tools that control the application of scientific models and revise models to handle new situations. Our two-pronged approach is motivated by our belief that the model development toolbox and record should be highly interdependent. In particular, a suitable model development record can be constructed only when models are developed using a well-defined set of operations. We expect this research to facilitate rapid development of new scientific computational models, to help ensure appropriate use of such models and to facilitate sharing of such models among working computational scientists. We are testing this approach by extending SIGMA, an existing knowledge-based scientific software design tool.
XML schemas for common bioinformatic data types and their application in workflow systems.
Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert
2006-11-06
Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.
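The concrete HOBIT/BioDOM element vocabulary is defined by the schemas themselves and is not reproduced here; the sketch below only illustrates, with hypothetical element names, how a tool might consume a schema-conformant XML sequence record using Python's standard library.

    # Reading a toy XML sequence record; element names are hypothetical, not
    # the actual HOBIT/BioDOM vocabulary.
    import xml.etree.ElementTree as ET

    doc = """<sequenceSet>
      <sequence id="seq1"><name>example RNA</name><residues>ACGUACGU</residues></sequence>
    </sequenceSet>"""

    root = ET.fromstring(doc)
    for seq in root.findall("sequence"):
        print(seq.get("id"), seq.findtext("name"), len(seq.findtext("residues")))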
Iterative user centered design for development of a patient-centered fall prevention toolkit.
Katsulis, Zachary; Ergai, Awatef; Leung, Wai Yin; Schenkel, Laura; Rai, Amisha; Adelman, Jason; Benneyan, James; Bates, David W; Dykes, Patricia C
2016-09-01
Due to the large number of falls that occur in hospital settings, inpatient fall prevention is a topic of great interest to patients and health care providers. The use of electronic decision support that tailors fall prevention strategy to patient-specific risk factors, known as Fall T.I.P.S (Tailoring Interventions for Patient Safety), has proven to be an effective approach for decreasing hospital falls. A paper version of the Fall T.I.P.S toolkit was developed primarily for hospitals that do not have the resources to implement the electronic solution; however, more work is needed to optimize the effectiveness of the paper version of this tool. We examined the use of human factors techniques in the redesign of the existing paper fall prevention tool with the goal of increasing ease of use and decreasing inpatient falls. The inclusion of patients and clinical staff in the redesign of the existing tool was done to increase adoption of the tool and fall prevention best practices. The redesigned paper Fall T.I.P.S toolkit showcased a built-in clinical decision support system and increased ease of use over the existing version.
Rostami, Paryaneh; Ashcroft, Darren M; Tully, Mary P
2018-01-01
Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England's National Health Service. This study aimed to explore the implementation of the tool into routine practice from users' perspectives. Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Secondary care staff understood that the Medication Safety Thermometer's purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of "capacity". However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however, a number of difficulties still exist, particularly in primary care settings, where a different approach is likely to be required.
Ashcroft, Darren M.; Tully, Mary P.
2018-01-01
Background Reducing medication-related harm is a global priority; however, impetus for improvement is impeded as routine medication safety data are seldom available. Therefore, the Medication Safety Thermometer was developed within England’s National Health Service. This study aimed to explore the implementation of the tool into routine practice from users’ perspectives. Method Fifteen semi-structured interviews were conducted with purposely sampled National Health Service staff from primary and secondary care settings. Interview data were analysed using an initial thematic analysis, and subsequent analysis using Normalisation Process Theory. Results Secondary care staff understood that the Medication Safety Thermometer’s purpose was to measure medication safety and improvement. However, other uses were reported, such as pinpointing poor practice. Confusion about its purpose existed in primary care, despite further training, suggesting unsuitability of the tool. Decreased engagement was displayed by staff less involved with medication use, who displayed less ownership. Nonetheless, these advocates often lacked support from management and frontline levels, leading to an overall lack of engagement. Many participants reported efforts to drive scale-up of the use of the tool, for example, by securing funding, despite uncertainty around how to use data. Successful improvement was often at ward-level and went unrecognised within the wider organisation. There was mixed feedback regarding the value of the tool, often due to a perceived lack of “capacity”. However, participants demonstrated interest in learning how to use their data and unexpected applications of data were reported. Conclusion Routine medication safety data collection is complex, but achievable and facilitates improvements. However, collected data must be analysed, understood and used for further work to achieve improvement, which often does not happen. The national roll-out of the tool has accelerated shared learning; however, a number of difficulties still exist, particularly in primary care settings, where a different approach is likely to be required. PMID:29489842
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-08-01
We have developed a new, fully automated tool for the centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection where station components with various instrumental disturbances are rejected, and full-waveform inversion in a space-time grid around a provided hypocentre. A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequency ranges. The method is tested on synthetic and observed data. It is applied to a data set from the Swiss seismic network, and the results are compared with the existing high-quality MT catalogue. The software package programmed in Python is designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large pre-existing earthquake catalogues and data sets.
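The ISOLA package itself is not reproduced here; purely as a sketch of the underlying idea (a grid search over trial source parameters with a Gaussian likelihood weighted by a noise covariance matrix estimated from pre-event noise), the example below fits a single hypothetical scalar source parameter to synthetic data.

    # Covariance-weighted grid search over a toy scalar source parameter
    # (illustrates the Bayesian grid-search idea only, not the ISOLA code).
    import numpy as np

    def log_likelihood(d_obs, d_syn, C_inv):
        r = d_obs - d_syn
        return -0.5 * r @ C_inv @ r

    rng = np.random.default_rng(1)
    n = 200
    C_inv = np.diag(1.0 / rng.uniform(0.5, 2.0, n))   # inverse noise covariance (toy, diagonal)

    t = np.linspace(0, 10, n)
    true_m = 3.0                                      # hypothetical source strength
    d_obs = true_m * np.sin(t) + rng.normal(0, 1, n)

    grid = np.linspace(0, 6, 61)                      # trial source strengths
    logL = [log_likelihood(d_obs, m * np.sin(t), C_inv) for m in grid]
    print("best trial value:", grid[int(np.argmax(logL))])   # close to 3.0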
Ocular Fundus Photography as an Educational Tool.
Mackay, Devin D; Garza, Philip S
2015-10-01
The proficiency of nonophthalmologists with direct ophthalmoscopy is poor, which has prompted a search for alternative technologies to examine the ocular fundus. Although ocular fundus photography has existed for decades, its use has been traditionally restricted to ophthalmology clinical care settings and textbooks. Recent research has shown a role for nonmydriatic fundus photography in nonophthalmic settings, encouraging more widespread adoption of fundus photography technology. Recent studies have also affirmed the role of fundus photography as an adjunct or alternative to direct ophthalmoscopy in undergraduate medical education. In this review, the authors examine the use of ocular fundus photography as an educational tool and suggest future applications for this important technology. Novel applications of fundus photography as an educational tool have the potential to resurrect the dying art of funduscopy.
Moving Large Data Sets Over High-Performance Long Distance Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodson, Stephen W; Poole, Stephen W; Ruwart, Thomas
2011-04-01
In this project we look at the performance characteristics of three tools used to move large data sets over dedicated long distance networking infrastructure. Although performance studies of wide area networks have been a frequent topic of interest, performance analyses have tended to focus on network latency characteristics and peak throughput using network traffic generators. In this study we instead perform an end-to-end long distance networking analysis that includes reading large data sets from a source file system and committing large data sets to a destination file system. An evaluation of end-to-end data movement is also an evaluation of the system configurations employed and the tools used to move the data. For this paper, we have built several storage platforms and connected them with a high performance long distance network configuration. We use these systems to analyze the capabilities of three data movement tools: BBcp, GridFTP, and XDD. Our studies demonstrate that existing data movement tools do not provide efficient performance levels or exercise the storage devices in their highest performance modes. We describe the device information required to achieve high levels of I/O performance and discuss how this data is applicable in use cases beyond data movement performance.
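The study benchmarks BBcp, GridFTP and XDD over a dedicated long-distance network; the trivial sketch below only illustrates the end-to-end measurement idea (time the read-from-source and write-to-destination path rather than the network alone), using hypothetical file paths.

    # End-to-end copy throughput in MB/s (local illustration only).
    import os, shutil, time

    def copy_throughput_mb_s(src, dst, buf_bytes=16 * 1024 * 1024):
        start = time.perf_counter()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            shutil.copyfileobj(fin, fout, length=buf_bytes)
        elapsed = time.perf_counter() - start
        return os.path.getsize(src) / elapsed / 1e6

    # print(copy_throughput_mb_s("/data/bigfile.dat", "/scratch/bigfile.dat"))  # hypothetical paths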
Development of a web-based toolkit to support improvement of care coordination in primary care.
Ganz, David A; Barnard, Jenny M; Smith, Nina Z Y; Miake-Lye, Isomi M; Delevan, Deborah M; Simon, Alissa; Rose, Danielle E; Stockdale, Susan E; Chang, Evelyn T; Noël, Polly H; Finley, Erin P; Lee, Martin L; Zulman, Donna M; Cordasco, Kristina M; Rubenstein, Lisa V
2018-05-23
Promising practices for the coordination of chronic care exist, but how to select and share these practices to support quality improvement within a healthcare system is uncertain. This study describes an approach for selecting high-quality tools for an online care coordination toolkit to be used in Veterans Health Administration (VA) primary care practices. We evaluated tools in three steps: (1) an initial screening to identify tools relevant to care coordination in VA primary care, (2) a two-clinician expert review process assessing tool characteristics (e.g. frequency of problem addressed, linkage to patients' experience of care, effect on practice workflow, and sustainability with existing resources) and assigning each tool a summary rating, and (3) semi-structured interviews with VA patients and frontline clinicians and staff. Of 300 potentially relevant tools identified by searching online resources, 65, 38, and 18 remained after steps one, two and three, respectively. The 18 tools cover seven topics: managing referrals to specialty care, medication management, patient after-visit summary, patient activation materials, agenda setting, patient pre-visit packet, and provider contact information for patients. The final toolkit provides access to the 18 tools, as well as detailed information about tools' expected benefits and resources required for tool implementation. Future care coordination efforts can benefit from systematically reviewing available tools to identify those that are high quality and relevant.
A National Solar Digital Observatory
NASA Astrophysics Data System (ADS)
Hill, F.
2000-05-01
The continuing development of the Internet as a research tool, combined with an improving funding climate, has sparked new interest in the development of Internet-linked astronomical data bases and analysis tools. Here I outline a concept for a National Solar Digital Observatory (NSDO), a set of data archives and analysis tools distributed in physical location at sites which already host such systems. A central web site would be implemented from which a user could search all of the component archives, select and download data, and perform analyses. Example components include NSO's Digital Library containing its synoptic and GONG data, and the forthcoming SOLIS archive. Several other archives, in various stages of development, also exist. Potential analysis tools include content-based searches, visualized programming tools, and graphics routines. The existence of an NSDO would greatly facilitate solar physics research, as a user would no longer need to have detailed knowledge of all solar archive sites. It would also improve public outreach efforts. The National Solar Observatory is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation.
Desiderata for a Computer-Assisted Audit Tool for Clinical Data Source Verification Audits
Duda, Stephany N.; Wehbe, Firas H.; Gadd, Cynthia S.
2013-01-01
Clinical data auditing often requires validating the contents of clinical research databases against source documents available in health care settings. Currently available data audit software, however, does not provide features necessary to compare the contents of such databases to source data in paper medical records. This work enumerates the primary weaknesses of using paper forms for clinical data audits and identifies the shortcomings of existing data audit software, as informed by the experiences of an audit team evaluating data quality for an international research consortium. The authors propose a set of attributes to guide the development of a computer-assisted clinical data audit tool to simplify and standardize the audit process. PMID:20841814
Anatomical information in radiation treatment planning.
Kalet, I J; Wu, J; Lease, M; Austin-Seymour, M M; Brinkley, J F; Rosse, C
1999-01-01
We report on experience and insights gained from prototyping, for clinical radiation oncologists, a new access tool for the University of Washington Digital Anatomist information resources. This access tool is designed to integrate with a radiation therapy planning (RTP) system in use in a clinical setting. We hypothesize that the needs of practitioners in a clinical setting are different from the needs of students, the original targeted users of the Digital Anatomist system, but that a common knowledge resource can serve both. Our prototype was designed to help define those differences and study the feasibility of a full anatomic reference system that will support both clinical radiation therapy and all the existing educational applications.
Mentoring as a Developmental Tool for Higher Education
ERIC Educational Resources Information Center
Knippelmeyer, Sheri A.; Torraco, Richard J.
2007-01-01
Higher education, a setting devoted to the enhancement of learning, inquiry, and development, continues to lack effective development for faculty. Mentoring relationships seek to provide enhancement, yet few mentoring programs exist. This literature review examines forms of mentoring, its benefits, barriers to implementation, means for successful…
Intersection of Three Planes Revisited--An Algebraic Approach
ERIC Educational Resources Information Center
Trenkler, Götz; Trenkler, Dietrich
2017-01-01
Given three planes in space, a complete characterization of their intersection is provided. Special attention is paid to the case when the intersection set does not consist of one point only. Besides the vector cross product, the generalized inverse of a matrix is used extensively as a tool.
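As a numerical companion to the algebraic characterization, the sketch below classifies the intersection of three planes a_i . x = b_i by comparing matrix ranks and computes a point with the Moore-Penrose pseudoinverse; the coefficients are a toy example, not taken from the article.

    # Intersection of three planes a_i . x = b_i via ranks and the pseudoinverse.
    import numpy as np

    A = np.array([[1.0, 0.0, 0.0],      # plane normals as rows (toy example)
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0]])
    b = np.array([1.0, 2.0, 6.0])

    x = np.linalg.pinv(A) @ b           # minimum-norm least-squares solution
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

    if rank_Ab > rank_A:
        print("no common point (inconsistent system)")
    elif rank_A == 3:
        print("unique intersection point:", x)            # here [1. 2. 3.]
    else:
        print("intersection is a line or a plane through:", x)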
MAGMA: Generalized Gene-Set Analysis of GWAS Data
de Leeuw, Christiaan A.; Mooij, Joris M.; Heskes, Tom; Posthuma, Danielle
2015-01-01
By aggregating data for complex traits in a biologically meaningful way, gene and gene-set analysis constitute a valuable addition to single-marker analysis. However, although various methods for gene and gene-set analysis currently exist, they generally suffer from a number of issues. Statistical power for most methods is strongly affected by linkage disequilibrium between markers, multi-marker associations are often hard to detect, and the reliance on permutation to compute p-values tends to make the analysis computationally very expensive. To address these issues we have developed MAGMA, a novel tool for gene and gene-set analysis. The gene analysis is based on a multiple regression model, to provide better statistical performance. The gene-set analysis is built as a separate layer around the gene analysis for additional flexibility. This gene-set analysis also uses a regression structure to allow generalization to analysis of continuous properties of genes and simultaneous analysis of multiple gene sets and other gene properties. Simulations and an analysis of Crohn’s Disease data are used to evaluate the performance of MAGMA and to compare it to a number of other gene and gene-set analysis tools. The results show that MAGMA has significantly more power than other tools for both the gene and the gene-set analysis, identifying more genes and gene sets associated with Crohn’s Disease while maintaining a correct type 1 error rate. Moreover, the MAGMA analysis of the Crohn’s Disease data was found to be considerably faster as well. PMID:25885710
Biomimetics: process, tools and practice.
Fayemi, P E; Wanieck, K; Zollfrank, C; Maranzana, N; Aoussat, A
2017-01-23
Biomimetics applies principles and strategies abstracted from biological systems to engineering and technological design. With a huge potential for innovation, biomimetics could evolve into a key process in businesses. Yet challenges remain within the process of biomimetics, especially from the perspective of potential users. We work to clarify the understanding of the process of biomimetics. Therefore, we briefly summarize the terminology of biomimetics and bioinspiration. The implementation of biomimetics requires a stated process. Therefore, we present a model of the problem-driven process of biomimetics that can be used for problem-solving activity. The process of biomimetics can be facilitated by existing tools and creative methods. We mapped a set of tools to the biomimetic process model and set up assessment sheets to evaluate the theoretical and practical value of these tools. We analyzed the tools in interdisciplinary research workshops and present the characteristics of the tools. We also present a first attempt at a utility tree which, once finalized, could be used to guide users through the process by choosing the tools appropriate to their own expertise. The aim of this paper is to foster dialogue and facilitate closer collaboration within the field of biomimetics.
bioalcidae, samjs and vcffilterjs: object-oriented formatters and filters for bioinformatics files.
Lindenbaum, Pierre; Redon, Richard
2018-04-01
Reformatting and filtering bioinformatics files are common tasks for bioinformaticians. Standard Linux tools and specific programs are usually used to perform such tasks, but there is still a gap between using these tools and the programming interface of some existing libraries. In this study, we developed a set of tools, namely bioalcidae, samjs and vcffilterjs, that reformat or filter files using a JavaScript engine or a pure Java expression, taking advantage of the Java API for high-throughput sequencing data (htsjdk). https://github.com/lindenb/jvarkit. pierre.lindenbaum@univ-nantes.fr.
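The jvarkit tools above are Java/JavaScript based; for orientation, the same kind of one-off filtering task can be sketched in Python with pysam (file names and the quality threshold below are placeholders, and this is an analogue, not the jvarkit implementation):

```python
# Minimal analogue of a samjs-style BAM filter, written with pysam.
import pysam

with pysam.AlignmentFile("input.bam", "rb") as inp, \
     pysam.AlignmentFile("filtered.bam", "wb", template=inp) as out:
    for read in inp:
        # keep mapped reads with mapping quality >= 30 (arbitrary example criterion)
        if not read.is_unmapped and read.mapping_quality >= 30:
            out.write(read)
```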
kpLogo: positional k-mer analysis reveals hidden specificity in biological sequences
2017-01-01
Motifs of only 1–4 letters can play important roles when present at key locations within macromolecules. Because existing motif-discovery tools typically miss these position-specific short motifs, we developed kpLogo, a probability-based logo tool for integrated detection and visualization of position-specific ultra-short motifs from a set of aligned sequences. kpLogo also overcomes the limitations of conventional motif-visualization tools in handling positional interdependencies and utilizing ranked or weighted sequences increasingly available from high-throughput assays. kpLogo can be found at http://kplogo.wi.mit.edu/. PMID:28460012
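The counting step behind positional k-mer analysis can be illustrated with a toy sketch (plain Python plus scipy.stats.binomtest, available in SciPy 1.7+; kpLogo's actual statistics, background models and visualization are far richer than this):

```python
# Toy positional k-mer counting with a crude enrichment test per position.
from collections import Counter
from scipy.stats import binomtest

seqs = ["ACGTGA", "ACGTTA", "TCGTGA", "ACGAGA"]   # toy aligned sequences
k = 2
n = len(seqs)

for pos in range(len(seqs[0]) - k + 1):
    counts = Counter(s[pos:pos + k] for s in seqs)
    kmer, c = counts.most_common(1)[0]
    # enrichment against a uniform background of 4**k possible k-mers
    p = binomtest(c, n, 1 / 4 ** k, alternative="greater").pvalue
    print(f"position {pos}: most frequent {k}-mer = {kmer} ({c}/{n}), p = {p:.3g}")
```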
NASA Astrophysics Data System (ADS)
Moore, B., III
2014-12-01
Climate Science Centers: An "Existence Theorem" for a Federal-University Partnership to Develop Actionable and Needs-Driven Science Agendas. Berrien Moore III (University of Oklahoma). The South Central Climate Science Center (CSC) is one of eight regional centers established by the Department of the Interior (DoI) under Secretarial Order 3289 to address the impacts of climate change on America's water, land, and other natural and cultural resources. Under DoI leadership and funding, these CSCs will: provide scientific information, tools, and techniques to study the impacts of climate change; synthesize and integrate climate change impact data; and develop tools that DoI managers and partners can use when managing the DoI's land, water, fish and wildlife, and cultural heritage resources (emphasis added). The network of Climate Science Centers will provide decision makers with the science, tools, and information they need to address the impacts of climate variability and change on their areas of responsibility. Note, from Webster, that a tool is a device for doing work; it makes outcomes more realizable and more cost effective, and, in a word, better. Prior to the existence of CSCs, the university and federal scientific world certainly contained a large "set" of scientists with considerable strength in the physical, biological, natural, and social sciences to address the complexities and interdisciplinary nature of the challenges in the areas of climate variability, change, impacts, and adaptation. However, this set of scientists was hardly an integrated community, let alone a focused team, but rather a collection of distinguished researchers, educators, and practitioners working with disparate, though at times linked, objectives, and they rarely aligned themselves formally to an overarching strategic pathway. In addition, data, models, research results, tools, and products were generally somewhat "disconnected" from the broad range of stakeholders. I should note also that NOAA's Regional Integrated Sciences and Assessments (RISA) program is an earlier "Existence Theorem" for a Federal-University Partnership to Develop Actionable and Needs-Driven Science Agendas. This contribution will discuss the important cultural shift that has flowed from Secretarial Order 3289.
Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J
2012-11-09
A new alternative data processing tool set, metAlignID, is developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software is developed to run multi-threaded (one thread per processor core) on a standard PC (personal computer) under different operating systems and is as such capable of processing multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then preprocessed using previously developed algorithms originating from metAlign software. Next, the resulting reduced data files are searched against a user-composed library (derived from user or commercial NIST-compatible libraries) (NIST=National Institute of Standards and Technology) and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batch wise. The overall time needed for conversion together with processing and searching of 30 raw data sets for 560 compounds is routinely within an hour. The screening performance is evaluated for detection of pesticides and contaminants in raw data obtained after analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data-analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than the standard processing and also adds a quantitative estimate. The software tool set in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry shows great potential as a highly-automated and fast multi-residue instrumental screening method. Copyright © 2012 Elsevier B.V. All rights reserved.
Timmings, Caitlyn; Khan, Sobia; Moore, Julia E; Marquez, Christine; Pyka, Kasha; Straus, Sharon E
2016-02-24
To address challenges related to selecting a valid, reliable, and appropriate readiness assessment measure in practice, we developed an online decision support tool to aid frontline implementers in healthcare settings in this process. The focus of this paper is to describe a multi-step, end-user driven approach to developing this tool for use during the planning stages of implementation. A multi-phase, end-user driven approach was used to develop and test the usability of a readiness decision support tool. First, readiness assessment measures that are valid, reliable, and appropriate for healthcare settings were identified from a systematic review. Second, a mapping exercise was performed to categorize individual items of included measures according to key readiness constructs from an existing framework. Third, a modified Delphi process was used to collect stakeholder ratings of the included measures on domains of feasibility, relevance, and likelihood to recommend. Fourth, two versions of a decision support tool prototype were developed and evaluated for usability. Nine valid and reliable readiness assessment measures were included in the decision support tool. The mapping exercise revealed that of the nine measures, most (78%) focused on assessing readiness for change at the organizational rather than the individual level, and that four measures (44%) represented all constructs of organizational readiness. During the modified Delphi process, stakeholders rated most measures as feasible and relevant for use in practice, and reported that they would be likely to recommend use of most measures. Using data from the mapping exercise and stakeholder panel, an algorithm was developed to link users to a measure based on characteristics of their organizational setting and their readiness for change assessment priorities. Usability testing yielded recommendations that were used to refine the Ready, Set, Change! decision support tool. The Ready, Set, Change! decision support tool is an implementation support designed to facilitate the routine incorporation of a readiness assessment as an early step in implementation. Use of this tool in practice may offer time- and resource-saving implications for implementation.
Comparative analysis and visualization of multiple collinear genomes
2012-01-01
Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897
Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.
2011-01-01
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations. PMID:21572957
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWare™) based on intelligent agents within a multiple-client, multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline nodes facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
ISAAC - InterSpecies Analysing Application using Containers.
Baier, Herbert; Schultz, Jörg
2014-01-15
Information about genes, transcripts and proteins is spread over a wide variety of databases. Different tools have been developed using these databases to identify biological signals in gene lists from large-scale analysis. Mostly, they search for enrichments of specific features. But these tools do not allow an explorative walk through different views or a modification of the gene lists as new questions arise. To fill this niche, we have developed ISAAC, the InterSpecies Analysing Application using Containers. The central idea of this web-based tool is to enable the analysis of sets of genes, transcripts and proteins under different biological viewpoints and to interactively modify these sets at any point of the analysis. Detailed history and snapshot information allows tracing each action. Furthermore, one can easily switch back to previous states and perform new analyses. Currently, sets can be viewed in the context of genomes, protein functions, protein interactions, pathways, regulation, diseases and drugs. Additionally, users can switch between species with an automatic, orthology-based translation of existing gene sets. As today's research is usually performed in larger teams and consortia, ISAAC provides group-based functionalities. Here, sets as well as results of analyses can be exchanged between members of groups. ISAAC fills the gap between primary databases and tools for the analysis of large gene lists. With its highly modular, JavaEE-based design, the implementation of new modules is straightforward. Furthermore, ISAAC comes with an extensive web-based administration interface including tools for the integration of third-party data. Thus, a local installation is easily feasible. In summary, ISAAC is tailor-made for highly explorative, interactive analyses of gene, transcript and protein sets in a collaborative environment.
Boomerang: A method for recursive reclassification.
Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat
2016-09-01
While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogenous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.
Simulation of a Start-Up Manufacturing Facility for Nanopore Arrays
ERIC Educational Resources Information Center
Field, Dennis W.
2009-01-01
Simulation is a powerful tool in developing and troubleshooting manufacturing processes, particularly when considering process flows for manufacturing systems that do not yet exist. Simulation can bridge the gap in terms of setting up full-scale manufacturing for nanotechnology products if limited production experience is an issue. An effective…
Women and Alcohol Problems: Tools for Prevention.
ERIC Educational Resources Information Center
National Inst. on Alcohol Abuse and Alcoholism (DHHS), Rockville, MD.
This report presents a practical guide to the prevention of women's alcohol problems. It is intended for use by individuals interested in incorporating prevention measures into the workplace, schools, treatment facilities, and other settings, and for women interested in reducing the risks of alcohol problems or preventing existing problems from…
Expanding Academic Vocabulary with an Interactive On-Line Database
ERIC Educational Resources Information Center
Horst, Marlise; Cobb, Tom; Nicolae, Ioana
2005-01-01
University students used a set of existing and purpose-built on-line tools for vocabulary learning in an experimental ESL course. The resources included concordance, dictionary, cloze-builder, hypertext, and a database with interactive self-quizzing feature (all freely available at www.lextutor.ca). The vocabulary targeted for learning consisted…
Secondary Data Analysis: An Important Tool for Addressing Developmental Questions
ERIC Educational Resources Information Center
Greenhoot, Andrea Follmer; Dowsett, Chantelle J.
2012-01-01
Existing data sets can be an efficient, powerful, and readily available resource for addressing questions about developmental science. Many of the available databases contain hundreds of variables of interest to developmental psychologists, track participants longitudinally, and have representative samples. In this article, the authors discuss the…
Getting Started in Multimedia Training: Cutting or Bleeding Edge?
ERIC Educational Resources Information Center
Anderson, Vicki; Sleezer, Catherine M.
1995-01-01
Defines multimedia, explores uses of multimedia training, and discusses the effects and challenges of adding multimedia such as graphics, photographs, full motion video, sound effects, or CD-ROMs to existing training methods. Offers planning tips, and suggests software and hardware tools to help set up multimedia training programs. (JMV)
Teaching NMR spectra analysis with nmr.cheminfo.org.
Patiny, Luc; Bolaños, Alejandro; Castillo, Andrés M; Bernal, Andrés; Wist, Julien
2018-06-01
Teaching spectra analysis and structure elucidation requires students to get trained on real problems. This involves solving exercises of increasing complexity and when necessary using computational tools. Although desktop software packages exist for this purpose, nmr.cheminfo.org platform offers students an online alternative. It provides a set of exercises and tools to help solving them. Only a small number of exercises are currently available, but contributors are invited to submit new ones and suggest new types of problems. Copyright © 2018 John Wiley & Sons, Ltd.
Integrated workflows for spiking neuronal network simulations
Antolík, Ján; Davison, Andrew P.
2013-01-01
The increasing availability of computational resources is enabling more detailed, realistic modeling in computational neuroscience, resulting in a shift toward more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeler's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modelers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualization into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo, and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organized configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualization stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modeling studies by relieving the user from manual handling of the flow of metadata between the individual workflow stages. PMID:24368902
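Mozaik builds on PyNN for model specification; the following is a rough sketch of the kind of PyNN network description that such a workflow wraps in its declarative configuration (it assumes a PyNN backend such as NEST is installed, and all parameter values are arbitrary):

```python
# Rough PyNN sketch of a tiny two-population network; not a Mozaik model file.
import pyNN.nest as sim   # any installed PyNN backend (nest, neuron, brian2) works

sim.setup(timestep=0.1)
exc = sim.Population(80, sim.IF_cond_exp(), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(), label="inh")
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=20.0))

sim.Projection(noise, exc, sim.OneToOneConnector(), sim.StaticSynapse(weight=0.01))
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005))
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.05), receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)
data = exc.get_data()   # Neo Block carrying spike trains plus recording metadata
sim.end()
```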
Stout, Anna; Wood, Siri; Namagembe, Allen; Kaboré, Alain; Siddo, Daouda; Ndione, Ida
2018-06-01
In collaboration with ministries of health, PATH and key partners launched the first pilot introductions of subcutaneous depot medroxyprogesterone acetate (DMPA-SC, brand name Sayana® Press) in Burkina Faso, Niger, Senegal, and Uganda from July 2014 through June 2016. While each country implemented a unique introduction strategy, all agreed to track a set of uniform indicators to chart the effect of introducing this new method across settings. Existing national health information systems (HIS) were unable to track new methods or delivery channels introduced for a pilot, and thus were not a feasible source for project data. We successfully monitored the four-country pilot introductions by implementing a four-phase approach: 1) developing and defining global indicators, 2) integrating indicators into existing country data collection tools, 3) facilitating consistent reporting and data management, and 4) analyzing and interpreting data and sharing results. Project partners leveraged existing family planning registers to the extent possible, and introduced new or modified data collection and reporting tools to generate project-specific data where necessary. We routinely shared monitoring results with global and national stakeholders, informing decisions about future investments in the product and scale-up of DMPA-SC nationwide. Our process and lessons learned may provide insights for countries planning to introduce DMPA-SC or other new contraceptive methods in settings where stakeholder expectations for measurable results for decision-making are high. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
A Tool for Model-Based Generation of Scenario-driven Electric Power Load Profiles
NASA Technical Reports Server (NTRS)
Rozek, Matthew L.; Donahue, Kenneth M.; Ingham, Michel D.; Kaderka, Justin D.
2015-01-01
Power consumption during all phases of spacecraft flight is of great interest to the aerospace community. As a result, significant analysis effort is exerted to understand the rates of electrical energy generation and consumption under many operational scenarios of the system. Previously, no standard tool existed for creating and maintaining a power equipment list (PEL) of spacecraft components that consume power, and no standard tool existed for generating power load profiles based on this PEL information during mission design phases. This paper presents the Scenario Power Load Analysis Tool (SPLAT) as a model-based systems engineering tool aiming to solve those problems. SPLAT is a plugin for MagicDraw (No Magic, Inc.) that aids in creating and maintaining a PEL, and also generates a power and temporal variable constraint set, in Maple language syntax, based on specified operational scenarios. The constraint set can be solved in Maple to show electric load profiles (i.e. power consumption from loads over time). SPLAT creates these load profiles from three modeled inputs: 1) a list of system components and their respective power modes, 2) a decomposition hierarchy of the system into these components, and 3) the specification of at least one scenario, which consists of temporal constraints on component power modes. In order to demonstrate how this information is represented in a system model, a notional example of a spacecraft planetary flyby is introduced. This example is also used to explain the overall functionality of SPLAT, and how this is used to generate electric power load profiles. Lastly, a cursory review of the usage of SPLAT on the Cold Atom Laboratory project is presented to show how the tool was used in an actual space hardware design application.
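The load-profile idea can be illustrated with a small, purely hypothetical sketch; every component name, power mode, wattage and scenario time below is invented, and SPLAT itself is a MagicDraw plugin that emits Maple-syntax constraints rather than Python:

```python
# Hypothetical sketch: build an electric load profile from a power equipment list
# and a scenario of timed component power-mode changes.
from itertools import groupby

power_modes_w = {
    "transponder": {"off": 0.0, "standby": 4.0, "transmit": 35.0},
    "camera":      {"off": 0.0, "idle": 2.0, "imaging": 12.0},
}

# scenario events: (time in minutes, component, new power mode)
scenario = [
    (0, "transponder", "standby"), (0, "camera", "off"),
    (10, "camera", "imaging"), (25, "camera", "idle"),
    (30, "transponder", "transmit"), (45, "transponder", "standby"),
]

state, profile = {}, []
for t, events in groupby(sorted(scenario), key=lambda e: e[0]):
    for _, component, mode in events:
        state[component] = mode
    total = sum(power_modes_w[c][m] for c, m in state.items())
    profile.append((t, total))          # total spacecraft load after each event

for t, watts in profile:
    print(f"t = {t:3d} min: {watts:5.1f} W")
```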
Integrative Functional Genomics for Systems Genetics in GeneWeaver.org.
Bubier, Jason A; Langston, Michael A; Baker, Erich J; Chesler, Elissa J
2017-01-01
The abundance of existing functional genomics studies permits an integrative approach to interpreting and resolving the results of diverse systems genetics studies. However, a major challenge lies in assembling and harmonizing heterogeneous data sets across species for facile comparison to the positional candidate genes and coexpression networks that come from systems genetics studies. GeneWeaver is an online database and suite of tools at www.geneweaver.org that allows for fast aggregation and analysis of gene set-centric data. GeneWeaver contains curated experimental data together with resource-level data such as GO annotations, MP annotations, and KEGG pathways, along with persistent stores of user-entered data sets. These can be entered directly into GeneWeaver or transferred from widely used resources such as GeneNetwork.org. Data are analyzed using statistical tools and advanced graph algorithms to discover new relations, prioritize candidate genes, and generate functional hypotheses. Here we use GeneWeaver to find genes common to multiple gene sets, prioritize candidate genes from a quantitative trait locus, and characterize a set of differentially expressed genes. Coupling a large multispecies repository of curated and empirical functional genomics data to fast computational tools allows for the rapid integrative analysis of heterogeneous data for interpreting and extrapolating systems genetics results.
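One of the operations mentioned above, finding genes common to multiple gene sets, reduces to a set intersection; a trivial sketch with invented gene symbols (GeneWeaver performs this and much more at repository scale):

```python
# Toy illustration of a "genes common to every set" query using plain Python sets.
gene_sets = {
    "QTL_candidates": {"Drd2", "Comt", "Bdnf", "Chrna5"},
    "diff_expressed": {"Bdnf", "Chrna5", "Fos", "Arc"},
    "GO_behavior":    {"Drd2", "Bdnf", "Chrna5", "Oprm1"},
}

common = set.intersection(*gene_sets.values())
print("genes present in every set:", sorted(common))   # ['Bdnf', 'Chrna5']
```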
BiNChE: a web tool and library for chemical enrichment analysis based on the ChEBI ontology.
Moreno, Pablo; Beisken, Stephan; Harsha, Bhavana; Muthukrishnan, Venkatesh; Tudose, Ilinca; Dekker, Adriano; Dornfeldt, Stefanie; Taruttis, Franziska; Grosse, Ivo; Hastings, Janna; Neumann, Steffen; Steinbeck, Christoph
2015-02-21
Ontology-based enrichment analysis aids in the interpretation and understanding of large-scale biological data. Ontologies are hierarchies of biologically relevant groupings. Using ontology annotations, which link ontology classes to biological entities, enrichment analysis methods assess whether there is a significant over- or under-representation of entities for ontology classes. While many tools exist that run enrichment analysis for protein sets annotated with the Gene Ontology, only a few can be used for small-molecule enrichment analysis. We describe BiNChE, an enrichment analysis tool for small molecules based on the ChEBI Ontology. BiNChE displays an interactive graph that can be exported as a high-resolution image or in network formats. The tool provides plain, weighted and fragment analysis based on either the ChEBI Role Ontology or the ChEBI Structural Ontology. BiNChE aids in the exploration of large sets of small molecules produced within metabolomics or other systems biology research contexts. The open-source tool provides easy and highly interactive web access to enrichment analysis with the ChEBI ontology and is additionally available as a standalone library.
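The statistical core of such over-representation analysis is typically a hypergeometric (one-sided Fisher) test; a minimal sketch with invented counts follows (this is not BiNChE's own code, which additionally supports weighted and fragment analyses):

```python
# Minimal over-representation test for one ontology class.
from scipy.stats import hypergeom

M = 5000   # annotated entities in the background set
n = 120    # background entities annotated with the ontology class of interest
N = 40     # size of the submitted set of small molecules
k = 9      # members of the submitted set annotated with that class

# P(X >= k) under sampling without replacement
p_over = hypergeom.sf(k - 1, M, n, N)
print(f"over-representation p-value: {p_over:.3g}")
```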
AMIDE: a free software tool for multimodality medical image analysis.
Loening, Andreas Markus; Gambhir, Sanjiv Sam
2003-07-01
Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
Requirements for clinical information modelling tools.
Moreno-Conde, Alberto; Jódar-Sánchez, Francisco; Kalra, Dipak
2015-07-01
This study proposes consensus requirements for clinical information modelling tools that can support modelling tasks in medium/large scale institutions. Rather than identify which functionalities are currently available in existing tools, the study has focused on functionalities that should be covered in order to provide guidance about how to evolve the existing tools. After identifying a set of 56 requirements for clinical information modelling tools based on a literature review and interviews with experts, a classical Delphi study methodology was applied to conduct a two round survey in order to classify them as essential or recommended. Essential requirements are those that must be met by any tool that claims to be suitable for clinical information modelling, and if we one day have a certified tools list, any tool that does not meet essential criteria would be excluded. Recommended requirements are those more advanced requirements that may be met by tools offering a superior product or only needed in certain modelling situations. According to the answers provided by 57 experts from 14 different countries, we found a high level of agreement to enable the study to identify 20 essential and 21 recommended requirements for these tools. It is expected that this list of identified requirements will guide developers on the inclusion of new basic and advanced functionalities that have strong support by end users. This list could also guide regulators in order to identify requirements that could be demanded of tools adopted within their institutions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Prioritizing Health: A Systematic Approach to Scoping Determinants in Health Impact Assessment.
McCallum, Lindsay C; Ollson, Christopher A; Stefanovic, Ingrid L
2016-01-01
The determinants of health are those factors that have the potential to affect health, either positively or negatively, and include a range of personal, social, economic, and environmental factors. In the practice of health impact assessment (HIA), the stage at which the determinants of health are considered for inclusion is during the scoping step. The scoping step is intended to identify how the HIA will be carried out and to set the boundaries (e.g., temporal and geographical) for the assessment. There are several factors that can help to inform the scoping process, many of which are considered in existing HIA tools and guidance; however, a systematic method of prioritizing determinants was found to be lacking. In order to analyze existing HIA scoping tools that are available, a systematic literature review was conducted, including both primary and gray literature. A total of 10 HIA scoping tools met the inclusion/exclusion criteria and were carried forward for comparative analysis. The analysis focused on minimum elements and practice standards of HIA scoping that have been established in the field. The analysis determined that existing approaches lack a clear, systematic method of prioritization of health determinants for inclusion in HIA. This finding led to the development of a Systematic HIA Scoping tool that addressed this gap. The decision matrix tool uses factors, such as impact, public concern, and data availability, to prioritize health determinants. Additionally, the tool allows for identification of data gaps and provides a transparent method for budget allocation and assessment planning. In order to increase efficiency and improve utility, the tool was programed into Microsoft Excel. Future work in the area of HIA methodology development is vital to the ongoing success of the practice and utilization of HIA as a reliable decision-making tool.
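A weighted decision matrix of the kind described can be sketched in a few lines; every determinant, factor, weight and score below is invented for illustration and does not reproduce the published Excel tool:

```python
# Hypothetical scoping decision matrix: score each health determinant on weighted
# factors (impact, public concern, data availability) and rank for inclusion.
weights = {"impact": 0.5, "public_concern": 0.3, "data_availability": 0.2}

scores = {   # factor scores on a 1-5 scale (invented)
    "air quality": {"impact": 5, "public_concern": 4, "data_availability": 3},
    "noise":       {"impact": 3, "public_concern": 4, "data_availability": 2},
    "employment":  {"impact": 4, "public_concern": 3, "data_availability": 4},
}

ranked = sorted(
    ((sum(weights[f] * s for f, s in fs.items()), d) for d, fs in scores.items()),
    reverse=True,
)
for total, determinant in ranked:
    print(f"{determinant:12s} priority score = {total:.1f}")
```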
PLINK: A Tool Set for Whole-Genome Association and Population-Based Linkage Analyses
Purcell, Shaun; Neale, Benjamin; Todd-Brown, Kathe; Thomas, Lori; Ferreira, Manuel A. R.; Bender, David; Maller, Julian; Sklar, Pamela; de Bakker, Paul I. W.; Daly, Mark J.; Sham, Pak C.
2007-01-01
Whole-genome association studies (WGAS) bring new computational, as well as analytic, challenges to researchers. Many existing genetic-analysis tools are not designed to handle such large data sets in a convenient manner and do not necessarily exploit the new opportunities that whole-genome data bring. To address these issues, we developed PLINK, an open-source C/C++ WGAS tool set. With PLINK, large data sets comprising hundreds of thousands of markers genotyped for thousands of individuals can be rapidly manipulated and analyzed in their entirety. As well as providing tools to make the basic analytic steps computationally efficient, PLINK also supports some novel approaches to whole-genome data that take advantage of whole-genome coverage. We introduce PLINK and describe the five main domains of function: data management, summary statistics, population stratification, association analysis, and identity-by-descent estimation. In particular, we focus on the estimation and use of identity-by-state and identity-by-descent information in the context of population-based whole-genome studies. This information can be used to detect and correct for population stratification and to identify extended chromosomal segments that are shared identical by descent between very distantly related individuals. Analysis of the patterns of segmental sharing has the potential to map disease loci that contain multiple rare variants in a population-based linkage analysis. PMID:17701901
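The identity-by-state sharing that PLINK estimates genome-wide can be illustrated on toy genotype vectors (this shows only the per-pair IBS proportion, not PLINK's full identity-by-descent estimation or its stratification correction):

```python
# Toy identity-by-state (IBS) sharing between two individuals, genotypes coded as
# 0/1/2 copies of the minor allele at each marker.
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.integers(0, 3, size=10_000)     # toy genotypes, individual 1
g2 = rng.integers(0, 3, size=10_000)     # toy genotypes, individual 2

ibs_per_marker = 2 - np.abs(g1 - g2)     # 0, 1, or 2 shared alleles per marker
mean_ibs = ibs_per_marker.mean() / 2     # proportion of alleles shared overall
print(f"mean IBS sharing: {mean_ibs:.3f}")
```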
Bespalova, Nadejda; Morgan, Juliet; Coverdale, John
2016-02-01
Because training residents and faculty to identify human trafficking victims is a major public health priority, the authors review existing assessment tools. PubMed and Google were searched using combinations of search terms including human, trafficking, sex, labor, screening, identification, and tool. Nine screening tools that met the inclusion criteria were found. They varied greatly in length, format, target demographic, supporting resources, and other parameters. Only two tools were designed specifically for healthcare providers. Only one tool was formally assessed to be valid and reliable in a pilot project in trafficking victim service organizations, although it has not been validated in the healthcare setting. This toolbox should facilitate the education of resident physicians and faculty in screening for trafficking victims, assist educators in assessing screening skills, and promote future research on the identification of trafficking victims.
NASA Astrophysics Data System (ADS)
Riah, Zoheir; Sommet, Raphael; Nallatamby, Jean C.; Prigent, Michel; Obregon, Juan
2004-05-01
We present in this paper a set of coherent tools for noise characterization and physics-based analysis of noise in semiconductor devices. This noise toolbox relies on a low frequency noise measurement setup with special high current capabilities thanks to an accurate and original calibration. It relies also on a simulation tool based on the drift diffusion equations and the linear perturbation theory, associated with the Green's function technique. This physics-based noise simulator has been implemented successfully in the Scilab environment and is specifically dedicated to HBTs. Some results are given and compared to those existing in the literature.
A Framework for Semantic Group Formation in Education
ERIC Educational Resources Information Center
Ounnas, Asma; Davis, Hugh C.; Millard, David E.
2009-01-01
Collaboration has long been considered an effective approach to learning. However, forming optimal groups can be a time-consuming and complex task. Different approaches have been developed to assist teachers in allocating students to groups based on a set of constraints. However, existing tools often fail to assign some students to groups, creating a…
Elements, Principles, and Critical Inquiry for Identity-Centered Design of Online Environments
ERIC Educational Resources Information Center
Dudek, Jaclyn; Heiser, Rebecca
2017-01-01
Within higher education, a need exists for learning designs that facilitate education and support students in sharing, examining, and refining their critical identities as learners and professionals. In the past, technology-mediated identity work has focused on individual tool use or a learning setting. However, we as professional learning…
Best predictors for postfire mortality of ponderosa pine trees in the Intermountain West
Carolyn Hull Sieg; Joel D. McMillin; James F. Fowler; Kurt K. Allen; Jose F. Negron; Linda L. Wadleigh; John A. Anhold; Ken E. Gibson
2006-01-01
Numerous wildfires in recent years have highlighted managers' needs for reliable tools to predict postfire mortality of ponderosa pine (Pinus ponderosa Dougl. ex Laws.) trees. General applicability of existing mortality models is uncertain, as researchers have used different sets of variables. We quantified tree attributes, crown and bole fire...
ERIC Educational Resources Information Center
Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão
2017-01-01
This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…
Interactive visual analysis promotes exploration of long-term ecological data
T.N. Pham; J.A. Jones; R. Metoyer; F.J. Swanson; R.J. Pabst
2013-01-01
Long-term ecological data are crucial in helping ecologists understand ecosystem function and environmental change. Nevertheless, these kinds of data sets are difficult to analyze because they are usually large, multivariate, and spatiotemporal. Although existing analysis tools such as statistical methods and spreadsheet software permit rigorous tests of pre-conceived...
Web scraping technologies in an API world.
Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino
2014-09-01
Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in a position to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not yet offer programmatic interfaces. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
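A minimal scraping pipeline of the kind the article describes can indeed be set up in a few lines of Python with requests and BeautifulSoup; the URL and selectors below are placeholders for a real data page, not sources named in the article:

```python
# Minimal scraping sketch: fetch a page and pull the rows of an HTML table.
import requests
from bs4 import BeautifulSoup

url = "https://example.org/antibiogram-table"   # hypothetical data page
resp = requests.get(url, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all(["td", "th"])]
    if cells:
        rows.append(cells)

for row in rows[:5]:
    print(row)
```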
Fielden, Sarah J; Anema, Aranka; Fergusson, Pamela; Muldoon, Katherine; Grede, Nils; de Pee, Saskia
2014-10-01
As an increasing number of countries implement integrated food and nutrition security (FNS) and HIV programs, global stakeholders need clarity on how best to measure FNS at the individual and household level. This paper reviews prominent FNS measurement tools and describes considerations for interpretation in the context of HIV. A range of FNS measurement tools exists, and many have been adapted for use in HIV-endemic settings. Considerations in selecting appropriate tools include sub-types (food sufficiency, dietary diversity and food safety); scope/level of application; and available resources. Tools need to reflect both the needs of PLHIV and affected households and FNS program objectives. Generalized food sufficiency and dietary diversity tools may provide adequate measures of FNS in PLHIV for programmatic applications. Food consumption measurement tools provide further data for clinical or research applications. Measurement of food safety is an important, but underdeveloped, aspect of assessment, especially for PLHIV.
Computer-Aided Systems Engineering for Flight Research Projects Using a Workgroup Database
NASA Technical Reports Server (NTRS)
Mizukami, Masahi
2004-01-01
An online systems engineering tool for flight research projects has been developed through the use of a workgroup database. Capabilities are implemented for typical flight research systems engineering needs in document library, configuration control, hazard analysis, hardware database, requirements management, action item tracking, project team information, and technical performance metrics. Repetitive tasks are automated to reduce workload and errors. Current data and documents are instantly available online and can be worked on collaboratively. Existing forms and conventional processes are used, rather than inventing or changing processes to fit the tool. An integrated tool set offers advantages by automatically cross-referencing data, minimizing redundant data entry, and reducing the number of programs that must be learned. With a simplified approach, significant improvements are attained over existing capabilities for minimal cost. By using a workgroup-level database platform, personnel most directly involved in the project can develop, modify, and maintain the system, thereby saving time and money. As a pilot project, the system has been used to support an in-house flight experiment. Options are proposed for developing and deploying this type of tool on a more extensive basis.
Malone, Patrick S; Glezer, Laurie S; Kim, Judy; Jiang, Xiong; Riesenhuber, Maximilian
2016-09-28
The neural substrates of semantic representation have been the subject of much controversy. The study of semantic representations is complicated by difficulty in disentangling perceptual and semantic influences on neural activity, as well as in identifying stimulus-driven, "bottom-up" semantic selectivity unconfounded by top-down task-related modulations. To address these challenges, we trained human subjects to associate pseudowords (TPWs) with various animal and tool categories. To decode semantic representations of these TPWs, we used multivariate pattern classification of fMRI data acquired while subjects performed a semantic oddball detection task. Crucially, the classifier was trained and tested on disjoint sets of TPWs, so that the classifier had to use the semantic information from the training set to correctly classify the test set. Animal and tool TPWs were successfully decoded based on fMRI activity in spatially distinct subregions of the left medial anterior temporal lobe (LATL). In addition, tools (but not animals) were successfully decoded from activity in the left inferior parietal lobule. The tool-selective LATL subregion showed greater functional connectivity with left inferior parietal lobule and ventral premotor cortex, indicating that each LATL subregion exhibits distinct patterns of connectivity. Our findings demonstrate category-selective organization of semantic representations in LATL into spatially distinct subregions, continuing the lateral-medial segregation of activation in posterior temporal cortex previously observed in response to images of animals and tools, respectively. Together, our results provide evidence for segregation of processing hierarchies for different classes of objects and the existence of multiple, category-specific semantic networks in the brain. The location and specificity of semantic representations in the brain are still widely debated. We trained human participants to associate specific pseudowords with various animal and tool categories, and used multivariate pattern classification of fMRI data to decode the semantic representations of the trained pseudowords. We found that: (1) animal and tool information was organized in category-selective subregions of medial left anterior temporal lobe (LATL); (2) tools, but not animals, were encoded in left inferior parietal lobe; and (3) LATL subregions exhibited distinct patterns of functional connectivity with category-related regions across cortex. Our findings suggest that semantic knowledge in LATL is organized in category-related subregions, providing evidence for the existence of multiple, category-specific semantic representations in the brain. Copyright © 2016 the authors 0270-6474/16/3610089-08$15.00/0.
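Schematically, the decoding analysis trains a classifier on response patterns for one set of trained pseudowords and tests it on a disjoint set, so that only category-level information can drive generalization; the sketch below uses random placeholder data and scikit-learn, not the authors' pipeline:

```python
# Schematic MVPA decoding with disjoint training and test stimulus sets.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 200
X_train = rng.normal(size=(40, n_voxels))   # patterns for training-set pseudowords
y_train = np.repeat([0, 1], 20)             # 0 = animal, 1 = tool
X_test = rng.normal(size=(20, n_voxels))    # patterns for held-out pseudowords
y_test = np.repeat([0, 1], 10)

clf = LinearSVC().fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))   # ~chance on random data
```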
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
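The adjoint principle that these AD tools exploit can be illustrated with a minimal reverse-mode differentiation sketch in pure Python; real tools transform or overload large Fortran codes, but the chain-rule bookkeeping is the same idea:

```python
# Minimal reverse-mode (adjoint) automatic differentiation with a recorded graph.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents          # pairs of (parent Var, local partial)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # propagate the adjoint back through the recorded computation graph
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
f = x * y + x * x                      # f = x*y + x^2
f.backward()
print(x.grad, y.grad)                  # df/dx = y + 2x = 7.0, df/dy = x = 2.0
```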
miBLAST: scalable evaluation of a batch of nucleotide sequence queries with BLAST
Kim, You Jung; Boyd, Andrew; Athey, Brian D.; Patel, Jignesh M.
2005-01-01
A common task in many modern bioinformatics applications is to match a set of nucleotide query sequences against a large sequence dataset. Existing tools, such as BLAST, are designed to evaluate a single query at a time and can be unacceptably slow when the number of sequences in the query set is large. In this paper, we present a new algorithm, called miBLAST, that evaluates such batch workloads efficiently. At the core, miBLAST employs a q-gram filtering and an index join for efficiently detecting similarity between the query sequences and database sequences. This set-oriented technique, which indexes both the query and the database sets, results in substantial performance improvements over existing methods. Our results show that miBLAST is significantly faster than BLAST in many cases. For example, miBLAST aligned 247 965 oligonucleotide sequences in the Affymetrix probe set against the Human UniGene in 1.26 days, compared with 27.27 days with BLAST (an improvement by a factor of 22). The relative performance of miBLAST increases for larger word sizes; however, it decreases for longer queries. miBLAST employs the familiar BLAST statistical model and output format, guaranteeing the same accuracy as BLAST and facilitating a seamless transition for existing BLAST users. PMID:16061938
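The q-gram filtering and index-join idea can be illustrated with a toy sketch (plain Python; miBLAST itself indexes at database scale and passes the surviving candidates on to BLAST-style alignment and statistics):

```python
# Toy q-gram filter: index database sequences by their q-grams, then shortlist
# candidate matches for each query by counting shared q-grams.
from collections import defaultdict

def qgrams(seq, q=4):
    return {seq[i:i + q] for i in range(len(seq) - q + 1)}

database = {"d1": "ACGTACGTGGCA", "d2": "TTTTCCCCAAAA", "d3": "ACGTAAGTGGCA"}
queries = {"q1": "ACGTACGT", "q2": "CCCCAAAA"}

index = defaultdict(set)                 # q-gram -> ids of database sequences
for name, seq in database.items():
    for g in qgrams(seq):
        index[g].add(name)

for qname, qseq in queries.items():
    hits = defaultdict(int)              # database id -> number of shared q-grams
    for g in qgrams(qseq):
        for name in index[g]:
            hits[name] += 1
    candidates = [n for n, c in hits.items() if c >= 3]   # filter threshold
    print(qname, "candidate matches:", sorted(candidates))
```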
Towards a Framework for Modeling Space Systems Architectures
NASA Technical Reports Server (NTRS)
Shames, Peter; Skipper, Joseph
2006-01-01
Topics covered include: 1) Statement of the problem: a) Space system architecture is complex; b) Existing terrestrial approaches must be adapted for space; c) Need a common architecture methodology and information model; d) Need appropriate set of viewpoints. 2) Requirements on a space systems model. 3) Model Based Engineering and Design (MBED) project: a) Evaluated different methods; b) Adapted and utilized RASDS & RM-ODP; c) Identified useful set of viewpoints; d) Did actual model exchanges among selected subset of tools. 4) Lessons learned & future vision.
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD often used to create and execute computational domains. Increasing complexity when moving from 2D to 3D geometries. Computational time increased as finer grids are used (accuracy). Strong tool, but takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: Rigid command-line interface. Lackluster, unorganized documentation. No central control; multiple versions exist and have diverged.
Corbett, Anne; Achterberg, Wilco; Husebo, Bettina; Lobbezoo, Frank; de Vet, Henrica; Kunz, Miriam; Strand, Liv; Constantinou, Marios; Tudose, Catalina; Kappesser, Judith; de Waal, Margot; Lautenbacher, Stefan
2014-12-10
Pain is common in people with dementia, yet identification is challenging. A number of pain assessment tools exist, utilizing observation of pain-related behaviours, vocalizations and facial expressions. Whilst they have been developed robustly, these often lack sufficient evidence of psychometric properties, such as reliability, face and construct validity, responsiveness and usability, and are not internationally implemented. The EU-COST initiative "Pain in impaired cognition, especially dementia" aims to combine the expertise of clinicians and researchers to address this important issue by building on previous research in the area, identifying existing pain assessment tools for dementia, and developing consensus for items for a new universal meta-tool for use in research and clinical settings. This paper reports on the initial phase of this collaboration task. All existing observational pain behaviour tools were identified and elements categorised using a three-step reduction process. Selection and refinement of items for the draft Pain Assessment in Impaired Cognition (PAIC) meta-tool was achieved through scrutiny of the evidence, consensus of expert opinion, frequency of use and alignment with the American Geriatric Society guidelines. The main aim of this process was to identify key items with potential empirical, rather than theoretical, value to take forward for testing. Twelve eligible assessment tools were identified, and pain items were categorised into behaviour, facial expression and vocalisation according to the AGS guidelines (Domains 1-3). This has been refined to create the PAIC meta-tool for validation and further refinement. A decision was made to create a comprehensive toolkit to accompany the core assessment tool, providing additional resources for the assessment of overlapping symptoms in dementia, including AGS domains four to six, identification of specific types of pain and assessment of duration and location of pain. This multidisciplinary, cross-cultural initiative has created a draft meta-tool for capturing pain behaviour to be used across languages and cultures, based on the most promising items used in existing tools. The draft PAIC meta-tool will now be taken forward for evaluation according to COSMIN guidelines and the EU-COST protocol in order to exclude invalid items, refine included items and optimise the meta-tool.
Antimicrobial Stewardship in the Emergency Department and Guidelines for Development
May, Larissa; Cosgrove, Sara; L’Archeveque, Michelle; Talan, David A.; Payne, Perry; Rothman, Richard E.
2013-01-01
Antimicrobial resistance is a mounting public health concern. Emergency departments (EDs) represent a particularly important setting for addressing inappropriate antimicrobial prescribing practices, given the frequent use of antibiotics in this setting that sits at the interface of the community and the hospital. This article outlines the importance of antimicrobial stewardship in the ED setting and provides practical recommendations drawn from existing evidence for the application of various strategies and tools that could be implemented in the ED including advancement of clinical guidelines, clinical decision support systems, rapid diagnostics, and expansion of ED pharmacist programs. PMID:23122955
Pisa, Pedro T; Landais, Edwige; Margetts, Barrie; Vorster, Hester H; Friedenreich, Christine M; Huybrechts, Inge; Martin-Prevel, Yves; Branca, Francesco; Lee, Warren T K; Leclercq, Catherine; Jerling, Johann; Zotor, Francis; Amuna, Paul; Al Jawaldeh, Ayoub; Aderibigbe, Olaide Ruth; Amoussa, Waliou Hounkpatin; Anderson, Cheryl A M; Aounallah-Skhiri, Hajer; Atek, Madjid; Benhura, Chakare; Chifamba, Jephat; Covic, Namukolo; Dary, Omar; Delisle, Hélène; El Ati, Jalila; El Hamdouchi, Asmaa; El Rhazi, Karima; Faber, Mieke; Kalimbira, Alexander; Korkalo, Liisa; Kruger, Annamarie; Ledo, James; Machiweni, Tatenda; Mahachi, Carol; Mathe, Nonsikelelo; Mokori, Alex; Mouquet-Rivier, Claire; Mutie, Catherine; Nashandi, Hilde Liisa; Norris, Shane A; Onabanjo, Oluseye Olusegun; Rambeloson, Zo; Saha, Foudjo Brice U; Ubaoji, Kingsley Ikechukwu; Zaghloul, Sahar; Slimani, Nadia
2018-01-02
To carry out an inventory on the availability, challenges, and needs of dietary assessment (DA) methods in Africa as a pre-requisite to provide evidence, and set directions (strategies) for implementing common dietary methods and support web-research infrastructure across countries. The inventory was performed within the framework of the "Africa's Study on Physical Activity and Dietary Assessment Methods" (AS-PADAM) project. It involves international institutional and African networks. An inventory questionnaire was developed and disseminated through the networks. Eighteen countries responded to the dietary inventory questionnaire. Various DA tools were reported in Africa; 24-Hour Dietary Recall and Food Frequency Questionnaire were the most commonly used tools. Few tools were validated and tested for reliability. Face-to-face interview was the common method of administration. No computerized software or other new (web) technologies were reported. No tools were standardized across countries. The lack of comparable DA methods across represented countries is a major obstacle to implement comprehensive and joint nutrition-related programmes for surveillance, programme evaluation, research, and prevention. There is a need to develop new or adapt existing DA methods across countries by employing related research infrastructure that has been validated and standardized in other settings, with the view to standardizing methods for wider use.
mESAdb: microRNA Expression and Sequence Analysis Database
Kaya, Koray D.; Karakülah, Gökhan; Yakıcıer, Cengiz M.; Acar, Aybar C.; Konu, Özlen
2011-01-01
microRNA expression and sequence analysis database (http://konulab.fen.bilkent.edu.tr/mirna/) (mESAdb) is a regularly updated database for the multivariate analysis of sequences and expression of microRNAs from multiple taxa. mESAdb is modular and has a user interface implemented in PHP and JavaScript and coupled with statistical analysis and visualization packages written for the R language. The database primarily comprises mature microRNA sequences and their target data, along with selected human, mouse and zebrafish expression data sets. mESAdb analysis modules allow (i) mining of microRNA expression data sets for subsets of microRNAs selected manually or by motif; (ii) pair-wise multivariate analysis of expression data sets within and between taxa; and (iii) association of microRNA subsets with annotation databases, HUGE Navigator, KEGG and GO. The use of existing and customized R packages facilitates future addition of data sets and analysis tools. Furthermore, the ability to upload and analyze user-specified data sets makes mESAdb an interactive and expandable analysis tool for microRNA sequence and expression data. PMID:21177657
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
NASA Technical Reports Server (NTRS)
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating-to-mean element converter that allow long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.
Reyes, E Michael; Sharma, Anjali; Thomas, Kate K; Kuehn, Chuck; Morales, José Rafael
2014-09-17
Little information exists on the technical assistance needs of local indigenous organizations charged with managing HIV care and treatment programs funded by the US President's Emergency Plan for AIDS Relief (PEPFAR). This paper describes the methods used to adapt the Primary Care Assessment Tool (PCAT) framework, which has successfully strengthened HIV primary care services in the US, into one that could strengthen the capacity of local partners to deliver priority health programs in resource-constrained settings by identifying their specific technical assistance needs. Qualitative methods and inductive reasoning approaches were used to conceptualize and adapt the new Clinical Assessment for Systems Strengthening (ClASS) framework. Stakeholder interviews, comparisons of existing assessment tools, and a pilot test helped determine the overall ClASS framework for use in low-resource settings. The framework was further refined one year post-ClASS implementation. Stakeholder interviews, assessment of existing tools, a pilot process and the one-year post-implementation assessment informed the adaptation of the ClASS framework for assessing and strengthening technical and managerial capacities of health programs at three levels: international partner, local indigenous partner, and local partner treatment facility. The PCAT focus on organizational strengths and systems strengthening was retained and implemented in the ClASS framework and approach. A modular format was chosen to allow the use of administrative, fiscal and clinical modules in any combination and to insert new modules as needed by programs. The pilot led to refined pre-visit planning, informed review team composition, increased visit duration, and restructured modules. A web-based toolkit was developed to capture three years of experiential learning; this kit can also be used for independent implementation of the ClASS framework. A systematic adaptation process has produced a qualitative framework that can inform implementation strategies in support of country-led HIV care and treatment programs. The framework, as a well-received iterative process focused on technical assistance, may have broader utility in other global programs.
Lichtner, Valentina; Dowding, Dawn; Esterhuizen, Philip; Closs, S José; Long, Andrew F; Corbett, Anne; Briggs, Michelle
2014-12-17
There is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment. We searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach. We retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small, providing limited evidence for use of any of the tools across settings or populations. There are a considerable number of pain assessment tools available for use with the elderly cognitively impaired population. However, there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.
Lennox, Laura; Doyle, Cathal; Reed, Julie E; Bell, Derek
2017-09-24
Although improvement initiatives show benefits to patient care, they are often not sustained. Models and frameworks exist to address this challenge, but issues with design, clarity and usability have been barriers to use in healthcare settings. This work aimed to collaborate with stakeholders to develop a sustainability tool relevant to people in healthcare settings and practical for use in improvement initiatives. Tool development was conducted in six stages. A scoping literature review, group discussions and a stakeholder engagement event explored literature findings and their resonance with stakeholders in healthcare settings. Interviews, small-scale trialling and piloting explored the design and tested the practicality of the tool in improvement initiatives. National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL). CLAHRC NWL improvement initiative teams and staff. The iterative design process and engagement of stakeholders informed the articulation of the sustainability factors identified from the literature and guided tool design for practical application. Key iterations of factors and tool design are discussed. From the development process, the Long Term Success Tool (LTST) has been designed. The Tool supports those implementing improvements to reflect on 12 sustainability factors, identify risks and increase the chances of achieving sustainability over time. The Tool is designed to provide a platform for improvement teams to share their own views on sustainability, as well as to learn about the different views held within their team, in order to prompt discussion and action. The development of the LTST has reinforced the importance of working with stakeholders to design strategies which respond to their needs and preferences and can practically be implemented in real-world settings. Further research is required to study the use and effectiveness of the tool in practice and assess engagement with the method over time. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
[The Italian instrument evaluating the nursing students clinical learning quality].
Palese, Alvisa; Grassetti, Luca; Mansutti, Irene; Destrebecq, Anne; Terzoni, Stefano; Altini, Pietro; Bevilacqua, Anita; Brugnolli, Anna; Benaglio, Carla; Dal Ponte, Adriana; De Biasio, Laura; Dimonte, Valerio; Gambacorti, Benedetta; Fasci, Adriana; Grosso, Silvia; Mantovan, Franco; Marognolli, Oliva; Montalti, Sandra; Nicotera, Raffaela; Randon, Giulia; Stampfl, Brigitte; Tollini, Morena; Canzan, Federica; Saiani, Luisa; Zannini, Lucia
2017-01-01
The Clinical Learning Quality Evaluation Index for nursing students. Italian nursing programs need tools to evaluate the quality of clinical learning as perceived by nursing students. Several tools already exist; however, their limitations suggest the need to develop a new one. A national project therefore aimed at developing and validating a new instrument capable of measuring the clinical learning quality as experienced by nursing students. A validation study was undertaken from 2015 to 2016. All national nursing programs (n=43) were invited to participate by including all nursing students regularly attending their clinical learning. The tool, developed on the basis of a) the literature, b) validated tools already established among other healthcare professionals, and c) consensus expressed by experts and nursing students, was administered to the eligible students. 9606 nursing students in 27 universities (62.8%) participated. The psychometric properties of the new instrument ranged from good to excellent. According to the findings, the tool consists of 22 items and five factors: a) quality of the tutorial strategies, b) learning opportunities, c) safety and nursing care quality, d) self-directed learning, and e) quality of the learning environment. The tool is already in use. Its systematic adoption may support comparisons among settings and across different programs; moreover, the tool may also support accrediting new settings as well as measuring the effects of strategies aimed at improving the quality of clinical learning.
Jang, Sung-In; Nam, Jung-Mo; Choi, Jongwon; Park, Eun-Cheol
2014-03-01
Limited healthcare resources make it necessary to maximize efficiency in disease management at the country level by priority-setting according to disease burden. To make the best priority settings, it is necessary to measure health status and have standards for its judgment, as well as consider disease management trends among nations. We used 17 International Classification of Diseases (ICD) categories of potential years of life lost (YPLL) from Organization for Economic Co-operation and Development (OECD) health data for 2012, YPLL for 37 disease diagnoses from OECD health data for 2009 across 22 countries, and disability-adjusted life years (DALY) from the World Health Organization (WHO). We set a range of -1 to 1 for each YPLL per disease in a nation (position value for relative comparison, PARC). Changes over 5 years were also accounted for in the disease management index (DMI). In terms of ICD categories, the DMI indicated specific areas for priority setting for different countries with regard to managing disease treatment and diagnosis. Our study suggests that DMI is a realistic index that reflects trend changes over the past 5 years to the present state, and PARC is an easy index for identifying relative status. Moreover, unlike existing indices, DMI and PARC make it easy to conduct multiple comparisons among countries and diseases. DMI and PARC are therefore useful tools for policy implications and for future studies incorporating these and other existing indices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan
1993-01-01
Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III
1991-01-01
Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.
Vassallo, James; Beavis, John; Smith, Jason E; Wallis, Lee A
2017-05-01
Triage is a key principle in the effective management of a major incident. There are at least three different triage systems in use worldwide, and previous attempts to validate them have revealed limited sensitivity. Within a civilian adult population, there has been no work to develop an improved system. A retrospective database review of the UK Joint Theatre Trauma Registry was performed for all adult patients (>18 years) presenting to a deployed Military Treatment Facility between 2006 and 2013. Patients were defined as Priority One if they had received one or more life-saving interventions from a previously defined list. Using first recorded hospital physiological data (HR/RR/GCS), binary logistic regression models were used to derive optimum physiological ranges to predict the need for a life-saving intervention. This allowed for the derivation of the Modified Physiological Triage Tool-MPTT (GCS≥14, HR≥100, 12
Eitzen, Abby; Finlayson, Marcia; Carolan-Laing, Leanne; Nacionales, Arthur Junn; Walker, Christie; O'Connor, Josephine; Asano, Miho; Coote, Susan
2017-08-01
The purpose of this study was to identify potential items for an observational screening tool to assess safe, effective and appropriate walking aid use among people with multiple sclerosis (MS). Such a tool is needed because of the association between fall risk and mobility aid use in this population. Four individuals with MS were videotaped using one or two straight canes, crutches or a rollator in different settings. Seventeen health care professionals from Canada, Ireland and the United States were recruited, viewed the videos, and were then interviewed about the use of the devices by the individuals in the videos. Interview questions addressed safety, effectiveness and appropriateness of the device in the setting. Data were analyzed qualitatively. Coding consistency across raters was evaluated and confirmed. Nineteen codes were identified as possible items for the screening tool. The most frequent issues raised regardless of setting and device were "device used for duration/abandoned", "appropriate device", "balance and stability", "device technique", "environmental modification" and "hands free." With the identification of a number of potential tool items, researchers can now move forward with the development of the tool. This will involve consultation with both healthcare professionals and people with MS. Implications for rehabilitation Falls among people with multiple sclerosis are associated with mobility device use, and use of multiple devices is associated with greater falls risk. The ability to assess for safe, effective and efficient use of walking aids is therefore important, yet no tools currently exist for this purpose. The codes arising from this study will be used to develop a screening tool for safe, effective and efficient walking aid use with the aim of reducing falls risk.
Computational science: shifting the focus from tools to models
Hinsen, Konrad
2014-01-01
Computational techniques have revolutionized many aspects of scientific research over the last few decades. Experimentalists use computation for data analysis, processing ever bigger data sets. Theoreticians compute predictions from ever more complex models. However, traditional articles do not permit the publication of big data sets or complex models. As a consequence, these crucial pieces of information no longer enter the scientific record. Moreover, they have become prisoners of scientific software: many models exist only as software implementations, and the data are often stored in proprietary formats defined by the software. In this article, I argue that this emphasis on software tools over models and data is detrimental to science in the long term, and I propose a means by which this can be reversed. PMID:25309728
Khalifa, Abdulrahman; Meystre, Stéphane
2015-12-01
The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our approach was mostly based on existing tools adapted with minimal changes and achieved satisfactory performance with limited development effort. Copyright © 2015 Elsevier Inc. All rights reserved.
Visual management support system
Lee Anderson; Jerry Mosier; Geoffrey Chandler
1979-01-01
The Visual Management Support System (VMSS) is an extension of an existing computer program called VIEWIT, which has been extensively used by the U. S. Forest Service. The capabilities of this program lie in the rapid manipulation of large amounts of data, specifically operating as a tool to overlay or merge one set of data with another. VMSS was conceived to...
Evaluation of a Mobile Learning Organiser for University Students
ERIC Educational Resources Information Center
Corlett, Dan; Sharples, Mike; Bull, Susan; Chan, Tony
2005-01-01
This paper describes a 10-month trial of a mobile learning organiser, developed for use by university students. Implemented on a wireless-enabled Pocket PC hand-held computer, the organiser makes use of existing mobile applications as well as tools designed specifically for students to manage their learning. The trial set out to identify the…
ERIC Educational Resources Information Center
Kelsey, Kathleen D.; Lin, Hong; Franke-Dvorak, Tanya C.
2011-01-01
Wiki has been lauded as a tool that enhances collaborative writing in educational settings and moves learners toward a state of communal constructivism (Holmes, Tangney, FitzGibbon, Savage, & Mehan., 2001). Many pedagogical claims exist regarding the benefits of using wiki. However, these claims have rarely been challenged. This study used a…
ERIC Educational Resources Information Center
Botzet, Andria M.; McIlvaine, Patrick W.; Winters, Ken C.; Fahnhorst, Tamara; Dittel, Christine
2014-01-01
Accurate evaluation and documentation of the efficacy of recovery schools can be vital to the continuation and expansion of these beneficial resources. A very limited data set currently exists that examines the value of specific schools established to support adolescents and young adults in recovery; additional research is necessary. The following…
Signell, Richard; Camossi, E.
2016-01-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
NASA Astrophysics Data System (ADS)
Signell, Richard P.; Camossi, Elena
2016-05-01
Work over the last decade has resulted in standardised web services and tools that can significantly improve the efficiency and effectiveness of working with meteorological and ocean model data. While many operational modelling centres have enabled query and access to data via common web services, most small research groups have not. The penetration of this approach into the research community, where IT resources are limited, can be dramatically improved by (1) making it simple for providers to enable web service access to existing output files; (2) using free technologies that are easy to deploy and configure; and (3) providing standardised, service-based tools that work in existing research environments. We present a simple, local brokering approach that lets modellers continue to use their existing files and tools, while serving virtual data sets that can be used with standardised tools. The goal of this paper is to convince modellers that a standardised framework is not only useful but can be implemented with modest effort using free software components. We use NetCDF Markup language for data aggregation and standardisation, the THREDDS Data Server for data delivery, pycsw for data search, NCTOOLBOX (MATLAB®) and Iris (Python) for data access, and Open Geospatial Consortium Web Map Service for data preview. We illustrate the effectiveness of this approach with two use cases involving small research modelling groups at NATO and USGS.
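As an illustration of the kind of standardised access the authors advocate, the following hedged Python sketch reads a virtual aggregated dataset from a THREDDS Data Server over OPeNDAP using the netCDF4 library (rather than the Iris or NCTOOLBOX clients mentioned above); the URL and variable names are placeholders, not a real service from the paper.

```python
# Hypothetical example of standardised, service-based access to model output.
from netCDF4 import Dataset, num2date

url = "http://example.org/thredds/dodsC/roms/my_model_run"  # placeholder endpoint
nc = Dataset(url)  # OPeNDAP: only the requested slices are transferred

temp = nc.variables["temp"]          # assumed (time, depth, y, x) temperature field
time = nc.variables["ocean_time"]    # assumed time coordinate with CF units
dates = num2date(time[:], time.units)

# Pull a single surface field for the last time step without downloading the file
surface = temp[-1, -1, :, :]
print(dates[-1], surface.shape, float(surface.mean()))

nc.close()
```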
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses the simultaneous scheduling of machines, AGVs and tools in a multi-machine Flexible Manufacturing System (FMS), where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan. The performance of an FMS is expected to improve through effective utilization of its resources and proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent tool and a proven alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is first tested on 22 job sets, with makespan as the objective, for the scheduling of machines and tools where machines are allowed to share tools without considering transfer times of jobs and tools, and the results are compared with those of existing methods. The results show that SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the best sequences that minimize makespan.
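For readers unfamiliar with SOS, the following is a minimal Python sketch of the metaheuristic's three phases (mutualism, commensalism, parasitism) applied to a simple continuous test function; the scheduling application in the abstract additionally requires a sequence encoding and a makespan evaluator, which are not shown here.

```python
# Generic SOS sketch on the sphere function; parameters are invented.
import random

def sphere(x):                      # stand-in objective; lower is better
    return sum(v * v for v in x)

def sos(obj, dim=5, pop_size=20, iters=200, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]

    def clamp(x):
        return [min(hi, max(lo, v)) for v in x]

    for _ in range(iters):
        best = pop[fit.index(min(fit))]
        for i in range(pop_size):
            # Mutualism: i and a random partner j both move toward the best
            j = random.choice([k for k in range(pop_size) if k != i])
            mutual = [(a + b) / 2.0 for a, b in zip(pop[i], pop[j])]
            bf1, bf2 = random.randint(1, 2), random.randint(1, 2)
            xi = clamp([a + random.random() * (b - m * bf1)
                        for a, b, m in zip(pop[i], best, mutual)])
            xj = clamp([a + random.random() * (b - m * bf2)
                        for a, b, m in zip(pop[j], best, mutual)])
            for k, xnew in ((i, xi), (j, xj)):
                f = obj(xnew)
                if f < fit[k]:
                    pop[k], fit[k] = xnew, f

            # Commensalism: i benefits from a random j, j is unaffected
            j = random.choice([k for k in range(pop_size) if k != i])
            xi = clamp([a + random.uniform(-1, 1) * (b - c)
                        for a, b, c in zip(pop[i], best, pop[j])])
            f = obj(xi)
            if f < fit[i]:
                pop[i], fit[i] = xi, f

            # Parasitism: a mutated copy of i tries to replace a random j
            j = random.choice([k for k in range(pop_size) if k != i])
            parasite = [random.uniform(lo, hi) if random.random() < 0.3 else v
                        for v in pop[i]]
            f = obj(parasite)
            if f < fit[j]:
                pop[j], fit[j] = parasite, f

    best_i = fit.index(min(fit))
    return pop[best_i], fit[best_i]

print(sos(sphere))
```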
Hestand, Matthew S; van Galen, Michiel; Villerius, Michel P; van Ommen, Gert-Jan B; den Dunnen, Johan T; 't Hoen, Peter AC
2008-01-01
Background The identification of transcription factor binding sites is difficult since they are only a small number of nucleotides in size, resulting in large numbers of false positives and false negatives in current approaches. Computational methods to reduce false positives are to look for over-representation of transcription factor binding sites in a set of similarly regulated promoters or to look for conservation in orthologous promoter alignments. Results We have developed a novel tool, "CORE_TF" (Conserved and Over-REpresented Transcription Factor binding sites) that identifies common transcription factor binding sites in promoters of co-regulated genes. To improve upon existing binding site predictions, the tool searches for position weight matrices from the TRANSFAC database that are over-represented in an experimental set compared to a random set of promoters and identifies cross-species conservation of the predicted transcription factor binding sites. The algorithm has been evaluated with expression and chromatin-immunoprecipitation on microarray data. We also implement and demonstrate the importance of matching the random set of promoters to the experimental promoters by GC content, which is a unique feature of our tool. Conclusion The program CORE_TF is accessible in a user-friendly web interface at . It provides a table of over-represented transcription factor binding sites in the promoters of the user's input genes and a graphical view of evolutionarily conserved transcription factor binding sites. In our test data sets it successfully predicts target transcription factors and their binding sites. PMID:19036135
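A hedged sketch of one way such an over-representation comparison could be made (not CORE_TF's actual statistic): the number of promoters hit by a given matrix in the experimental set is compared with a GC-matched random set using a one-sided Fisher's exact test. The counts below are invented.

```python
# Toy over-representation test for one position weight matrix.
from scipy.stats import fisher_exact

def overrepresented(hits_exp, n_exp, hits_rand, n_rand, alpha=0.05):
    """Return (odds ratio, p-value, significant?) for one matrix."""
    table = [[hits_exp, n_exp - hits_exp],
             [hits_rand, n_rand - hits_rand]]
    odds, p = fisher_exact(table, alternative="greater")
    return odds, p, p < alpha

# e.g. matrix hits in 40 of 50 co-regulated promoters vs 120 of 500 random ones
print(overrepresented(40, 50, 120, 500))
```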
Towards Making Data Bases Practical for use in the Field
NASA Astrophysics Data System (ADS)
Fischer, T. P.; Lehnert, K. A.; Chiodini, G.; McCormick, B.; Cardellini, C.; Clor, L. E.; Cottrell, E.
2014-12-01
Geological, geochemical, and geophysical research is often field based with travel to remote areas and collection of samples and data under challenging environmental conditions. Cross-disciplinary investigations would greatly benefit from near real-time data access and visualisation within the existing framework of databases and GIS tools. An example of complex, interdisciplinary field-based and data intensive investigations is that of volcanologists and gas geochemists, who sample gases from fumaroles, hot springs, dry gas vents, hydrothermal vents and wells. Compositions of volcanic gas plumes are measured directly or by remote sensing. Soil gas fluxes from volcanic areas are measured by accumulation chamber and involve hundreds of measurements to calculate the total emission of a region. Many investigators also collect rock samples from recent or ancient volcanic eruptions. Structural, geochronological, and geophysical data collected during the same or related field campaigns complement these emissions data. All samples and data collected in the field require a set of metadata including date, time, location, sample or measurement id, and descriptive comments. Currently, most of these metadata are written in field notebooks and later transferred into a digital format. Final results such as laboratory analyses of samples and calculated flux data are tabulated for plotting, correlation with other types of data, modeling and finally publication and presentation. Data handling, organization and interpretation could be greatly streamlined by using digital tools available in the field to record metadata, assign an International Geo Sample Number (IGSN), upload measurements directly from field instruments, and arrange sample curation. Available data display tools such as GeoMapApp and existing data sets (PetDB, IRIS, UNAVCO) could be integrated to direct locations for additional measurements during a field campaign. Nearly live display of sampling locations, pictures, and comments could be used as an educational and outreach tool during sampling expeditions. Achieving these goals requires the integration of existing online data resources, with common access through a dedicated web portal.
EURRECA: development of tools to improve the alignment of micronutrient recommendations.
Matthys, C; Bucchini, L; Busstra, M C; Cavelaars, A E J M; Eleftheriou, P; Garcia-Alvarez, A; Fairweather-Tait, S; Gurinović, M; van Ommen, B; Contor, L
2010-11-01
Approaches through which reference values for micronutrients are derived, as well as the reference values themselves, vary considerably across countries. Harmonisation is needed to improve nutrition policy and public health strategies. The EURRECA (EURopean micronutrient RECommendations Aligned, http://www.eurreca.org) Network of Excellence is developing generic tools for systematically establishing and updating micronutrient reference values or recommendations. Different types of instruments (including best practice guidelines, interlinked web pages, online databases and decision trees) have been identified. The first set of instruments is for training purposes and includes mainly interactive digital learning materials. The second set of instruments comprises collection and interlinkage of diverse information sources that have widely varying contents and purposes. In general, these sources are collections of existing information. The purpose of the majority of these information sources is to provide guidance on best practice for use in a wider scientific community or for users and stakeholders of reference values. The third set of instruments includes decision trees and frameworks. The purpose of these tools is to guide non-scientists in decision making based on scientific evidence. This platform of instruments will, in particular in Central and Eastern European countries, contribute to future capacity-building development in nutrition. The use of these tools by the scientific community, the European Food Safety Authority, bodies responsible for setting national nutrient requirements and others should ultimately help to align nutrient-based recommendations across Europe. Therefore, EURRECA can contribute towards nutrition policy development and public health strategies.
2012-01-01
Background Discovery of functionally significant short, statistically overrepresented subsequence patterns (motifs) in a set of sequences is a challenging problem in bioinformatics. Oftentimes, not all sequences in the set contain a motif. These non-motif-containing sequences complicate the algorithmic discovery of motifs. Filtering the non-motif-containing sequences from the larger set of sequences while simultaneously determining the identity of the motif is, therefore, desirable and a non-trivial problem in motif discovery research. Results We describe MotifCatcher, a framework that extends the sensitivity of existing motif-finding tools by employing random sampling to effectively remove non-motif-containing sequences from the motif search. We developed two implementations of our algorithm, each built around a commonly used motif-finding tool, and applied our algorithm to three diverse chromatin immunoprecipitation (ChIP) data sets. In each case, the motif finder with the MotifCatcher extension demonstrated improved sensitivity over the motif finder alone. Our approach organizes candidate functionally significant discovered motifs into a tree, which allowed us to make additional insights. In all cases, we were able to support our findings with experimental work from the literature. Conclusions Our framework demonstrates that additional processing at the sequence entry level can significantly improve the performance of existing motif-finding tools. For each biological data set tested, we were able to propose novel biological hypotheses supported by experimental work from the literature. Specifically, in Escherichia coli, we suggested binding site motifs for 6 non-traditional LexA protein binding sites; in Saccharomyces cerevisiae, we hypothesized 2 disparate mechanisms for novel binding sites of the Cse4p protein; and in Halobacterium sp. NRC-1, we discovered subtle differences in a general transcription factor (GTF) binding site motif across several data sets. We suggest that small differences in our discovered motif could confer specificity for one or more homologous GTF proteins. We offer a free implementation of the MotifCatcher software package at http://www.bme.ucdavis.edu/facciotti/resources_data/software/. PMID:23181585
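A toy Python sketch of the random-sampling strategy described above, as an illustration of the idea rather than the MotifCatcher implementation: an existing motif finder (here replaced by a trivial most-frequent k-mer stand-in) is run on many random subsets of the input, and only motifs that recur across subsets are kept, which tends to down-weight non-motif-containing sequences.

```python
import random
from collections import Counter

def most_frequent_kmer(seqs, k=6):
    """Toy stand-in for a real motif finder (e.g. MEME): most frequent k-mer."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(1)[0][0]

def recurrent_motifs(sequences, subset_size=4, n_subsets=50, min_fraction=0.2):
    """Run the finder on random subsets; keep motifs that recur across subsets."""
    found = Counter()
    for _ in range(n_subsets):
        subset = random.sample(sequences, min(subset_size, len(sequences)))
        found[most_frequent_kmer(subset)] += 1
    return [m for m, c in found.most_common() if c >= min_fraction * n_subsets]

seqs = ["ACGTACGTGGTTAGC", "TTACGTACGTCCGAA", "CCACGTACGTTTGAA",
        "GACGTACGTGACCTA", "TGCATGCCATTGACC", "CATTGGACCTGATCC"]
print(recurrent_motifs(seqs))  # the planted ACGTAC word recurs across subsets
```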
Semantic memory: a feature-based analysis and new norms for Italian.
Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola
2013-06-01
Semantic norms for properties produced by native speakers are valuable tools for researchers interested in the structure of semantic memory and in category-specific semantic deficits in individuals following brain damage. The aims of this study were threefold. First, we sought to extend existing semantic norms by adopting an empirical approach to category (Exp. 1) and concept (Exp. 2) selection, in order to obtain a more representative set of semantic memory features. Second, we extensively outlined a new set of semantic production norms collected from Italian native speakers for 120 artifactual and natural basic-level concepts, using numerous measures and statistics following a feature-listing task (Exp. 3b). Finally, we aimed to create a new publicly accessible database, since only a few existing databases are publicly available online.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
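A generic, hypothetical illustration of the grid-based bookkeeping that such water mapping relies on, written with plain NumPy and deliberately not using SSTMap's own API: water oxygen positions from MD frames are binned onto a 3-D grid to give per-voxel occupancies, omitting the IST energy and entropy terms that SSTMap actually computes.

```python
import numpy as np

def occupancy_grid(frames, origin, spacing=0.5, shape=(20, 20, 20)):
    """frames: iterable of (n_waters, 3) arrays of oxygen positions (angstroms)."""
    grid = np.zeros(shape, dtype=float)
    n_frames = 0
    for xyz in frames:
        idx = np.floor((xyz - origin) / spacing).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
        np.add.at(grid, tuple(idx[ok].T), 1.0)   # accumulate counts per voxel
        n_frames += 1
    return grid / max(n_frames, 1)               # mean waters per voxel per frame

# Synthetic frames standing in for a parsed MD trajectory
rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 10.0, size=(30, 3)) for _ in range(100)]
grid = occupancy_grid(frames, origin=np.array([0.0, 0.0, 0.0]))
print(grid.sum(), grid.max())
```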
Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.
Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele
2015-01-01
Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
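A hedged Python sketch of the kind of per-channel pre-processing mentioned above (band-pass filtering and threshold spike detection), parallelised across channels with a process pool; array sizes, cutoffs and thresholds are invented, and this is not the authors' tool.

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, filtfilt

FS = 20_000  # assumed sampling rate in Hz

def detect_spikes(channel, low=300.0, high=3000.0, k=5.0):
    """Band-pass one channel and return indices of negative threshold crossings."""
    b, a = butter(3, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, channel)
    noise = np.median(np.abs(filtered)) / 0.6745   # robust noise estimate
    below = filtered < -k * noise
    onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1
    return onsets

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(0.0, 10.0, size=(64, 10 * FS))   # 64 channels, 10 s of noise
    with Pool() as pool:
        spikes = pool.map(detect_spikes, list(data))   # one channel per task
    print([len(s) for s in spikes[:8]])
```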
2013-01-01
Background Although research interest in hospital process orientation (HPO) is growing, the development of a measurement tool to assess process orientation (PO) has not been very successful yet. To view a hospital as a series of processes organized around patients with a similar demand seems to be an attractive proposition, but it is hard to operationalize this idea in a measurement tool that can actually measure the level of PO. This research contributes to HPO from an operations management (OM) perspective by addressing the alignment, integration and coordination of activities within patient care processes. The objective of this study was to develop and practically test a new measurement tool for assessing the degree of PO within hospitals using existing tools. Methods Through a literature search we identified a number of constructs to measure PO in hospital settings. These constructs were further operationalized, using an OM perspective. Based on five dimensions of an existing questionnaire a new HPO-measurement tool was developed to measure the degree of PO within hospitals on the basis of respondents’ perception. The HPO-measurement tool was pre-tested in a non-participating hospital and discussed with experts in a focus group. The multicentre exploratory case study was conducted in the ophthalmic practices of three different types of Dutch hospitals. In total 26 employees from three disciplines participated. After filling in the questionnaire an interview was held with each participant to check the validity and the reliability of the measurement tool. Results The application of the HPO-measurement tool, analysis of the scores and interviews with the participants made it possible to identify differences in PO performance and the areas of improvement – from a PO point of view – within each hospital. The result of refinement of the items of the measurement tool after practical testing is a set of 41 items to assess the degree of PO from an OM perspective within hospitals. Conclusions The development and practical testing of a new HPO-measurement tool improves the understanding and application of PO in hospitals and the reliability of the measurement tool. The study shows that PO is a complex concept and still appears hard to objectify. PMID:24219362
Gonçalves, Pedro D; Hagenbeek, Marie Louise; Vissers, Jan M H
2013-11-13
Although research interest in hospital process orientation (HPO) is growing, the development of a measurement tool to assess process orientation (PO) has not been very successful yet. To view a hospital as a series of processes organized around patients with a similar demand seems to be an attractive proposition, but it is hard to operationalize this idea in a measurement tool that can actually measure the level of PO. This research contributes to HPO from an operations management (OM) perspective by addressing the alignment, integration and coordination of activities within patient care processes. The objective of this study was to develop and practically test a new measurement tool for assessing the degree of PO within hospitals using existing tools. Through a literature search we identified a number of constructs to measure PO in hospital settings. These constructs were further operationalized, using an OM perspective. Based on five dimensions of an existing questionnaire a new HPO-measurement tool was developed to measure the degree of PO within hospitals on the basis of respondents' perception. The HPO-measurement tool was pre-tested in a non-participating hospital and discussed with experts in a focus group. The multicentre exploratory case study was conducted in the ophthalmic practices of three different types of Dutch hospitals. In total 26 employees from three disciplines participated. After filling in the questionnaire an interview was held with each participant to check the validity and the reliability of the measurement tool. The application of the HPO-measurement tool, analysis of the scores and interviews with the participants made it possible to identify differences in PO performance and the areas of improvement--from a PO point of view--within each hospital. The result of refinement of the items of the measurement tool after practical testing is a set of 41 items to assess the degree of PO from an OM perspective within hospitals. The development and practical testing of a new HPO-measurement tool improves the understanding and application of PO in hospitals and the reliability of the measurement tool. The study shows that PO is a complex concept and still appears hard to objectify.
Towards structured sharing of raw and derived neuroimaging data across existing resources
Keator, D.B.; Helmer, K.; Steffener, J.; Turner, J.A.; Van Erp, T.G.M.; Gadde, S.; Ashish, N.; Burns, G.A.; Nichols, B.N.
2013-01-01
Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases and there is currently no integrated access mechanism nor an accepted format for the critically important meta-data that is necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroimaging Coordinating Facility (INCF) focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data as well as associated meta-data and provenance across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery. PMID:23727024
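As a generic illustration of the kind of provenance meta-data discussed above (not the working group's actual data model or API), the sketch below captures a minimal provenance record for a derived image, its inputs, software, version and parameters, and serialises it to JSON; all file and tool names are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    output_file: str
    input_files: list
    software: str
    version: str
    parameters: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example: record how a smoothed image was derived
record = ProvenanceRecord(
    output_file="sub-01_task-rest_bold_smoothed.nii.gz",
    input_files=["sub-01_task-rest_bold.nii.gz"],
    software="example-smoothing-tool",     # placeholder tool name
    version="1.2.0",
    parameters={"fwhm_mm": 6.0},
)
print(json.dumps(asdict(record), indent=2))
```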
Design of Mobile Health Tools to Promote Goal Achievement in Self-Management Tasks
Henderson, Geoffrey; Parmanto, Bambang
2017-01-01
Background Goal-setting within rehabilitation is a common practice ultimately geared toward helping patients make functional progress. Objective The purposes of this study were to (1) qualitatively analyze data from a wellness program for patients with spina bifida (SB) and spinal cord injury (SCI) in order to generate software requirements for a goal-setting module to support their complex goal-setting routines, (2) design a prototype of a goal-setting module within an existing mobile health (mHealth) system, and (3) identify what educational content might be necessary to integrate into the system. Methods A total of 750 goals were analyzed from patients with SB and SCI enrolled in a wellness program. These goals were qualitatively analyzed in order to operationalize a set of software requirements for an mHealth goal-setting module and identify important educational content. Results Those of male sex (P=.02) and with SCI diagnosis (P<.001) were more likely to achieve goals than females or those with SB. Temporality (P<.001) and type (P<.001) of goal were associated with likelihood that the goal would be achieved. Nearly all (210/213; 98.6%) of the fact-finding goals were achieved. There was no significant difference in achievement based on goal theme. Checklists, data tracking, and fact-finding tools were identified as three functionalities that could support goal-setting and achievement in an mHealth system. Based on the qualitative analysis, a list of software requirements for a goal-setting module was generated, and a prototype was developed. Targets for educational content were also generated. Conclusions Innovative mHealth tools can be developed to support commonly set goals by individuals with disabilities. PMID:28739558
Devlin, Joseph C; Battaglia, Thomas; Blaser, Martin J; Ruggles, Kelly V
2018-06-25
Exploration of large data sets, such as shotgun metagenomic sequence or expression data, by biomedical experts and medical professionals remains a major bottleneck in the scientific discovery process. Although tools for this purpose exist for 16S ribosomal RNA sequencing analysis, there is a growing but still insufficient number of user-friendly interactive visualization workflows for easy data exploration and figure generation. The development of such platforms is necessary to accelerate and streamline microbiome laboratory research. We developed the Workflow Hub for Automated Metagenomic Exploration (WHAM!) as a web-based interactive tool capable of user-directed data visualization and statistical analysis of annotated shotgun metagenomic and metatranscriptomic data sets. WHAM! includes exploratory and hypothesis-based gene and taxa search modules for visualizing differences in microbial taxa and gene family expression across experimental groups, and for creating publication-quality figures without the need for a command line interface or in-house bioinformatics. WHAM! is an interactive and customizable tool for downstream metagenomic and metatranscriptomic analysis, providing a user-friendly interface allowing easy data exploration by microbiome and ecological experts to facilitate discovery in multi-dimensional and large-scale data sets.
The scope of cell phones in diabetes management in developing country health care settings.
Ajay, Vamadevan S; Prabhakaran, Dorairaj
2011-05-01
Diabetes has emerged as a major public health concern in developing nations. Health systems in most developing countries are yet to integrate effective prevention and control programs for diabetes into routine health care services. Given the inadequate human resources and underfunctioning health systems, we need novel and innovative approaches to combat diabetes in developing-country settings. In this regard, the tremendous advances in telecommunication technology, particularly cell phones, can be harnessed to improve diabetes care. Cell phones could serve as a tool for collecting information on surveillance, service delivery, evidence-based care, management, and supply systems pertaining to diabetes from primary care settings in addition to providing health messages as part of diabetes education. As a screening/diagnostic tool for diabetes, cell phones can aid the health workers in undertaking screening and diagnostic and follow-up care for diabetes in the community. Cell phones are also capable of acting as a vehicle for continuing medical education; a decision support system for evidence-based management; and a tool for patient education, self-management, and compliance. However, for widespread use, we need robust evaluations of cell phone applications in existing practices and appropriate interventions in diabetes. © 2011 Diabetes Technology Society.
The Scope of Cell Phones in Diabetes Management in Developing Country Health Care Settings
Ajay, Vamadevan S; Prabhakaran, Dorairaj
2011-01-01
Diabetes has emerged as a major public health concern in developing nations. Health systems in most developing countries are yet to integrate effective prevention and control programs for diabetes into routine health care services. Given the inadequate human resources and underfunctioning health systems, we need novel and innovative approaches to combat diabetes in developing-country settings. In this regard, the tremendous advances in telecommunication technology, particularly cell phones, can be harnessed to improve diabetes care. Cell phones could serve as a tool for collecting information on surveillance, service delivery, evidence-based care, management, and supply systems pertaining to diabetes from primary care settings in addition to providing health messages as part of diabetes education. As a screening/diagnostic tool for diabetes, cell phones can aid the health workers in undertaking screening and diagnostic and follow-up care for diabetes in the community. Cell phones are also capable of acting as a vehicle for continuing medical education; a decision support system for evidence-based management; and a tool for patient education, self-management, and compliance. However, for widespread use, we need robust evaluations of cell phone applications in existing practices and appropriate interventions in diabetes. PMID:21722593
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
The rise of innovative unmanned aeronautical systems and the emergence of commercial space activities have resulted in a number of relatively new aerospace organizations that are designing innovative systems and solutions. These organizations use a variety of commercial off-the-shelf and in-house-developed simulation and analysis tools including 6-degree-of-freedom (6-DOF) flight simulation tools. The increased affordability of computing capability has made high-fidelity flight simulation practical for all participants. Verification of the tools' equations-of-motion and environment models (e.g., atmosphere, gravitation, and geodesy) is desirable to assure accuracy of results. However, aside from simple textbook examples, minimal verification data exists in open literature for 6-DOF flight simulation problems. This assessment compared multiple solution trajectories to a set of verification check-cases that covered atmospheric and exo-atmospheric (i.e., orbital) flight. Each scenario consisted of predefined flight vehicles, initial conditions, and maneuvers. These scenarios were implemented and executed in a variety of analytical and real-time simulation tools. This tool-set included simulation tools in a variety of programming languages based on modified flat-Earth, round-Earth, and rotating oblate spheroidal Earth geodesy and gravitation models, and independently derived equations-of-motion and propagation techniques. The resulting simulated parameter trajectories were compared by over-plotting and difference-plotting to yield a family of solutions. In total, seven simulation tools were exercised.
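A minimal sketch of the difference-plotting comparison: resample one tool's trajectory onto the other's time base and report the worst-case deviation. The synthetic drop scenario is an assumption for illustration, not a value from the check-case report.

```python
# Sketch of comparing two simulation tools' trajectories for a shared check-case.
import numpy as np


def max_deviation(t_a, y_a, t_b, y_b):
    """Resample tool B onto tool A's time base and return the peak absolute difference."""
    return np.max(np.abs(y_a - np.interp(t_a, t_b, y_b)))


t_a = np.linspace(0.0, 10.0, 501)            # tool A time base [s]
t_b = np.linspace(0.0, 10.0, 401)            # tool B time base [s]
alt_a = 1000.0 - 0.5 * 9.80665 * t_a ** 2    # altitude from tool A (simple free fall) [m]
alt_b = 1000.0 - 0.5 * 9.80665 * t_b ** 2    # same scenario from tool B [m]

print(f"worst-case altitude difference: {max_deviation(t_a, alt_a, t_b, alt_b):.3e} m")
```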
compomics-utilities: an open-source Java library for computational proteomics.
Barsnes, Harald; Vaudel, Marc; Colaert, Niklaas; Helsens, Kenny; Sickmann, Albert; Berven, Frode S; Martens, Lennart
2011-03-08
The growing interest in the field of proteomics has increased the demand for software tools and applications that process and analyze the resulting data. And even though the purpose of these tools can vary significantly, they usually share a basic set of features, including the handling of protein and peptide sequences, the visualization of (and interaction with) spectra and chromatograms, and the parsing of results from various proteomics search engines. Developers typically spend considerable time and effort implementing these support structures, which detracts from working on the novel aspects of their tool. In order to simplify the development of proteomics tools, we have implemented an open-source support library for computational proteomics, called compomics-utilities. The library contains a broad set of features required for reading, parsing, and analyzing proteomics data. compomics-utilities is already used by a long list of existing software, ensuring library stability and continued support and development. As a user-friendly, well-documented and open-source library, compomics-utilities greatly simplifies the implementation of the basic features needed in most proteomics tools. Implemented in 100% Java, compomics-utilities is fully portable across platforms and architectures. Our library thus allows the developers to focus on the novel aspects of their tools, rather than on the basic functions, which can contribute substantially to faster development, and better tools for proteomics.
Discovering Psychological Principles by Mining Naturally Occurring Data Sets.
Goldstone, Robert L; Lupyan, Gary
2016-07-01
The very expertise with which psychologists wield their tools for achieving laboratory control may have had the unwelcome effect of blinding psychologists to the possibilities of discovering principles of behavior without conducting experiments. When creatively interrogated, a diverse range of large, real-world data sets provides powerful diagnostic tools for revealing principles of human judgment, perception, categorization, decision-making, language use, inference, problem solving, and representation. Examples of these data sets include patterns of website links, dictionaries, logs of group interactions, collections of images and image tags, text corpora, history of financial transactions, trends in Twitter tag usage and propagation, patents, consumer product sales, performance in high-stakes sporting events, dialect maps, and scientific citations. The goal of this issue is to present some exemplary case studies of mining naturally existing data sets to reveal important principles and phenomena in cognitive science, and to discuss some of the underlying issues involved with conducting traditional experiments, analyses of naturally occurring data, computational modeling, and the synthesis of all three methods. Copyright © 2016 Cognitive Science Society, Inc.
Katsahian, Sandrine; Simond Moreau, Erica; Leprovost, Damien; Lardon, Jeremy; Bousquet, Cedric; Kerdelhué, Gaétan; Abdellaoui, Redhouane; Texier, Nathalie; Burgun, Anita; Boussadi, Abdelali; Faviez, Carole
2015-01-01
Suspected adverse drug reactions (ADRs) reported by patients through social media can complement existing ADR signal detection processes. However, several studies have shown that the quality of medical information published online varies drastically, whatever the health topic addressed. The aim of this study was to apply an existing rating tool to a set of social network websites in order to assess the capability of such tools to guide experts in selecting the most suitable social network website for mining ADRs. First, we reviewed and rated 132 Internet forums and social networks according to three major criteria: the number of visits, the reputation of the forum, and the number of messages posted in relation to health and drug therapy. Second, a pharmacist reviewed the topic-oriented message boards against a small number of drug names to ensure that they were not off topic. Six experts were chosen to assess the selected Internet forums using a French scoring tool, the Net Scoring. Three different scores were computed, along with the agreement between experts for each set of scores, assessed using weighted kappa pooled by the mean. Three Internet forums were chosen at the end of the selection step. Some criteria received high scores (3-4) regardless of the website evaluated, such as accessibility (45-46) or design (34-36); conversely, some criteria consistently received poor scores, such as quantitative aspects (40-42), ethical aspects (43-44), and hyperlink updating (30-33). Kappa values were positive but very small, corresponding to weak agreement between experts. The personal opinion of each expert seems to have a major impact, undermining the relevance of the criteria. Our future work is to collect the results given by this evaluation grid and to propose a new scoring tool for assessing Internet social networks.
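For the pooling step, one common reading is pairwise weighted kappa averaged over all expert pairs; a small sketch with scikit-learn follows, where the rating matrix is a hypothetical stand-in for the experts' Net Scoring grids.

```python
# Illustrative pooling of pairwise weighted kappa across experts (ratings are made up).
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

ratings = np.array([
    [3, 2, 0, 1, 3, 2],  # expert 1's scores on six criteria (hypothetical)
    [3, 1, 0, 1, 2, 2],  # expert 2
    [2, 2, 1, 0, 3, 1],  # expert 3
])

pairwise = [
    cohen_kappa_score(ratings[i], ratings[j], weights="linear")
    for i, j in combinations(range(len(ratings)), 2)
]
print(f"pooled weighted kappa = {np.mean(pairwise):.3f}")
```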
PT-SAFE: a software tool for development and annunciation of medical audible alarms.
Bennett, Christopher L; McNeer, Richard R
2012-03-01
Recent reports by The Joint Commission as well as the Anesthesia Patient Safety Foundation have indicated that medical audible alarm effectiveness needs to be improved. Several recent studies have explored various approaches to improving the audible alarms, motivating the authors to develop real-time software capable of comparing such alarms. We sought to devise software that would allow for the development of a variety of audible alarm designs that could also integrate into existing operating room equipment configurations. The software is meant to be used as a tool for alarm researchers to quickly evaluate novel alarm designs. A software tool was developed for the purpose of creating and annunciating audible alarms. The alarms consisted of annunciators that were mapped to vital sign data received from a patient monitor. An object-oriented approach to software design was used to create a tool that is flexible and modular at run-time, can annunciate wave-files from disk, and can be programmed with MATLAB by the user to create custom alarm algorithms. The software was tested in a simulated operating room to measure technical performance and to validate the time-to-annunciation against existing equipment alarms. The software tool showed efficacy in a simulated operating room environment by providing alarm annunciation in response to physiologic and ventilator signals generated by a human patient simulator, on average 6.2 seconds faster than existing equipment alarms. Performance analysis showed that the software was capable of supporting up to 15 audible alarms on a mid-grade laptop computer before audio dropouts occurred. These results suggest that this software tool provides a foundation for rapidly staging multiple audible alarm sets from the laboratory to a simulation environment for the purpose of evaluating novel alarm designs, thus producing valuable findings for medical audible alarm standardization.
ERIC Educational Resources Information Center
Siver, Christi; Greenfest, Seth W.; Haeg, G. Claire
2016-01-01
While the literature emphasizes the importance of teaching political science students methods skills, there currently exists little guidance for how to assess student learning over the course of their time in the major. To address this gap, we develop a model set of assessment tools that may be adopted and adapted by political science departments…
ERIC Educational Resources Information Center
Talbot, Elizabeth A.; Harland, Dawn; Wieland-Alter, Wendy; Burrer, Sherry; Adams, Lisa V.
2012-01-01
Objective: Interferon-[gamma] release assays (IGRAs) are an important tool for detecting latent "Mycobacterium tuberculosis" infection (LTBI). Insufficient data exist about IGRA specificity in college health centers, most of which screen students for LTBI using the tuberculin skin test (TST). Participants: Students at a low-TB incidence college…
Technology for Improving Early Reading in Multi-Lingual Settings: Evidence from Rural South Africa
ERIC Educational Resources Information Center
Castillo, Nathan M.
2017-01-01
In September 2015, the United Nations ratified 17 Sustainable Development Goals (SDGs), including a central goal to improve the quality of learning, and attain universal literacy. As part of this effort, the UN and other funding agencies see technology as a major enabling tool for achievement of the SDGs. However, little evidence exists concerning…
Adrian S. Di Giacomo; Santiago Krapovickas
2005-01-01
In the southern part of South America, knowledge about bird species distribution is still not used as a tool for land use planning and conservation priority-setting. BirdLife International’s Important Bird Areas (IBA) Program is an appropriate vehicle for analyzing existing information about birds, and to generate new data where necessary. IBA inventories...
Simulation of recreational use in backcountry settings: an aid to management planning
David N. Cole
2002-01-01
Simulation models of recreation use patterns can be a valuable tool to managers of backcountry areas, such as wilderness areas and national parks. They can help fine-tune existing management programs, particularly in places that ration recreation use or that require the use of designated campsites. They can assist managers in evaluating the likely effects of increasing...
NASA Astrophysics Data System (ADS)
Hobley, Daniel E. J.; Adams, Jordan M.; Nudurupati, Sai Siddhartha; Hutton, Eric W. H.; Gasparini, Nicole M.; Istanbulluoglu, Erkan; Tucker, Gregory E.
2017-01-01
The ability to model surface processes and to couple them to both subsurface and atmospheric regimes has proven invaluable to research in the Earth and planetary sciences. However, creating a new model typically demands a very large investment of time, and modifying an existing model to address a new problem typically means the new work is constrained to its detriment by model adaptations for a different problem. Landlab is an open-source software framework explicitly designed to accelerate the development of new process models by providing (1) a set of tools and existing grid structures - including both regular and irregular grids - to make it faster and easier to develop new process components, or numerical implementations of physical processes; (2) a suite of stable, modular, and interoperable process components that can be combined to create an integrated model; and (3) a set of tools for data input, output, manipulation, and visualization. A set of example models built with these components is also provided. Landlab's structure makes it ideal not only for fully developed modelling applications but also for model prototyping and classroom use. Because of its modular nature, it can also act as a platform for model intercomparison and epistemic uncertainty and sensitivity analyses. Landlab exposes a standardized model interoperability interface, and is able to couple to third-party models and software. Landlab also offers tools to allow the creation of cellular automata, and allows native coupling of such models to more traditional continuous differential equation-based modules. We illustrate the principles of component coupling in Landlab using a model of landform evolution, a cellular ecohydrologic model, and a flood-wave routing model.
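The component-coupling pattern described above can be sketched in a few lines against the published Landlab API (one grid, one shared node field, one process component stepped in a loop); names follow Landlab 2.x but should be checked against the installed version, and the parameter values are arbitrary.

```python
# Sketch of Landlab-style coupling: a grid, a shared field, and a component time loop.
import numpy as np
from landlab import RasterModelGrid
from landlab.components import LinearDiffuser

grid = RasterModelGrid((25, 40), xy_spacing=10.0)            # regular raster grid, 10 m cells
z = grid.add_zeros("topographic__elevation", at="node")      # shared elevation field
z += np.random.rand(z.size)                                  # small initial roughness [m]

diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)     # hillslope diffusion [m^2/yr]

dt = 100.0                                                   # years per step
for _ in range(500):
    diffuser.run_one_step(dt)    # additional components would be stepped in this same loop

print(f"mean elevation after run: {z.mean():.4f} m")
```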
Multidisciplinary Optimization for Aerospace Using Genetic Optimization
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.
2007-01-01
In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for Design Optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, or in a loop between the executive and the tool, or both.
Collaborative workbench for cyberinfrastructure to accelerate science algorithm development
NASA Astrophysics Data System (ADS)
Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.
2013-12-01
There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
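For reference, the Preston equation on which the time-dependent model builds relates local removal to contact pressure and relative speed; the generic form is shown below (the paper's specific smoothing parameters are not reproduced here).

```latex
% Preston's equation: local material removal under a polishing/smoothing lap,
% with k_p the Preston coefficient, p the contact pressure, v the relative speed.
\frac{\mathrm{d}z}{\mathrm{d}t} = k_p\, p(x,y,t)\, v(x,y,t),
\qquad
\Delta z(x,y) = k_p \int_0^{T} p(x,y,t)\, v(x,y,t)\, \mathrm{d}t .
```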
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationship (QSAR)-based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
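A toy stand-in for the model-building step is ordinary cross-validated linear regression on a descriptor matrix; the Genetic Function Approximation additionally evolves which descriptors enter the model, a selection step omitted in this sketch, and the data below are synthetic.

```python
# Toy QSAR-style fit: linear regression on molecular descriptors with cross-validation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                  # 40 analytes x 6 descriptors (synthetic)
true_w = np.array([1.5, -0.7, 0.0, 0.3, 0.0, 0.9])
y = X @ true_w + 0.1 * rng.normal(size=40)    # sensor response, e.g. change in resistance

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f}")

model = LinearRegression().fit(X, y)          # final model for unseen test analytes
print(model.predict(X[:3]))
```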
BIG: a large-scale data integration tool for renal physiology.
Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya; Knepper, Mark A
2016-10-01
Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: "How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?" This is the type of problem that has motivated the "Big-Data" revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them required more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the necessary understanding, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as “pipelines” of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted for petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match with a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible set of tools with a multi-user interface. This set of tools is targeted at Office of Science Leadership Class computer systems and selected Office of Science application codes. We describe the contributions made by the team at the University of Wisconsin. The project built on the efforts in Open|SpeedShop funded by DOE/NNSA and the DOE/NNSA Tri-Lab community, extended Open|SpeedShop to the Office of Science Leadership Class Computing Facilities, and addressed new challenges found on these cutting-edge systems. Work done under this project at Wisconsin can be divided into two categories: new algorithms and techniques for debugging, and foundation infrastructure work on our Dyninst binary analysis and instrumentation toolkits and MRNet scalability infrastructure.
Fabp4-CreER lineage tracing reveals two distinctive coronary vascular populations.
He, Lingjuan; Tian, Xueying; Zhang, Hui; Wythe, Joshua D; Zhou, Bin
2014-11-01
Over the last two decades, genetic lineage tracing has allowed for the elucidation of the cellular origins and fates during both embryogenesis and in pathological settings in adults. Recent lineage tracing studies using the Apln-CreER tool indicated that a large number of post-natal coronary vessels do not form from pre-existing vessels. Instead, they form de novo after birth, which represents a coronary vascular population (CVP) distinct from the pre-existing one. Herein, we present new coronary vasculature lineage tracing results using a novel tool, Fabp4-CreER. Our results confirm the distinct existence of two unique CVPs. The 1st CVP, which is labelled by Fabp4-CreER, arises through angiogenic sprouting of pre-existing vessels established during early embryogenesis. The 2nd CVP is not labelled by Fabp4, suggesting that these vessels form de novo, rather than through expansion of the 1st CVP. These results support the de novo formation of vessels in the post-natal heart, which has implications for studies in cardiovascular disease and heart regeneration. © 2014 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.
Optimal motion planning using navigation measure
NASA Astrophysics Data System (ADS)
Vaidya, Umesh
2018-05-01
We introduce navigation measure as a new tool to solve the motion planning problem in the presence of static obstacles. Existence of navigation measure guarantees collision-free convergence at the final destination set beginning with almost every initial condition with respect to the Lebesgue measure. Navigation measure can be viewed as a dual to the navigation function. While the navigation function has its minimum at the final destination set and peaks at the obstacle set, navigation measure takes the maximum value at the destination set and is zero at the obstacle set. A linear programming formalism is proposed for the construction of navigation measure. Set-oriented numerical methods are utilised to obtain finite dimensional approximation of this navigation measure. Application of the proposed navigation measure-based theoretical and computational framework is demonstrated for a motion planning problem in a complex fluid flow.
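Schematically, and only schematically (this is not the paper's exact formulation), the set-oriented discretization reduces the construction to a finite linear program over nonnegative cell masses:

```latex
% Schematic LP over grid cells X_1,...,X_N, with D the destination cells and O the
% obstacle cells; A\mu \le b stands for the linear constraints induced by the
% discretized dynamics.
\max_{\mu \in \mathbb{R}^{N}_{\ge 0}} \;\; \sum_{i \in D} \mu_i
\quad \text{s.t.} \quad
\mu_i = 0 \;\; \forall i \in O,
\qquad A\mu \le b .
```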
Jørgensen, Katarina M; Haddow, Pauline C
2011-08-01
Simulation tools are playing an increasingly important role behind advances in the field of systems biology. However, the current generation of biological science students has either little or no experience with such tools. As such, this educational glitch is limiting both the potential use of such tools as well as the potential for tighter cooperation between the designers and users. Although some simulation tool producers encourage their use in teaching, little attempt has hitherto been made to analyze and discuss their suitability as an educational tool for noncomputing science students. In general, today's simulation tools assume that the user has a stronger mathematical and computing background than that which is found in most biological science curricula, thus making the introduction of such tools a considerable pedagogical challenge. This paper provides an evaluation of the pedagogical attributes of existing simulation tools for cell signal transduction based on Cognitive Load theory. Further, design recommendations for an improved educational simulation tool are provided. The study is based on simulation tools for cell signal transduction. However, the discussions are relevant to a broader biological simulation tool set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radhakrishnan, Ben D.
2012-06-30
This research project, which was conducted during the Summer and Fall of 2011, investigated some commercially available assessment tools with a focus on IT equipment to see if such tools could round out the DC Pro tool suite. In this research, the assessment capabilities of the various tools were compiled to help make “non-biased” information available to the public. This research should not be considered to be exhaustive of all existing vendor tools, although a number of vendors were contacted. Large IT equipment OEMs like IBM and Dell provide their own proprietary internal automated software, which does not work on any other IT equipment. However, we found two companies with products that showed promise in performing automated assessments for IT equipment from different OEM vendors. This report documents the research and provides a list of software products reviewed, contacts and websites, product details, discussions with specific companies, a set of recommendations, and next steps. As a result of this research, a simple 3-level approach to an IT assessment tool is proposed along with an example of an assessment using a simple IT equipment data collection tool (Level 1, spreadsheet). The tool has been reviewed with the Green Grid and LBNL staff. The initial feedback has been positive, although further refinement of the tool will be necessary. Proposed next steps include a field trial of at least two vendors’ software in two different data centers with the objectives of proving the concept, ascertaining the extent of energy and computational assessment, and evaluating ease of installation and opportunities for continuous improvement. Based on the discussions, field trials (or case studies) are proposed with two vendors: JouleX (expected to be completed in 2012) and Sentilla.
Analysis of Pharmacy Student Perceptions and Attitudes Toward Web 2.0 Tools for Educational Purposes
Zhang, Yingzhi; Kim, Jessica; Awad, Nadia I.
2015-01-01
Background: The use of Wikis, blogs, and podcasts can engage students in collaborative learning, allow peer feedback, and enhance reflective learning. However, no survey to date has been performed across all professional years of pharmacy students in order to obtain a comprehensive overview of student perceptions. Objectives: To identify the familiarity of pharmacy students with Web 2.0 resources available for medical education, and what barriers exist. Methods: This study surveyed students enrolled in the professional program of a US-accredited pharmacy school to assess their knowledge and current use of available online resources and attitudes toward the use of Web 2.0 technologies for educational purposes. Results: Of the 836 surveys distributed, 293 were collected and analyzed (35.0% response rate). Students reported using the following Web 2.0 technologies in the didactic and experiential settings, respectively: Wikipedia (88%, 70%), YouTube (87%, 41%), Khan Academy (40%, 5%), and medical or scientific blogs (25%, 38%). Although these technologies were more commonly used in the classroom, students agreed or strongly agreed such resources should be used more often in both the didactic (n = 187, 64%) and experiential settings (n = 172, 59%). The barriers associated with the use of Web 2.0 in both the didactic and experiential settings that were ranked highest among students included accuracy and quality of information and lack of familiarity among faculty members and preceptors. Conclusion: Pharmacy students across all professional years actively use Web 2.0 tools for educational purposes and believe that opportunities exist to expand use of such technologies within the didactic and experiential settings.
A portal to validated websites on cosmetic surgery: the design of an archetype.
Parikh, A R; Kok, K; Redfern, B; Clarke, A; Withey, S; Butler, P E M
2006-09-01
There has recently been an increase in the usage of the Internet as a source of patient information. It is very difficult for laypersons to establish the accuracy and validity of these medical websites. Although many website assessment tools exist, most of these are not practical. A combination of consumer- and clinician-based website assessment tools was applied to 200 websites on cosmetic surgery. The top-scoring websites were used as links from a portal website that was designed using Microsoft Macromedia Suite. Seventy-one (35.5%) websites were excluded. One hundred fifteen websites (89%) failed to reach an acceptable standard. The provision of new websites has proceeded without quality controls. Patients need to be better educated on the limitations of the Internet. This paper suggests an archetypal model, which makes efficient use of existing resources, validates them, and is easily transferable to different health settings.
ORAC: 21st Century Observing at UKIRT
NASA Astrophysics Data System (ADS)
Bridger, A.; Wright, G. S.; Tan, M.; Pickup, D. A.; Economou, F.; Currie, M. J.; Adamson, A. J.; Rees, N. P.; Purves, M. H.
The Observatory Reduction and Acquisition Control system replaces all of the existing software which interacts with the observers at UKIRT. The aim is to improve observing efficiency with a set of integrated tools that take the user from pre-observing preparation, through the acquisition of observations, to the reduction using a data-driven pipeline. ORAC is designed to be flexible and extensible, and is intended for use with all future UKIRT instruments, as well as existing telescope hardware and "legacy" instruments. It is also designed to allow integration with phase-1 and queue-scheduled observing tools in anticipation of possible future requirements. A brief overview of the project and its relationship to other systems is given. ORAC also re-uses much code from other systems and we discuss issues relating to the trade-off between reuse and the generation of new software specific to our requirements.
Ferrari, Thomas; Lombardo, Anna; Benfenati, Emilio
2018-05-14
Several methods exist to develop QSAR models automatically. Some are based on indices of the presence of atoms, others on the most similar compounds, and others on molecular descriptors. Here we introduce QSARpy v1.0, a new QSAR modeling tool based on a different approach: dissimilarity. This tool fragments the molecules of the training set to extract fragments that can be associated with a difference in the property/activity value, called modulators. If the target molecule shares part of its structure with a molecule of the training set and the differences can be explained by one or more modulators, the property/activity value of the molecule of the training set is adjusted using the value associated with the modulator(s). This tool is tested here on the n-octanol/water partition coefficient (Kow, usually expressed in logarithmic units as log Kow). It is a key parameter in risk assessment since it is a measure of hydrophobicity. Its widespread use makes these estimation methods very useful for reducing testing costs. Using QSARpy v1.0, we obtained a new model to predict log Kow with accurate performance (RMSE 0.43 and R² 0.94 for the external test set), comparing favorably with other programs. QSARpy is freely available on request. Copyright © 2018 Elsevier B.V. All rights reserved.
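The adjustment step can be pictured with a small, purely illustrative sketch; the fragment names, modulator values, and lookup logic below are hypothetical and are not QSARpy's actual data structures or algorithm.

```python
# Illustrative fragment/modulator adjustment in the spirit of the described approach.
TRAINING = {
    # hypothetical molecule -> (set of fragments, measured log Kow)
    "mol_A": ({"benzene", "OH"}, 1.46),
}
MODULATORS = {
    # hypothetical fragment swap -> additive shift in log Kow
    ("OH", "CH3"): 1.0,
}


def predict_log_kow(target_fragments):
    """Adjust a training molecule's value by the modulators explaining the difference."""
    for fragments, value in TRAINING.values():
        if not (fragments & target_fragments):
            continue
        removed = fragments - target_fragments
        added = target_fragments - fragments
        if len(removed) == 1 and len(added) == 1:
            key = (next(iter(removed)), next(iter(added)))
            if key in MODULATORS:
                return value + MODULATORS[key]
    return None  # no training molecule plus known modulators explains the target


print(predict_log_kow({"benzene", "CH3"}))  # 1.46 + 1.0 = 2.46 with these toy values
```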
Conceptualisation and development of the Conversational Health Literacy Assessment Tool (CHAT).
O'Hara, Jonathan; Hawkins, Melanie; Batterham, Roy; Dodson, Sarity; Osborne, Richard H; Beauchamp, Alison
2018-03-22
The aim of this study was to develop a tool to support health workers' ability to identify patients' multidimensional health literacy strengths and challenges. The tool was intended to be suitable for administration in healthcare settings where health workers must identify health literacy priorities as the basis for person-centred care. Development was based on a qualitative co-design process that used the Health Literacy Questionnaire (HLQ) as a framework to generate questions. Health workers were recruited to participate in an online consultation, a workshop, and two rounds of pilot testing. Participating health workers identified and refined ten questions that target five areas of assessment: supportive professional relationships, supportive personal relationships, health information access and comprehension, current health behaviours, and health promotion barriers and support. Preliminary evidence suggests that application of the Conversational Health Literacy Assessment Tool (CHAT) can support health workers to better understand the health literacy challenges and supportive resources of their patients. As an integrated clinical process, the CHAT can supplement existing intake and assessment procedures across healthcare settings to give insight into patients' circumstances so that decisions about care can be tailored to be more appropriate and effective.
Skjerdal, Taran; Gefferth, Andras; Spajic, Miroslav; Estanga, Edurne Gaston; de Cecare, Alessandra; Vitali, Silvia; Pasquali, Frederique; Bovo, Federica; Manfreda, Gerardo; Mancusi, Rocco; Trevisiani, Marcello; Tessema, Girum Tadesse; Fagereng, Tone; Moen, Lena Haugland; Lyshaug, Lars; Koidis, Anastasios; Delgado-Pando, Gonzalo; Stratakos, Alexandros Ch; Boeri, Marco; From, Cecilie; Syed, Hyat; Muccioli, Mirko; Mulazzani, Roberto; Halbert, Catherine
2017-01-01
A prototype decision support IT-tool for the food industry was developed in the STARTEC project. Typical processes and decision steps were mapped using real life production scenarios of participating food companies manufacturing complex ready-to-eat foods. Companies looked for a more integrated approach when making food safety decisions that would align with existing HACCP systems. The tool was designed with shelf life assessments and data on safety, quality, and costs, using a pasta salad meal as a case product. The process flow chart was used as starting point, with simulation options at each process step. Key parameters like pH, water activity, costs of ingredients and salaries, and default models for calculations of Listeria monocytogenes, quality scores, and vitamin C, were placed in an interactive database. Customization of the models and settings was possible on the user-interface. The simulation module outputs were provided as detailed curves or categorized as "good"; "sufficient"; or "corrective action needed" based on threshold limit values set by the user. Possible corrective actions were suggested by the system. The tool was tested and approved by end-users based on selected ready-to-eat food products. Compared to other decision support tools, the STARTEC-tool is product-specific and multidisciplinary and includes interpretation and targeted recommendations for end-users.
Gefferth, Andras; Spajic, Miroslav; Estanga, Edurne Gaston; Vitali, Silvia; Pasquali, Frederique; Bovo, Federica; Manfreda, Gerardo; Mancusi, Rocco; Tessema, Girum Tadesse; Fagereng, Tone; Moen, Lena Haugland; Lyshaug, Lars; Koidis, Anastasios; Delgado-Pando, Gonzalo; Stratakos, Alexandros Ch.; Boeri, Marco; From, Cecilie; Syed, Hyat; Muccioli, Mirko; Mulazzani, Roberto; Halbert, Catherine
2017-01-01
A prototype decision support IT-tool for the food industry was developed in the STARTEC project. Typical processes and decision steps were mapped using real life production scenarios of participating food companies manufacturing complex ready-to-eat foods. Companies looked for a more integrated approach when making food safety decisions that would align with existing HACCP systems. The tool was designed with shelf life assessments and data on safety, quality, and costs, using a pasta salad meal as a case product. The process flow chart was used as starting point, with simulation options at each process step. Key parameters like pH, water activity, costs of ingredients and salaries, and default models for calculations of Listeria monocytogenes, quality scores, and vitamin C, were placed in an interactive database. Customization of the models and settings was possible on the user-interface. The simulation module outputs were provided as detailed curves or categorized as “good”; “sufficient”; or “corrective action needed” based on threshold limit values set by the user. Possible corrective actions were suggested by the system. The tool was tested and approved by end-users based on selected ready-to-eat food products. Compared to other decision support tools, the STARTEC-tool is product-specific and multidisciplinary and includes interpretation and targeted recommendations for end-users. PMID:29457031
A literature search tool for intelligent extraction of disease-associated genes.
Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P
2014-01-01
To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than other general purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard manually updated gene-disorder databases and comparison with automated databases of similar functionality we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
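A minimal sketch of the retrieve-then-match pattern using Biopython's Entrez interface; the query string and the crude keyword rule are illustrative assumptions, not the query or rules the authors validated.

```python
# Sketch: pull disorder-related PubMed abstracts and flag candidate gene mentions.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

query = '"autism spectrum disorder"[Title/Abstract] AND gene[Title/Abstract]'
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
pmids = Entrez.read(handle)["IdList"]
handle.close()

handle = Entrez.efetch(db="pubmed", id=",".join(pmids), rettype="abstract", retmode="text")
abstract_text = handle.read()
handle.close()

# Crude rule-based pass: all-caps tokens near the word "associated" as gene candidates.
for line in abstract_text.splitlines():
    if "associated" in line.lower():
        candidates = [w.strip(".,;()") for w in line.split() if w.isupper() and 2 < len(w) < 8]
        if candidates:
            print(candidates, "<-", line.strip()[:80])
```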
Email recruitment to use web decision support tools for pneumonia.
Flanagan, James R; Peterson, Michael; Dayton, Charles; Strommer Pace, Lori; Plank, Andrew; Walker, Kristy; Carlson, William S
2002-01-01
Application of guidelines to improve clinical decisions for Community Acquired Pneumonia (CAP) patients depends on accurate information about specific facts of each case and on presenting guideline support at the time decisions are being made. We report here on a system designed to solicit information from physicians about their CAP patients in order to classify CAP and present appropriate guidelines for type of care, length of stay, and use of antibiotics. We used elements of three existing information systems to achieve these goals: professionals coding diagnoses captured by the existing clinical information system (CIS), email, and web-based decision support tools including a pneumonia severity evaluation tool (SET). The non-secure IS components (email and web) were able to link to information in the CIS using tokens that do not reveal confidential patient-identifiable information. We examined physicians' response to this strategy and the accuracy of pneumonia classification using this approach compared to chart review as a gold standard. On average, physicians responded to email solicitations 50% of the time over the 14-month study. Using this gold standard, we also examined various information triggers for case finding. Professional coding of the primary reason for admission as pneumonia was fairly sensitive as an indicator of CAP. Physician use of the web SET was insensitive but fairly specific. Pneumonia classification using the SET was very reliable compared to experts' chart review using the same algorithm. We examined the distribution of pneumonia severity for cases found by the various information triggers and, for each severity level, the average length of stay. The distributions found by both chart review and the SET demonstrated a shift toward more severe cases being admitted compared to only 3 years ago. The length of stay for each level of severity is above the expectations published in guidelines, even for cases of true CAP confirmed by chart review. We suggest that the Fine classification system may not adequately describe patients in this setting. Physicians frequently responded that the guidelines presented did not fit their patients.
Schmidt-Hansen, Mia; Berendse, Sabine; Hamilton, Willie; Baldwin, David R
2017-01-01
Background Lung cancer is the leading cause of cancer deaths. Around 70% of patients first presenting to specialist care have advanced disease, at which point current treatments have little effect on survival. The issue for primary care is how to recognise patients earlier and investigate appropriately. This requires an assessment of the risk of lung cancer. Aim The aim of this study was to systematically review the existing risk prediction tools for patients presenting in primary care with symptoms that may indicate lung cancer. Design and setting Systematic review of primary care data. Method Medline, PreMedline, Embase, the Cochrane Library, Web of Science, and ISI Proceedings (1980 to March 2016) were searched. The final list of included studies was agreed between two of the authors, who also appraised and summarised them. Results Seven studies with between 1482 and 2 406 127 patients were included. The tools were all based on UK primary care data, but differed in complexity of development, number/type of variables examined/included, and outcome time frame. There were four multivariable tools with internal validation areas under the curve between 0.88 and 0.92. The tools all had a number of limitations, and none have been externally validated, or had their clinical and cost impact examined. Conclusion There is insufficient evidence for the recommendation of any one of the available risk prediction tools. However, some multivariable tools showed promising discrimination. What is needed to guide clinical practice is both external validation of the existing tools and a comparative study, so that the best tools can be incorporated into clinical decision tools used in primary care. PMID:28483820
The development of an online decision support tool for organizational readiness for change.
Khan, Sobia; Timmings, Caitlyn; Moore, Julia E; Marquez, Christine; Pyka, Kasha; Gheihman, Galina; Straus, Sharon E
2014-05-10
Much importance has been placed on assessing readiness for change as one of the earliest steps of implementation, but measuring it can be a complex and daunting task. Organizations and individuals struggle with how to reliably and accurately measure readiness for change. Several measures have been developed to help organizations assess readiness, but these are often underused due to the difficulty of selecting the right measure. In response to this challenge, we will develop and test a prototype of a decision support tool that is designed to guide individuals interested in implementation in the selection of an appropriate readiness assessment measure for their setting. A multi-phase approach will be used to develop the decision support tool. First, we will identify key measures for assessing organizational readiness for change from a recently completed systematic review. Included measures will be those developed for healthcare settings (e.g., acute care, public health, mental health) and that have been deemed valid and reliable. Second, study investigators and field experts will engage in a mapping exercise to categorize individual items of included measures according to key readiness constructs from an existing framework. Third, a stakeholder panel will be recruited and consulted to determine the feasibility and relevance of the selected measures using a modified Delphi process. Fourth, findings from the mapping exercise and stakeholder consultation will inform the development of a decision support tool that will guide users in appropriately selecting change readiness measures. Fifth, the tool will undergo usability testing. Our proposed decision support tool will address current challenges in the field of organizational change readiness by aiding individuals in selecting a valid and reliable assessment measure that is relevant to user needs and practice settings. We anticipate that implementers and researchers who use our tool will be more likely to conduct readiness for change assessments in their settings when planning for implementation. This, in turn, may contribute to more successful implementation outcomes. We will test this tool in a future study to determine its efficacy and impact on implementation processes.
Wong, Fiona; Stevens, Denise; O'Connor-Duffany, Kathleen; Siegel, Karen; Gao, Yue
2011-03-07
Novel efforts and accompanying tools are needed to tackle the global burden of chronic disease. This paper presents an approach to describe the environments in which people live, work, and play. The Community Health Environment Scan Survey (CHESS) is an empirical assessment tool that measures the availability and accessibility of healthy lifestyle options. CHESS reveals existing community assets as well as opportunities for change, shaping community intervention planning efforts by focusing on community-relevant opportunities to address the three key risk factors for chronic disease (i.e., unhealthy diet, physical inactivity, and tobacco use). The CHESS tool was developed following a review of existing auditing tools and in consultation with experts. It is based on the social-ecological model and is adaptable to diverse settings in developed and developing countries throughout the world. For illustrative purposes, baseline results from the Community Interventions for Health (CIH) Mexico site are used, where the CHESS tool assessed 583 food stores and 168 restaurants. Comparisons between individual-level survey data from schools and community-level CHESS data are made to demonstrate the utility of the tool in strategically guiding intervention activities. The environments where people live, work, and play are key factors in determining their diet, levels of physical activity, and tobacco use. CHESS is the first tool of its kind that systematically and simultaneously examines how built environments encourage or discourage healthy eating, physical activity, and tobacco use. CHESS can help to design community interventions to prevent chronic disease and guide healthy urban planning. © 2011 Fiona Wong et al.
ERIC Educational Resources Information Center
Smiar, Karen; Mendez, J. D.
2016-01-01
Molecular model kits have been used in chemistry classrooms for decades but have seen very little recent innovation. Using 3D printing, three sets of physical models were created for a first semester, introductory chemistry course. Students manipulated these interactive models during class activities as a supplement to existing teaching tools for…
ERIC Educational Resources Information Center
Garcia, Jorge; Zeglin, Robert J.; Matray, Shari; Froehlich, Robert; Marable, Ronica; McGuire-Kuletz, Maureen
2016-01-01
Purpose: The purpose of this article was to gather descriptive data on the professional use of social media in public rehabilitation settings and to analyze existing social media policies in those agencies through content analysis. Methods: The authors sent a survey to all state administrators or directors of these agencies (N = 50) in the United…
Human Behavioral Representations with Realistic Personality and Cultural Characteristics
2005-06-01
personality factors as customizations to an underlying formally rational symbolic architecture, PAC uses dimensions of personality, emotion, and culture as ... foundations for the cognitive process. The structure of PAC allows it to function as a personality/emotional layer that can be used stand-alone or ... integrated with existing constrained-rationality cognitive architectures. In addition, a set of tools was developed to support the authoring
NASA Astrophysics Data System (ADS)
Chen, Mingjun; Li, Ziang; Yu, Bo; Peng, Hui; Fang, Zhen
2013-09-01
In the grinding of high-quality fused silica parts with complex surfaces or structures using a small-diameter ball-headed metal-bonded diamond wheel, the existing dressing methods are not suitable for dressing the ball-headed diamond wheel precisely, because they are either on-line, in-process dressing methods that may cause collision problems, or they do not consider the effects of tool setting error and electrode wear. An on-machine precision preparation and dressing method based on electrical discharge machining is proposed for the ball-headed diamond wheel. Using this method, the small-diameter cylindrical diamond wheel is shaped into a hemispherical-headed form. The obtained ball-headed diamond wheel is dressed after several grinding passes to recover the geometrical accuracy and sharpness lost to wheel wear. A tool setting method based on a high-precision optical system is presented to reduce the wheel center setting error and dimension error. The effect of electrode tool wear is investigated by electrical dressing experiments, and an electrode tool wear compensation model is established from the experimental results, which show that the wear ratio coefficient K' tends to be constant as the electrode feed length increases, with a mean value of 0.156. Grinding experiments on fused silica are carried out on a test bench to evaluate the performance of the preparation and dressing method. The experimental results show that the surface roughness of the finished workpiece is 0.03 μm. The effect of the grinding parameters and dressing frequency on the surface roughness is investigated based on the roughness measurements. This research provides an on-machine preparation and dressing method for ball-headed metal-bonded diamond wheels used in the grinding of fused silica, addressing both the tool setting method and the effect of electrode tool wear.
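One plausible way to read the compensation model, purely for illustration: if the electrode loses a fraction K' of every unit of commanded feed, the feed must be stretched so that the net engagement still equals the target. The formula below is an assumption consistent with the reported mean K' of 0.156, not the paper's derivation.

```python
# Illustrative electrode-wear feed compensation (assumed relation, not the paper's model).
K_PRIME = 0.156  # mean wear ratio coefficient reported in the abstract


def compensated_feed(target_engagement_mm, k_prime=K_PRIME):
    """Feed length such that feed minus wear (= k_prime * feed) equals the target."""
    return target_engagement_mm / (1.0 - k_prime)


feed = compensated_feed(1.0)
print(f"commanded feed: {feed:.3f} mm, expected electrode wear: {K_PRIME * feed:.3f} mm")
```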
Preparing WIND for the STEREO Mission
NASA Astrophysics Data System (ADS)
Schroeder, P.; Ogilve, K.; Szabo, A.; Lin, R.; Luhmann, J.
2006-05-01
The upcoming STEREO mission's IMPACT and PLASTIC investigations will provide the first opportunity for long duration, detailed observations of 1 AU magnetic field structures, plasma ions and electrons, suprathermal electrons, and energetic particles at points bracketing Earth's heliospheric location. Stereoscopic/3D information from the STEREO SECCHI imagers and SWAVES radio experiment will make it possible to use both multipoint and quadrature studies to connect interplanetary Coronal Mass Ejections (ICME) and solar wind structures to CMEs and coronal holes observed at the Sun. To fully exploit these unique data sets, tight integration with similarly equipped missions at L1 will be essential, particularly WIND and ACE. The STEREO mission is building novel data analysis tools to take advantage of the mission's scientific potential. These tools will require reliable access and a well-documented interface to the L1 data sets. Such an interface already exists for ACE through the ACE Science Center. We plan to provide a similar service for the WIND mission that will supplement existing CDAWeb services. Building on tools also being developed for STEREO, we will create a SOAP application program interface (API) which will allow both our STEREO/WIND/ACE interactive browser and third-party software to access WIND data as a seamless and integral part of the STEREO mission. The API will also allow for more advanced forms of data mining than currently available through other data web services. Access will be provided to WIND-specific data analysis software as well. The development of cross-spacecraft data analysis tools will allow a larger scientific community to combine STEREO's unique in-situ data with those of other missions, particularly the L1 missions, and, therefore, to maximize STEREO's scientific potential in gaining a greater understanding of the heliosphere.
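As an illustration of how client code might eventually consume such a SOAP interface, a hedged sketch with the zeep library follows; the WSDL location and the GetTimeSeries operation are hypothetical placeholders, since the WIND service described here was still being planned.

```python
# Hypothetical sketch of calling a SOAP data service with zeep.
# The WSDL URL and operation name are placeholders, not a real WIND endpoint.
from zeep import Client

client = Client("https://example.org/wind/service?wsdl")  # placeholder WSDL

# Request a (hypothetical) magnetic field time series bracketing a STEREO event.
series = client.service.GetTimeSeries(
    instrument="MFI",
    start="2007-01-01T00:00:00Z",
    stop="2007-01-02T00:00:00Z",
)
for point in series:
    print(point)
```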
Comparing genome versus proteome-based identification of clinical bacterial isolates.
Galata, Valentina; Backes, Christina; Laczny, Cédric Christian; Hemmrich-Stanisak, Georg; Li, Howard; Smoot, Laura; Posch, Andreas Emanuel; Schmolke, Susanne; Bischoff, Markus; von Müller, Lutz; Plum, Achim; Franke, Andre; Keller, Andreas
2018-05-01
Whole-genome sequencing (WGS) is gaining importance in the analysis of bacterial cultures derived from patients with infectious diseases. Existing computational tools for WGS-based identification have, however, been evaluated on previously defined data, thereby relying uncritically on the available taxonomic information. Here, we newly sequenced 846 clinical gram-negative bacterial isolates representing multiple distinct genera and compared the performance of five tools (CLARK, Kaiju, Kraken, DIAMOND/MEGAN and TUIT). To establish a faithful 'gold standard', the expert-driven taxonomy was compared with identifications based on matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) analysis. Additionally, the tools were also evaluated using a data set of 200 Staphylococcus aureus isolates. CLARK and Kraken (with k = 31) performed best, with 626 (100%) and 193 (99.5%) correct species classifications for the gram-negative and S. aureus isolates, respectively. Moreover, CLARK and Kraken demonstrated the highest mean F-measure values (85.5/87.9% and 94.4/94.7% for the two data sets, respectively) in comparison with DIAMOND/MEGAN (71 and 85.3%), Kaiju (41.8 and 18.9%) and TUIT (34.5 and 86.5%). Finally, CLARK, Kaiju and Kraken outperformed the other tools by a factor of 30 to 170 in terms of runtime. We conclude that the application of nucleotide-based tools using k-mers, e.g. CLARK or Kraken, allows for accurate and fast taxonomic characterization of bacterial isolates from WGS data. Hence, our results suggest WGS-based genotyping to be a promising alternative to MS-based biotyping in clinical settings. Moreover, we suggest that complementary information should be used for the evaluation of taxonomic classification tools, as public databases may suffer from suboptimal annotations.
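For readers who want to reproduce species-level metrics of this kind, the following sketch computes precision, recall and a macro-averaged F-measure from paired true/predicted labels. It assumes the F-measure is averaged per species, a common convention; the exact averaging used in the study is not specified in the abstract.

```python
def macro_f_measure(true_labels, predicted_labels):
    """Macro-averaged F-measure over species (assumed convention, for illustration)."""
    f_scores = []
    for s in set(true_labels):
        tp = sum(t == s and p == s for t, p in zip(true_labels, predicted_labels))
        fp = sum(t != s and p == s for t, p in zip(true_labels, predicted_labels))
        fn = sum(t == s and p != s for t, p in zip(true_labels, predicted_labels))
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f_scores.append(2 * precision * recall / (precision + recall)
                        if (precision + recall) else 0.0)
    return sum(f_scores) / len(f_scores)

# Example with toy labels (not study data):
print(macro_f_measure(["E.coli", "E.coli", "K.pneumoniae"],
                      ["E.coli", "K.pneumoniae", "K.pneumoniae"]))
```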
Using Performance Tools to Support Experiments in HPC Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian
2014-01-01
The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when errors (failures) occur. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.
CMS Configuration Editor: GUI based application for user analysis job
NASA Astrophysics Data System (ADS)
de Cosa, A.
2011-12-01
We present the user interface and the software architecture of the Configuration Editor for the CMS experiment. The analysis workflow is organized in a modular way, integrated within the CMS framework, which organizes user analysis code flexibly. The Python scripting language is adopted to define the job configuration that drives the analysis workflow. Developing analysis jobs and managing the configuration of the many required modules can be a challenging task for users, especially newcomers. For this reason a graphical tool has been conceived to edit and inspect configuration files. A set of common analysis tools defined in the CMS Physics Analysis Toolkit (PAT) can be steered and configured using the Config Editor. A user-defined analysis workflow can be produced starting from a standard configuration file, applying and configuring PAT tools according to the specific user requirements. CMS users can adopt this tool, the Config Editor, to create their analyses while visualizing in real time the effects of their actions. They can visualize the structure of their configuration, look at the modules included in the workflow, inspect the dependencies among the modules and check the data flow. They can see the values to which parameters are set and change them according to the needs of their analysis task. Integrating common tools into the GUI required adopting an object-oriented structure in the Python definition of the PAT tools and defining a layer of abstraction from which all PAT tools inherit.
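To make the object being edited concrete, here is a minimal sketch of the kind of Python configuration file the Config Editor inspects. It uses the standard CMSSW configuration syntax, but the module, dataset and parameter names are illustrative, and it only runs inside a CMS software environment.

```python
# Minimal sketch of a CMSSW-style analysis configuration; names are illustrative.
import FWCore.ParameterSet.Config as cms

process = cms.Process("USERANALYSIS")

# Input source: a (hypothetical) PAT tuple file.
process.source = cms.Source("PoolSource",
    fileNames=cms.untracked.vstring("file:patTuple.root"))
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(1000))

# A user analysis module with configurable parameters.
process.myAnalyzer = cms.EDAnalyzer("MuonAnalyzer",
    muonSource=cms.InputTag("selectedPatMuons"),
    minPt=cms.double(20.0))

# The path defines the data flow that the GUI visualizes.
process.p = cms.Path(process.myAnalyzer)
```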
MIA - A free and open source software for gray scale medical image analysis
2013-01-01
Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++ that gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface, they are usually quite task specific, and they do not provide a clear path when one wants to shape a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed. PMID:24119305
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++ that gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface, they are usually quite task specific, and they do not provide a clear path when one wants to shape a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high level languages, an approach that is especially useful when large data sets need to be processed.
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
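The nuclear norm regularized problem mentioned above is typically solved with proximal methods whose core step is singular value thresholding. The sketch below shows that standard building block in NumPy; it is not the CTDA solver itself, only the operator such solvers iterate.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of tau * nuclear norm: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: thresholding a noisy low-rank matrix pushes it back toward low rank.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
noisy = low_rank + 0.1 * rng.standard_normal((50, 40))
recovered = singular_value_threshold(noisy, tau=1.0)
print(np.linalg.matrix_rank(recovered, tol=1e-6))
```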
A review of training research and virtual reality simulators for the da Vinci surgical system.
Liu, May; Curet, Myriam
2015-01-01
PHENOMENON: Virtual reality simulators are the subject of several recent studies of skills training for robot-assisted surgery. Yet no consensus exists regarding what a core skill set comprises or how to measure skill performance. Defining a core skill set and relevant metrics would help surgical educators evaluate different simulators. This review draws from published research to propose a core technical skill set for using the da Vinci surgeon console. Publications on three commercial simulators were used to evaluate the simulators' content addressing these skills and associated metrics. An analysis of published research suggests that a core technical skill set for operating the surgeon console includes bimanual wristed manipulation, camera control, master clutching to manage hand position, use of third instrument arm, activating energy sources, appropriate depth perception, and awareness of forces applied by instruments. Validity studies of three commercial virtual reality simulators for robot-assisted surgery suggest that all three have comparable content and metrics. However, none have comprehensive content and metrics for all core skills. INSIGHTS: Virtual reality simulation remains a promising tool to support skill training for robot-assisted surgery, yet existing commercial simulator content is inadequate for performing and assessing a comprehensive basic skill set. The results of this evaluation help identify opportunities and challenges that exist for future developments in virtual reality simulation for robot-assisted surgery. Specifically, the inclusion of educational experts in the development cycle alongside clinical and technological experts is recommended.
BIG: a large-scale data integration tool for renal physiology
Zhao, Yue; Yang, Chin-Rang; Raghuram, Viswanathan; Parulekar, Jaya
2016-01-01
Due to recent advances in high-throughput techniques, we and others have generated multiple proteomic and transcriptomic databases to describe and quantify gene expression, protein abundance, or cellular signaling on the scale of the whole genome/proteome in kidney cells. The existence of so much data from diverse sources raises the following question: “How can researchers find information efficiently for a given gene product over all of these data sets without searching each data set individually?” This is the type of problem that has motivated the “Big-Data” revolution in Data Science, which has driven progress in fields such as marketing. Here we present an online Big-Data tool called BIG (Biological Information Gatherer) that allows users to submit a single online query to obtain all relevant information from all indexed databases. BIG is accessible at http://big.nhlbi.nih.gov/. PMID:27279488
Assessment of nursing workload in adult psychiatric inpatient units: a scoping review.
Sousa, C; Seabra, P
2018-05-16
No systematic reviews of measurement tools for adult psychiatric inpatient settings exist in the literature, and further research is therefore required to identify approaches for calculating safe nurse staffing levels based on patients' care needs in adult psychiatric inpatient units. The aim was to identify instruments that enable an assessment of nursing workload in psychiatric settings. Method: A scoping review was conducted. Four studies were identified, with five instruments used to support the calculation of staff needs and workload. All four studies present methodological limitations. Two instruments have already been adapted to this specific context, but validation studies are lacking. The findings indicate that the tools used to evaluate nursing workload in these settings require further development, with a concomitant need for more research to clarify the definition of nursing workload and to identify the factors with the greatest impact on it. This review highlights the need to develop tools for assessing workload in psychiatric inpatient units that embrace both patient-related and non-patient-related activities. The great challenge is to achieve a sensitive measurement of the workload resulting from nurses' psychotherapeutic interventions, an important component of treatment for many patients. This article is protected by copyright. All rights reserved.
EuPathDB: the eukaryotic pathogen genomics database resource
Aurrecoechea, Cristina; Barreto, Ana; Basenko, Evelina Y.; Brestelli, John; Brunk, Brian P.; Cade, Shon; Crouch, Kathryn; Doherty, Ryan; Falke, Dave; Fischer, Steve; Gajria, Bindu; Harb, Omar S.; Heiges, Mark; Hertz-Fowler, Christiane; Hu, Sufen; Iodice, John; Kissinger, Jessica C.; Lawrence, Cris; Li, Wei; Pinney, Deborah F.; Pulman, Jane A.; Roos, David S.; Shanmugasundram, Achchuthan; Silva-Franco, Fatima; Steinbiss, Sascha; Stoeckert, Christian J.; Spruill, Drew; Wang, Haiming; Warrenfeltz, Susanne; Zheng, Jie
2017-01-01
The Eukaryotic Pathogen Genomics Database Resource (EuPathDB, http://eupathdb.org) is a collection of databases covering 170+ eukaryotic pathogens (protists & fungi), along with relevant free-living and non-pathogenic species, and select pathogen hosts. To facilitate the discovery of meaningful biological relationships, the databases couple preconfigured searches with visualization and analysis tools for comprehensive data mining via intuitive graphical interfaces and APIs. All data are analyzed with the same workflows, including creation of gene orthology profiles, so data are easily compared across data sets, data types and organisms. EuPathDB is updated with numerous new analysis tools, features, data sets and data types. New tools include GO, metabolic pathway and word enrichment analyses plus an online workspace for analysis of personal, non-public, large-scale data. Expanded data content is mostly genomic and functional genomic data while new data types include protein microarray, metabolic pathways, compounds, quantitative proteomics, copy number variation, and polysomal transcriptomics. New features include consistent categorization of searches, data sets and genome browser tracks; redesigned gene pages; effective integration of alternative transcripts; and a EuPathDB Galaxy instance for private analyses of a user's data. Forthcoming upgrades include user workspaces for private integration of data with existing EuPathDB data and improved integration and presentation of host–pathogen interactions. PMID:27903906
Dziadzko, Mikhail A; Herasevich, Vitaly; Sen, Ayan; Pickering, Brian W; Knight, Ann-Marie A; Moreno Franco, Pablo
2016-04-01
Failure to rapidly identify high-value information because of inappropriate output may undermine user acceptance and satisfaction. The information needs of different intensive care unit (ICU) providers are not the same, which can obstruct successful implementation of electronic medical record (EMR) systems. We evaluated the implementation experience and satisfaction of providers using a novel EMR interface, based on the information needs of ICU providers, in the context of an existing EMR system. This before-after study was performed in the ICU setting at two tertiary care hospitals from October 2013 through November 2014. Surveys were delivered to ICU providers before and after implementation of the novel EMR interface. Overall satisfaction and acceptance were reported for both interfaces. A total of 246 before (existing EMR) and 115 after (existing EMR + novel EMR interface) surveys were analyzed; 14% of respondents were prescribers and 86% were non-prescribers. Non-prescribers were more satisfied with the existing EMR, whereas prescribers were more satisfied with the novel EMR interface. Both groups reported easier data gathering, routine tasks and rounding, and better fostering of teamwork with the novel EMR interface. This interface was the primary tool for 18% of respondents after implementation, and 73% of respondents intended to use it further. Non-prescribers reported an intention to use the novel interface as their primary tool for information gathering. Compliance with and acceptance of the new system were not related to previous duration of work in the ICU but improved with the length of EMR interface usage. Task-specific and role-specific considerations are necessary for the design and successful implementation of an EMR interface. Differences in user workflows lead to differences in how EMR data are used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Fellas, Antoni; Singh-Grewal, Davinder; Santos, Derek; Coda, Andrea
2018-01-01
Juvenile idiopathic arthritis (JIA) is the most common form of rheumatic disease in childhood and adolescents, affecting between 16 and 150 per 100,000 young persons below the age of 16. The lower limb is commonly affected in JIA, with joint swelling and tenderness often observed as a result of active synovitis. The objective of this scoping review is to identify the existence of physical examination (PE) tools to identify and record swollen and tender lower limb joints in children with JIA. Two reviewers individually screened the eligibility of titles and abstracts retrieved from the following online databases: MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and CINAHL. Studies that proposed and validated a comprehensive lower limb PE tool were included in this scoping review. After removal of duplicates, 1232 citations were retrieved, in which twelve were identified as potentially eligible. No studies met the set criteria for inclusion. Further research is needed in developing and validating specific PE tools for clinicians such as podiatrists and other allied health professionals involved in the management of pathological lower limb joints in children diagnosed with JIA. These lower limb PE tools may be useful in conjunction with existing disease activity scores to optimise screening of the lower extremity and monitoring the efficacy of targeted interventions.
Field-friendly techniques for assessment of biomarkers of nutrition for development
Garrett, Dean A; Sangha, Jasbir K; Kothari, Monica T; Boyle, David
2011-01-01
Whereas cost-effective interventions exist for the control of micronutrient malnutrition (MN), in low-resource settings field-friendly tools to assess the effect of these interventions are underutilized or not readily available where they are most needed. Conventional approaches for MN measurement are expensive and require relatively sophisticated laboratory instrumentation, skilled technicians, good infrastructure, and reliable sources of clean water and electricity. Consequently, there is a need to develop and introduce innovative tools that are appropriate for MN assessment in low-resource settings. These diagnostics should be cost-effective, simple to perform, robust, accurate, and capable of being performed with basic laboratory equipment. Currently, such technologies either do not exist or have been applied to the assessment of a few micronutrients. In the Demographic and Health Surveys (DHS), a few such examples for which “biomarkers” of nutrition development have been assessed in low-resource settings using field-friendly approaches are hemoglobin (anemia), retinol-binding protein (vitamin A), and iron (transferrin receptor). In all of these examples, samples were collected mainly by nonmedical staff and analyses were conducted in the survey country by technicians from the local health or research facilities. This article provides information on how the DHS has been able to successfully adapt field-friendly techniques in challenging environments in population-based surveys for the assessment of micronutrient deficiencies. Special emphasis is placed on sample collection, processing, and testing in relation to the availability of local technology, resources, and capacity. PMID:21677055
Frailty in trauma: A systematic review of the surgical literature for clinical assessment tools.
McDonald, Victoria S; Thompson, Kimberly A; Lewis, Paul R; Sise, C Beth; Sise, Michael J; Shackford, Steven R
2016-05-01
Elderly trauma patients have outcomes worse than those of similarly injured younger patients. Although patient age and comorbidities explain some of the difference, the contribution of frailty to outcomes is largely unknown because of the lack of assessment tools developed specifically to assess frailty in the trauma population. This systematic review of the surgical literature identifies currently available frailty clinical assessment tools and evaluates the potential of each instrument to assess frailty in elderly patients with trauma. This review was registered with PROSPERO (the international prospective register of systematic reviews, registration number CRD42014015350). Publications in English from January 1995 to October 2014 were identified by a comprehensive search strategy in MEDLINE, EMBASE, and CINAHL, supplemented by manual screening of article bibliographies and subjected to three tiers of review. Forty-two studies reporting on frailty assessment tools were selected for analysis. Criteria for objectivity, feasibility in the trauma setting, and utility to predict trauma outcomes were formulated and used to evaluate the tools, including their subscales and individual items. Thirty-two unique frailty assessment tools were identified. Of those, 4 tools as a whole, 2 subscales, and 29 individual items qualified as objective, feasible, and useful in the clinical assessment of trauma patients. The single existing tool developed specifically to assess frailty in trauma did not meet evaluation criteria. Few frailty assessment tools in the surgical literature qualify as objective, feasible, and useful measures of frailty in the trauma population. However, a number of individual tool items and subscales could be combined to assess frailty in the trauma setting. Research to determine the accuracy of these measures and the magnitude of the contribution of frailty to trauma outcomes is needed. Systematic review, level III.
Visualising biological data: a semantic approach to tool and database integration
Pettifer, Steve; Thorne, David; McDermott, Philip; Marsh, James; Villéger, Alice; Kell, Douglas B; Attwood, Teresa K
2009-01-01
Motivation In the biological sciences, the need to analyse vast amounts of information has become commonplace. Such large-scale analyses often involve drawing together data from a variety of different databases, held remotely on the internet or locally on in-house servers. Supporting these tasks are ad hoc collections of data-manipulation tools, scripting languages and visualisation software, which are often combined in arcane ways to create cumbersome systems that have been customised for a particular purpose, and are consequently not readily adaptable to other uses. For many day-to-day bioinformatics tasks, the sizes of current databases, and the scale of the analyses necessary, now demand increasing levels of automation; nevertheless, the unique experience and intuition of human researchers is still required to interpret the end results in any meaningful biological way. Putting humans in the loop requires tools to support real-time interaction with these vast and complex data-sets. Numerous tools do exist for this purpose, but many do not have optimal interfaces, most are effectively isolated from other tools and databases owing to incompatible data formats, and many have limited real-time performance when applied to realistically large data-sets: much of the user's cognitive capacity is therefore focused on controlling the software and manipulating esoteric file formats rather than on performing the research. Methods To confront these issues, harnessing expertise in human-computer interaction (HCI), high-performance rendering and distributed systems, and guided by bioinformaticians and end-user biologists, we are building reusable software components that, together, create a toolkit that is both architecturally sound from a computing point of view, and addresses both user and developer requirements. Key to the system's usability is its direct exploitation of semantics, which, crucially, gives individual components knowledge of their own functionality and allows them to interoperate seamlessly, removing many of the existing barriers and bottlenecks from standard bioinformatics tasks. Results The toolkit, named Utopia, is freely available from . PMID:19534744
Prioritizing biological pathways by recognizing context in time-series gene expression data.
Lee, Jusang; Jo, Kyuri; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2016-12-23
The primary goal of pathway analysis using transcriptome data is to find significantly perturbed pathways. However, pathway analysis is not always successful in identifying pathways that are truly relevant to the context under study. A major reason for this difficulty is that a single gene is involved in multiple pathways. In the KEGG pathway database, there are 146 genes, each of which is involved in more than 20 pathways. Thus activation of even a single gene will result in activation of many pathways. This complex relationship often makes the pathway analysis very difficult. While we need much more powerful pathway analysis methods, a readily available alternative way is to incorporate the literature information. In this study, we propose a novel approach for prioritizing pathways by combining results from both pathway analysis tools and literature information. The basic idea is as follows. Whenever there are enough articles that provide evidence on which pathways are relevant to the context, we can be assured that the pathways are indeed related to the context, which is termed as relevance in this paper. However, if there are few or no articles reported, then we should rely on the results from the pathway analysis tools, which is termed as significance in this paper. We realized this concept as an algorithm by introducing Context Score and Impact Score and then combining the two into a single score. Our method ranked truly relevant pathways significantly higher than existing pathway analysis tools in experiments with two data sets. Our novel framework was implemented as ContextTRAP by utilizing two existing tools, TRAP and BEST. ContextTRAP will be a useful tool for the pathway based analysis of gene expression data since the user can specify the context of the biological experiment in a set of keywords. The web version of ContextTRAP is available at http://biohealth.snu.ac.kr/software/contextTRAP .
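The prioritization idea described above (rely on literature-derived relevance when enough articles exist, and fall back to analysis-derived significance otherwise) can be illustrated with a small sketch. The threshold, score names and fallback rule below are assumptions for illustration; the actual ContextTRAP scoring formula may differ.

```python
def priority_score(article_count, context_score, impact_score, min_articles=5):
    """Illustrative prioritization: trust literature evidence (Context Score) when
    enough supporting articles exist, otherwise fall back to the pathway-analysis
    result (Impact Score). The real ContextTRAP combination may differ."""
    return context_score if article_count >= min_articles else impact_score

# Toy example (invented numbers): a well-documented, context-relevant pathway
# outranks a pathway that is only statistically significant.
pathways = [
    ("MAPK signaling", 12, 0.95, 0.60),  # name, articles, context, impact
    ("Ribosome",        1, 0.10, 0.80),
]
ranked = sorted(pathways, key=lambda p: priority_score(p[1], p[2], p[3]), reverse=True)
print([name for name, *_ in ranked])
```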
Pika: A snow science simulation tool built using the open-source framework MOOSE
NASA Astrophysics Data System (ADS)
Slaughter, A.; Johnson, M.
2017-12-01
The Department of Energy (DOE) is currently investing millions of dollars annually into various modeling and simulation tools for all aspects of nuclear energy. An important part of this effort includes developing applications based on the open-source Multiphysics Object Oriented Simulation Environment (MOOSE; mooseframework.org) from Idaho National Laboratory (INL). Thanks to the efforts of the DOE and outside collaborators, MOOSE currently contains a large set of physics modules, including phase-field, level set, heat conduction, tensor mechanics, Navier-Stokes, fracture and crack propagation (via the extended finite-element method), flow in porous media, and others. The heat conduction, tensor mechanics, and phase-field modules, in particular, are well-suited for snow science problems. Pika--an open-source MOOSE-based application--is capable of simulating both 3D, coupled nonlinear continuum heat transfer and large-deformation mechanics applications (such as settlement) and phase-field based micro-structure applications. Additionally, these types of problems may be coupled tightly in a single solve or across length and time scales using a loosely coupled Picard iteration approach. In addition to the wide range of physics capabilities, MOOSE-based applications also inherit an extensible testing framework, graphical user interface, and documentation system: tools that allow MOOSE and other applications to adhere to nuclear software quality standards. The snow science community can learn from the nuclear industry and harness the existing effort to build simulation tools that are open, modular, and share a common framework. In particular, MOOSE-based multiphysics solvers are inherently parallel, dimension agnostic, adaptive in time and space, fully coupled, and capable of interacting with other applications. The snow science community should build on existing tools to enable collaboration between researchers and practitioners throughout the world, and advance the state-of-the-art in line with other scientific research efforts.
Visualising biological data: a semantic approach to tool and database integration.
Pettifer, Steve; Thorne, David; McDermott, Philip; Marsh, James; Villéger, Alice; Kell, Douglas B; Attwood, Teresa K
2009-06-16
In the biological sciences, the need to analyse vast amounts of information has become commonplace. Such large-scale analyses often involve drawing together data from a variety of different databases, held remotely on the internet or locally on in-house servers. Supporting these tasks are ad hoc collections of data-manipulation tools, scripting languages and visualisation software, which are often combined in arcane ways to create cumbersome systems that have been customized for a particular purpose, and are consequently not readily adaptable to other uses. For many day-to-day bioinformatics tasks, the sizes of current databases, and the scale of the analyses necessary, now demand increasing levels of automation; nevertheless, the unique experience and intuition of human researchers is still required to interpret the end results in any meaningful biological way. Putting humans in the loop requires tools to support real-time interaction with these vast and complex data-sets. Numerous tools do exist for this purpose, but many do not have optimal interfaces, most are effectively isolated from other tools and databases owing to incompatible data formats, and many have limited real-time performance when applied to realistically large data-sets: much of the user's cognitive capacity is therefore focused on controlling the software and manipulating esoteric file formats rather than on performing the research. To confront these issues, harnessing expertise in human-computer interaction (HCI), high-performance rendering and distributed systems, and guided by bioinformaticians and end-user biologists, we are building reusable software components that, together, create a toolkit that is both architecturally sound from a computing point of view, and addresses both user and developer requirements. Key to the system's usability is its direct exploitation of semantics, which, crucially, gives individual components knowledge of their own functionality and allows them to interoperate seamlessly, removing many of the existing barriers and bottlenecks from standard bioinformatics tasks. The toolkit, named Utopia, is freely available from http://utopia.cs.man.ac.uk/.
Progress on the Multiphysics Capabilities of the Parallel Electromagnetic ACE3P Simulation Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kononenko, Oleksiy
2015-03-26
ACE3P is a 3D parallel simulation suite that is being developed at SLAC National Accelerator Laboratory. Effectively utilizing supercomputer resources, ACE3P has become a key tool for the coupled electromagnetic, thermal and mechanical research and design of particle accelerators. Based on the existing finite-element infrastructure, a massively parallel eigensolver is developed for modal analysis of mechanical structures. It complements a set of the multiphysics tools in ACE3P and, in particular, can be used for the comprehensive study of microphonics in accelerating cavities ensuring the operational reliability of a particle accelerator.
The center for causal discovery of biomedical knowledge from big data
Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-01-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. PMID:26138794
Faust, Kyle; Faust, David
2015-08-12
Problematic or addictive digital gaming (including use of all types of electronic devices) can have, and has had, extremely adverse impacts on the lives of many individuals across the world. The understanding of this phenomenon, and the effectiveness of treatment design and monitoring, can be improved considerably by continuing refinement of assessment tools. The present article briefly overviews tools designed to measure problematic or addictive use of digital gaming, the vast majority of which are founded on the Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria for other addictive disorders, such as pathological gambling. Although adapting DSM content and strategies for measuring problematic digital gaming has proven valuable, there are some potential issues with this approach. We discuss the strengths and limitations of current methods for measuring problematic or addictive gaming and provide various recommendations that might help in enhancing or supplementing existing tools, or in developing new and even more effective tools.
NASA Technical Reports Server (NTRS)
Banks, David C.
1994-01-01
This talk features two simple and useful tools for digital image processing in the UNIX environment: xv and pbmplus. The xv image viewer, which runs under the X window system, reads images in a number of different file formats and writes them out in different formats. The view area supports a pop-up control panel, and the 'algorithms' menu lets you blur an image. The xv control panel also activates the color editor, which displays the image's color map (if one exists). The xv image viewer is available through the internet. The pbmplus package is a set of tools designed to perform image processing from within a UNIX shell; the acronym 'pbm' stands for portable bit map. Like xv, the pbmplus tools can convert images from and to many different file formats. The source code and manual pages for pbmplus are also available through the internet. This software is in the public domain.
Faust, Kyle; Faust, David
2015-01-01
Problematic or addictive digital gaming (including use of all types of electronic devices) can have, and has had, extremely adverse impacts on the lives of many individuals across the world. The understanding of this phenomenon, and the effectiveness of treatment design and monitoring, can be improved considerably by continuing refinement of assessment tools. The present article briefly overviews tools designed to measure problematic or addictive use of digital gaming, the vast majority of which are founded on the Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria for other addictive disorders, such as pathological gambling. Although adapting DSM content and strategies for measuring problematic digital gaming has proven valuable, there are some potential issues with this approach. We discuss the strengths and limitations of current methods for measuring problematic or addictive gaming and provide various recommendations that might help in enhancing or supplementing existing tools, or in developing new and even more effective tools. PMID:26274977
Roback, M G; Green, S M; Andolfatto, G; Leroy, P L; Mason, K P
2018-01-01
Many hospitals and medical and dental clinics and offices routinely monitor their procedural-sedation practices, tracking adverse events, outcomes, and efficacy in order to optimize sedation delivery and practice. Currently, there are substantial differences between settings in the content, collection, definition, and interpretation of such sedation outcomes, with resulting widespread variation in reporting. With the objective of reducing such disparities, the International Committee for the Advancement of Procedural Sedation has herein developed a multidisciplinary, consensus-based, standardized tool intended to be applicable to all types of sedation providers in all locations worldwide. This tool is suitable for inclusion in either a paper or an electronic medical record. An additional, parallel research tool is presented to promote consistency and standardized data collection for procedural-sedation investigations. Copyright © 2017. Published by Elsevier Ltd.
Exploring Assessment Tools for Research and Evaluation in Astronomy Education and Outreach
NASA Astrophysics Data System (ADS)
Buxner, S. R.; Wenger, M. C.; Dokter, E. F. C.
2011-09-01
The ability to effectively measure knowledge, attitudes, and skills in formal and informal educational settings is an important aspect of astronomy education research and evaluation. Assessments may take the form of interviews, observations, surveys, exams, or other probes to help unpack people's understandings or beliefs. In this workshop, we discussed characteristics of a variety of tools that exist to assess understandings of different concepts in astronomy as well as attitudes towards science and science teaching; these include concept inventories, surveys, interview protocols, observation protocols, card sorting, reflection videos, and other methods currently being used in astronomy education research and EPO program evaluations. In addition, we discussed common questions in the selection of assessment tools including issues of reliability and validity, time to administer, format of implementation, analysis, and human subject concerns.
NASA Astrophysics Data System (ADS)
Mitchell, S. E.; Barbier, S. B.; Krishnamurthi, A.; Lochner, J. C.
2008-06-01
Many education and outreach programs face two daunting shortages: time and money. EPO professionals are frequently challenged to develop quality efforts for a variety of audiences and settings, all on a shoestring budget. How do you create a broad and cohesive education and outreach portfolio with limited resources? In this session, we discussed several effective strategies to make the most of your assets, such as adaptation of existing programs and materials, mutually beneficial partnerships, and innovative (and inexpensive) dissemination techniques. These approaches can fill in the gaps in your portfolio, increasing the scope and impact of your EPO efforts. There are a variety of cost-effective tools and techniques that can bring your EPO endeavors to a wide range of audiences and settings. Turn your program's EPO wish list into reality through savvy leveraging of existing personnel, funding, and materials... or find a partner that can help you fill any gaps in your portfolio.
A novel algorithm for simplification of complex gene classifiers in cancer
Wilson, Raphael A.; Teng, Ling; Bachmeyer, Karen M.; Bissonnette, Mei Lin Z.; Husain, Aliya N.; Parham, David M.; Triche, Timothy J.; Wing, Michele R.; Gastier-Foster, Julie M.; Barr, Frederic G.; Hawkins, Douglas S.; Anderson, James R.; Skapek, Stephen X.; Volchenboum, Samuel L.
2013-01-01
The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing fifty-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct data sets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to demonstrate the generalizability of our algorithm, we applied it to a lung cancer data set to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use. PMID:23913937
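The exhaustive search over small gene combinations can be sketched directly: score every candidate pair with a simple cross-validated linear classifier and keep the best. This is an illustration of the general approach under assumed choices (LDA, 5-fold accuracy), not the authors' exact algorithm or discrimination metric.

```python
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def best_gene_pairs(X, y, gene_names, top_k=3):
    """Score every gene pair by cross-validated accuracy of a simple linear
    classifier and return the top pairs (illustrative, not the authors' code)."""
    results = []
    for i, j in combinations(range(X.shape[1]), 2):
        acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()
        results.append((acc, gene_names[i], gene_names[j]))
    return sorted(results, reverse=True)[:top_k]

# Toy data: 60 samples, 10 "genes", two classes separated by genes 0 and 1.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 30)
X = rng.standard_normal((60, 10))
X[:, 0] += 2 * y
X[:, 1] -= 2 * y
print(best_gene_pairs(X, y, [f"gene{i}" for i in range(10)]))
```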
Tailoring a software production environment for a large project
NASA Technical Reports Server (NTRS)
Levine, D. R.
1984-01-01
A software production environment was constructed to meet the specific goals of a particular large programming project. These goals, the specific solutions as implemented, and the experiences on a project of over 100,000 lines of source code are discussed. The base development environment for this project was an ordinary PWB Unix (tm) system. Several important aspects of the development process required support not available in the existing tool set.
ERIC Educational Resources Information Center
Hsiao, Hsien-Sheng; Chen, Jyun-Chen; Hong, Kunde
2016-01-01
Technical and vocational education emphasizes the development and training of hand motor skills. However, some problems exist in the current career and aptitude tests in that they do not truly measure the hand motor skills. This study used the Nintendo Wii Remote Controller as the testing device in developing a set of computerized testing tools to…
Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.
Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K
2018-06-05
Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.
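A minimal version of the kind of test these tools perform is a Monte Carlo overlap test: compare the observed base-pair overlap of two region-sets against overlaps obtained after randomly relocating the query regions. The sketch below implements that simple "shuffle on a single chromosome" null model; real tools offer more sophisticated null models and colocalization measures.

```python
import numpy as np

def overlap_bp(a, b):
    """Total overlapping base pairs between two lists of (start, end) intervals."""
    return sum(max(0, min(e1, e2) - max(s1, s2)) for s1, e1 in a for s2, e2 in b)

def overlap_permutation_pvalue(query, reference, chrom_length, n_perm=1000, seed=0):
    """Empirical p-value of the observed overlap under random relocation of the
    query intervals along one chromosome (a deliberately simple null model)."""
    rng = np.random.default_rng(seed)
    observed = overlap_bp(query, reference)
    hits = 0
    for _ in range(n_perm):
        shuffled = []
        for s, e in query:
            length = e - s
            start = int(rng.integers(0, chrom_length - length))
            shuffled.append((start, start + length))
        if overlap_bp(shuffled, reference) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy example with two small region-sets on a 1 Mb chromosome.
query = [(1000, 2000), (500000, 501000)]
reference = [(1500, 2500), (800000, 801000)]
print(overlap_permutation_pvalue(query, reference, chrom_length=1_000_000))
```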
Balhara, Kamna S; Peterson, Susan M; Elabd, Mohamed Moheb; Regan, Linda; Anton, Xavier; Al-Natour, Basil Ali; Hsieh, Yu-Hsiang; Scheulen, James; Stewart de Ramirez, Sarah A
2018-04-01
Standardized handoffs may reduce communication errors, but research on handoff in community and international settings is lacking. Our study at a community hospital in the United Arab Emirates characterizes existing handoff practices for admitted patients from emergency medicine (EM) to internal medicine (IM), develops a standardized handoff tool, and assesses its impact on communication and physician perceptions. EM physicians completed a survey regarding handoff practices and expectations. Trained observers utilized a checklist based on the Systems Engineering Initiative for Patient Safety model to observe 40 handoffs. EM and IM physicians collaboratively developed a written tool encouraging bedside handoff of admitted patients. After the intervention, surveys of EM physicians and 40 observations were subsequently repeated. 77.5% of initial observed handoffs occurred face-to-face, with 42.5% at bedside, and in four different languages. Most survey respondents considered face-to-face handoff ideal. Respondents noted 9-13 patients suffering harm due to handoff in the prior month. After handoff tool implementation, 97.5% of observed handoffs occurred face-to-face (versus 77.5%, p = 0.014), with 82.5% at bedside (versus 42.5%, p < 0.001), and all in English. Handoff was streamlined from 7 possible pathways to 3. Most post-intervention survey respondents reported improved workflow (77.8%) and safety (83.3%); none reported patient harm. Respondents and observers noted reduced inefficiency (p < 0.05). Our standardized tool increased face-to-face and bedside handoff, positively impacted workflow, and increased perceptions of safety by EM physicians in an international, non-academic setting. Our three-step approach can be applied towards developing standardized, context-specific inter-specialty handoff in a variety of settings.
Introducing GHOST: The Geospace/Heliosphere Observation & Simulation Tool-kit
NASA Astrophysics Data System (ADS)
Murphy, J. J.; Elkington, S. R.; Schmitt, P.; Wiltberger, M. J.; Baker, D. N.
2013-12-01
Simulation models of the heliospheric and geospace environments can provide key insights into the geoeffective potential of solar disturbances such as Coronal Mass Ejections and High Speed Solar Wind Streams. Advanced post processing of the results of these simulations greatly enhances the utility of these models for scientists and other researchers. Currently, no supported centralized tool exists for performing these processing tasks. With GHOST, we introduce a toolkit for the ParaView visualization environment that provides a centralized suite of tools suited for Space Physics post processing. Building on the work from the Center For Integrated Space Weather Modeling (CISM) Knowledge Transfer group, GHOST is an open-source tool suite for ParaView. The tool-kit plugin currently provides tools for reading LFM and Enlil data sets, and provides automated tools for data comparison with NASA's CDAweb database. As work progresses, many additional tools will be added and through open-source collaboration, we hope to add readers for additional model types, as well as any additional tools deemed necessary by the scientific public. The ultimate end goal of this work is to provide a complete Sun-to-Earth model analysis toolset.
Design and Analysis Tools for Supersonic Inlets
NASA Technical Reports Server (NTRS)
Slater, John W.; Folk, Thomas C.
2009-01-01
Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code that implements the model and methods. Other computational platforms, such as Java, will also be explored.
Screening and assessment tools for pediatric malnutrition.
Huysentruyt, Koen; Vandenplas, Yvan; De Schepper, Jean
2016-06-18
The ideal measures for screening and assessing undernutrition in children remain a point of discussion in literature. This review aims to provide an overview of recent advances in the nutritional screening and assessment methods in children. This review focuses on two major topics that emerged in literature since 2015: the practical endorsement of the new definition for pediatric undernutrition, with a focus on anthropometric measurements and the search for a consensus on pediatric nutritional screening tools in different settings. Few analytical tools exist for the assessment of the nutritional status in children. The subjective global nutritional assessment has been validated by anthropometric as well as clinical outcome parameters. Nutritional screening can help in selecting patients that benefit the most from a full nutritional assessment. Two new screening tools have been developed for use in a general (mixed) hospital population, and one for a population of children with cancer. The value of screening tools in different disease-specific and outpatient pediatric populations remains to be proven.
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique and is proposed here for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, so that the confidence interval is greatly compressed and smoothed, which is conducive to monitoring the tool wear accurately. Moreover, the selection of the kernel parameter in KPCA_IRBF can be carried out easily over a much larger selectable region than with the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
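For readers unfamiliar with how GPR yields both a prediction and a confidence interval, the sketch below fits a Gaussian process with an explicit noise term to toy data standing in for fused wear features. The kernel choice and the synthetic data are assumptions for illustration, not the paper's KPCA_IRBF pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-ins for fused features and measured flank wear width;
# real signals would come from the cutting tests.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 3))
y = 0.3 * X[:, 0] + 0.1 * np.sin(6 * X[:, 1]) + 0.02 * rng.standard_normal(40)

# The WhiteKernel term models the Gaussian measurement noise explicitly,
# which is what gives GPR its calibrated confidence interval.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.uniform(0, 1, size=(5, 3))
mean, std = gpr.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # ~95% confidence interval
print(np.c_[mean, lower, upper])
```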
Aryanto, Kadek Y E; Broekema, André; Oudkerk, Matthijs; van Ooijen, Peter M A
2012-01-01
To present an adapted Clinical Trial Processor (CTP) test set-up for receiving, anonymising and saving Digital Imaging and Communications in Medicine (DICOM) data, using external input from the original database of an existing clinical study information system to guide the anonymisation process. Two methods are presented for the adapted CTP test set-up. In the first method, images are pushed from the Picture Archiving and Communication System (PACS) using the DICOM protocol through a local network. In the second method, images are transferred over the internet using the HTTPS protocol. In total, 25,000 images from 50 patients were moved from the PACS, anonymised and stored within roughly 2 h using the first method. With the second method, images were transferred and processed at an average rate of 10 per minute over a residential connection. In both methods, no duplicated images were stored when previous images were retransferred, and the anonymised images were stored in the appropriate directories. The CTP can transfer and process DICOM images correctly in a very simple set-up, providing a fast, secure and stable environment. The adapted CTP allows easy integration into an environment in which patient data are already included in an existing information system.
Multi-criteria development and incorporation into decision tools for health technology adoption.
Poulin, Paule; Austen, Lea; Scott, Catherine M; Waddell, Cameron D; Dixon, Elijah; Poulin, Michelle; Lafrenière, René
2013-01-01
When introducing new health technologies, decision makers must integrate research evidence with local operational management information to guide decisions about whether and under what conditions the technology will be used. Multi-criteria decision analysis can support the adoption or prioritization of health interventions by using criteria to explicitly articulate the health organization's needs, limitations, and values in addition to evaluating evidence for safety and effectiveness. This paper seeks to describe the development of a framework to create agreed-upon criteria and decision tools to enhance a pre-existing local health technology assessment (HTA) decision support program. The authors compiled a list of published criteria from the literature, consulted with experts to refine the criteria list, and used a modified Delphi process with a group of key stakeholders to review, modify, and validate each criterion. In a workshop setting, the criteria were used to create decision tools. A set of user-validated criteria for new health technology evaluation and adoption was developed and integrated into the local HTA decision support program. Technology evaluation and decision guideline tools were created using these criteria to ensure that the decision process is systematic, consistent, and transparent. This framework can be used by others to develop decision-making criteria and tools to enhance similar technology adoption programs. The development of clear, user-validated criteria for evaluating new technologies adds a critical element to improve decision-making on technology adoption, and the decision tools ensure consistency, transparency, and real-world relevance.
Lazaris, Charalampos; Kelly, Stephen; Ntziachristos, Panagiotis; Aifantis, Iannis; Tsirigos, Aristotelis
2017-01-05
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive, involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most existing tools do not cover all aspects of the analysis and offer only a few quality assessment options. Additionally, the availability of a multitude of tools leaves scientists wondering how these tools and associated parameters can be used optimally, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be assured that slight changes in parameters and/or methods do not affect the conclusions of their studies. To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, and scoring and annotation of specific interactions, using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers, exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available. HiC-bench is an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
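The combinatorial parameter-exploration idea can be illustrated with a short sketch; the task and parameter names below are hypothetical and do not reflect HiC-bench's actual configuration keys.

```python
# Illustrative sketch of combinatorial parameter exploration, in the spirit of
# the design described above (task and parameter names here are hypothetical).
from itertools import product

param_grid = {
    "matrix_correction": ["naive", "iterative", "ca_norm"],
    "bin_size_kb": [40, 100],
    "tad_caller": ["caller_a", "caller_b"],
}

def run_pipeline(matrix_correction, bin_size_kb, tad_caller):
    """Placeholder for one pipeline branch; a real branch would align, filter,
    bin, correct and call TADs, returning a quality metric for this combination."""
    return hash((matrix_correction, bin_size_kb, tad_caller)) % 100 / 100.0

results = {}
for combo in product(*param_grid.values()):
    settings = dict(zip(param_grid.keys(), combo))
    results[combo] = run_pipeline(**settings)

best = max(results, key=results.get)
print("best-scoring branch:", dict(zip(param_grid.keys(), best)))
```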
Combining item response theory with multiple imputation to equate health assessment questionnaires.
Gu, Chenyang; Gutman, Roee
2017-09-01
The assessment of patients' functional status across the continuum of care requires a common patient assessment tool. However, assessment tools that are used in various health care settings differ and cannot be easily contrasted. For example, the Functional Independence Measure (FIM) is used to evaluate the functional status of patients who stay in inpatient rehabilitation facilities, the Minimum Data Set (MDS) is collected for all patients who stay in skilled nursing facilities, and the Outcome and Assessment Information Set (OASIS) is collected if they choose home health care provided by home health agencies. All three instruments or questionnaires include functional status items, but the specific items, rating scales, and instructions for scoring different activities vary between the different settings. We consider equating different health assessment questionnaires as a missing data problem, and propose a variant of predictive mean matching method that relies on Item Response Theory (IRT) models to impute unmeasured item responses. Using real data sets, we simulated missing measurements and compared our proposed approach to existing methods for missing data imputation. We show that, for all of the estimands considered, and in most of the experimental conditions that were examined, the proposed approach provides valid inferences, and generally has better coverages, relatively smaller biases, and shorter interval estimates. The proposed method is further illustrated using a real data set. © 2016, The International Biometric Society.
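A minimal sketch of predictive mean matching, with a plain linear regression standing in for the IRT-based predictive model used in the paper; data and dimensions are illustrative.

```python
# Minimal predictive-mean-matching sketch. The paper builds predictive means
# from IRT models; a plain linear regression stands in for them here.
import numpy as np

def pmm_impute(X, y, missing_mask, k=5, rng=np.random.default_rng(0)):
    """Impute y[missing_mask] by donor matching on predicted means."""
    obs = ~missing_mask
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)   # fit on observed cases
    yhat = X @ beta                                          # predicted means for everyone
    y_imp = y.copy()
    for i in np.where(missing_mask)[0]:
        # k observed cases whose predicted mean is closest to case i's
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
        y_imp[i] = rng.choice(y[obs][donors])                # borrow an observed value
    return y_imp

X = np.column_stack([np.ones(200), np.random.default_rng(1).normal(size=(200, 3))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + 0.1 * np.random.default_rng(2).normal(size=200)
mask = np.zeros(200, dtype=bool); mask[::7] = True
print(pmm_impute(X, y, mask)[:10])
```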
Quantifying multiple telecouplings using an integrated suite of spatially-explicit tools
NASA Astrophysics Data System (ADS)
Tonini, F.; Liu, J.
2016-12-01
Telecoupling is an interdisciplinary research umbrella concept that enables natural and social scientists to understand and generate information for managing how humans and nature can sustainably coexist worldwide. To systematically study telecoupling, it is essential to build a comprehensive set of spatially-explicit tools for describing and quantifying multiple reciprocal socioeconomic and environmental interactions between a focal area and other areas. Here we introduce the Telecoupling Toolbox, a new free and open-source set of tools developed to map and identify the five major interrelated components of the telecoupling framework: systems, flows, agents, causes, and effects. The modular design of the toolbox allows the integration of existing tools and software (e.g. InVEST) to assess synergies and tradeoffs associated with policies and other local to global interventions. We show applications of the toolbox using a number of representative studies that address a variety of scientific and management issues related to telecouplings throughout the world. The results suggest that the toolbox can thoroughly map and quantify multiple telecouplings under various contexts while providing users with an easy-to-use interface. It provides a powerful platform to address globally important issues, such as land use and land cover change, species invasion, migration, flows of ecosystem services, and international trade of goods and products.
Mabey, David C.; Chaudhri, Simran; Brown Epstein, Helen-Ann; Lawn, Stephen D.
2017-01-01
Abstract Primary health care workers (HCWs) in low- and middle-income settings (LMIC) often work in challenging conditions in remote, rural areas, in isolation from the rest of the health system and particularly specialist care. Much attention has been given to implementation of interventions to support quality and performance improvement for workers in such settings. However, little is known about the design of such initiatives and which approaches predominate, let alone those that are most effective. We aimed for a broad understanding of what distinguishes different approaches to primary HCW support and performance improvement and to clarify the existing evidence as well as gaps in evidence in order to inform decision-making and design of programs intended to support and improve the performance of health workers in these settings. We systematically searched the literature for articles addressing this topic, and undertook a comparative review to document the principal approaches to performance and quality improvement for primary HCWs in LMIC settings. We identified 40 eligible papers reporting on interventions that we categorized into five different approaches: (1) supervision and supportive supervision; (2) mentoring; (3) tools and aids; (4) quality improvement methods, and (5) coaching. The variety of study designs and quality/performance indicators precluded a formal quantitative data synthesis. The most extensive literature was on supervision, but there was little clarity on what defines the most effective approach to the supervision activities themselves, let alone the design and implementation of supervision programs. The mentoring literature was limited, and largely focused on clinical skills building and educational strategies. Further research on how best to incorporate mentorship into pre-service clinical training, while maintaining its function within the routine health system, is needed. There is insufficient evidence to draw conclusions about coaching in this setting, however a review of the corporate and the business school literature is warranted to identify transferrable approaches. A substantial literature exists on tools, but significant variation in approaches makes comparison challenging. We found examples of effective individual projects and designs in specific settings, but there was a lack of comparative research on tools across approaches or across settings, and no systematic analysis within specific approaches to provide evidence with clear generalizability. Future research should prioritize comparative intervention trials to establish clear global standards for performance and quality improvement initiatives. Such standards will be critical to creating and sustaining a well-functioning health workforce and for global initiatives such as universal health coverage. PMID:27993961
Hu, Jialu; Kehr, Birte; Reinert, Knut
2014-02-15
Owing to recent advances in high-throughput technologies, protein-protein interaction networks of more and more species are becoming available in public databases. How to identify functionally conserved proteins across species attracts a lot of attention in computational biology, and network alignments provide a systematic way to address this problem. However, most existing alignment tools encounter limitations in tackling it, so the demand for faster and more efficient alignment tools is growing. We present a fast and accurate algorithm, NetCoffee, which finds a global alignment of multiple protein-protein interaction networks. NetCoffee searches for a global alignment by maximizing a target function using simulated annealing on a set of weighted bipartite graphs that are constructed using a triplet approach similar to T-Coffee. To assess its performance, NetCoffee was applied to four real datasets. Our results suggest that NetCoffee remedies several limitations of previous algorithms, outperforms all existing alignment tools in terms of speed and nevertheless identifies biologically meaningful alignments. The source code and data are freely available for download under the GNU GPL v3 license at https://code.google.com/p/netcoffee/.
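A generic simulated-annealing sketch of the kind of score maximization described above; it is not the NetCoffee implementation, and the toy scoring function stands in for the weighted-bipartite-graph target function.

```python
# Generic simulated-annealing sketch for alignment scoring (not NetCoffee;
# the scoring function here is a toy placeholder).
import math
import random

random.seed(0)
nodes_a = list(range(30))
nodes_b = list(range(30))

def score(mapping):
    # Toy objective: reward mapping node i to node i (stands in for the
    # target function built from sequence and topology weights).
    return sum(1.0 for a, b in mapping.items() if a == b)

mapping = dict(zip(nodes_a, random.sample(nodes_b, len(nodes_b))))
current = score(mapping)
T = 5.0
while T > 0.01:
    a1, a2 = random.sample(nodes_a, 2)
    mapping[a1], mapping[a2] = mapping[a2], mapping[a1]      # propose a swap
    new = score(mapping)
    if new < current and random.random() > math.exp((new - current) / T):
        mapping[a1], mapping[a2] = mapping[a2], mapping[a1]  # reject: swap back
    else:
        current = new
    T *= 0.995                                               # cool down
print("final alignment score:", current)
```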
CATCh, an Ensemble Classifier for Chimera Detection in 16S rRNA Sequencing Studies
Mysara, Mohamed; Saeys, Yvan; Leys, Natalie; Raes, Jeroen
2014-01-01
In ecological studies, microbial diversity is nowadays mostly assessed via the detection of phylogenetic marker genes, such as 16S rRNA. However, PCR amplification of these marker genes produces a significant amount of artificial sequences, often referred to as chimeras. Different algorithms have been developed to remove these chimeras, but efforts to combine different methodologies are limited. Therefore, two machine learning classifiers (reference-based and de novo CATCh) were developed by integrating the output of existing chimera detection tools into a new, more powerful method. When comparing our classifiers with existing tools in either the reference-based or de novo mode, a higher performance of our ensemble method was observed on a wide range of sequencing data, including simulated, 454 pyrosequencing, and Illumina MiSeq data sets. Since our algorithm combines the advantages of different individual chimera detection tools, our approach produces more robust results when challenged with chimeric sequences having a low parent divergence, short length of the chimeric range, and various numbers of parents. Additionally, it could be shown that integrating CATCh in the preprocessing pipeline has a beneficial effect on the quality of the clustering in operational taxonomic units. PMID:25527546
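A hedged sketch of the ensemble idea: per-read scores from individual chimera detectors are combined into a single classifier. The tool scores and labels below are synthetic, and logistic regression stands in for the classifiers actually trained in CATCh.

```python
# Hedged sketch of an ensemble chimera classifier: combine the per-read outputs
# of individual detection tools into one model (synthetic data, hypothetical tools).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Columns stand in for the per-read scores of three individual chimera detectors.
tool_scores = rng.normal(size=(n, 3))
is_chimera = (tool_scores.sum(axis=1) + 0.5 * rng.normal(size=n)) > 0

ensemble = LogisticRegression().fit(tool_scores, is_chimera)
new_reads = rng.normal(size=(5, 3))
print("chimera probabilities:", ensemble.predict_proba(new_reads)[:, 1].round(2))
```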
Gender studies and the role of women in physics
NASA Astrophysics Data System (ADS)
Horton, K. Renee; Holbrook, J. C.
2013-03-01
While many physicists care about improving the success of women in physics, research on effective intervention strategies has been meager. The research that does exist focuses largely on the dynamics of under-representation: the factors that discourage women from choosing and remaining committed to the physics community. Rather than focusing on these deficits, this workshop set out to provide tools physicists can use to produce, analyze, and apply evidence about what works for women.
Liver Rapid Reference Set Application: Kevin Qu-Quest (2011) — EDRN Public Portal
We propose to evaluate the performance of a novel serum biomarker panel for early detection of hepatocellular carcinoma (HCC). This panel is based on markers from the ubiquitin-proteasome system (UPS) in combination with the existing known HCC biomarkers, namely alpha-fetoprotein (AFP), AFP-L3%, and des-γ-carboxy prothrombin (DCP). To this end, we applied multivariate logistic regression analysis to optimize this biomarker algorithm tool.
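An illustrative multivariate logistic-regression panel on synthetic data, assuming marker columns for AFP, AFP-L3%, DCP and two hypothetical UPS markers; it does not reproduce the proposed panel or the EDRN reference set.

```python
# Illustrative logistic-regression biomarker panel (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
X = rng.lognormal(size=(n, 5))          # columns: AFP, AFP-L3%, DCP, UPS_1, UPS_2 (hypothetical)
hcc = (np.log(X[:, 0]) + 0.8 * np.log(X[:, 3]) + rng.normal(size=n)) > 1.0

panel = LogisticRegression(max_iter=1000).fit(np.log(X), hcc)
# In-sample AUC, for illustration only; a real evaluation would use a held-out set.
print("panel AUC:", round(roc_auc_score(hcc, panel.predict_proba(np.log(X))[:, 1]), 3))
```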
Ada Programming Support Environment (APSE) Evaluation and Validation (E&V) Team
1991-12-31
standards. The purpose of the team was to assist the project in several ways. Raymond Szymanski of Wright Research and Development Center (WRDC, now...debuggers, program library systems, and compiler diagnostics. The test suite does not include explicit tests for the existence of language features. The...support software is a set of tools and procedures which assist in preparing and executing the test suite, in extracting data from the results of
On the Genealogy of Asexual Diploids
NASA Astrophysics Data System (ADS)
Lam, Fumei; Langley, Charles H.; Song, Yun S.
Given molecular genetic data from diploid individuals that, at present, reproduce mostly or exclusively asexually without recombination, an important problem in evolutionary biology is detecting evidence of past sexual reproduction (i.e., meiosis and mating) and recombination (both meiotic and mitotic). However, currently there is a lack of computational tools for carrying out such a study. In this paper, we formulate a new problem of reconstructing diploid genealogies under the assumption of no sexual reproduction or recombination, with the ultimate goal being to devise genealogy-based tools for testing deviation from these assumptions. We first consider the infinite-sites model of mutation and develop linear-time algorithms to test the existence of an asexual diploid genealogy compatible with the infinite-sites model of mutation, and to construct one if it exists. Then, we relax the infinite-sites assumption and develop an integer linear programming formulation to reconstruct asexual diploid genealogies with the minimum number of homoplasy (back or recurrent mutation) events. We apply our algorithms on simulated data sets with sizes of biological interest.
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR, the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases in less time than is required to compute the SD matrix using centered divided differences.
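ADIFOR itself is a Fortran source transformation tool; the small dual-number class below only illustrates, in Python, how forward-mode automatic differentiation propagates exact derivatives through the chain rule.

```python
# Not ADIFOR: a tiny dual-number illustration of forward-mode AD, where each
# value carries its derivative and the chain rule is applied operation by operation.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx of f(x) = x*sin(x) + 3x at x = 2, seeded with derivative 1
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
print(f.val, f.der)   # exact function value and sensitivity derivative
```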
Droc, Gaëtan; Larivière, Delphine; Guignon, Valentin; Yahiaoui, Nabila; This, Dominique; Garsmeur, Olivier; Dereeper, Alexis; Hamelin, Chantal; Argout, Xavier; Dufayard, Jean-François; Lengelle, Juliette; Baurens, Franc-Christophe; Cenci, Alberto; Pitollat, Bertrand; D’Hont, Angélique; Ruiz, Manuel; Rouard, Mathieu; Bocs, Stéphanie
2013-01-01
Banana is one of the world's favorite fruits and one of the most important crops for developing countries. The banana reference genome sequence (Musa acuminata) was recently released. Given the taxonomic position of Musa, the completed genomic sequence has particular comparative value for providing fresh insights into the evolution of the monocotyledons. The study of the banana genome has been enhanced by a number of tools and resources that allow harnessing of its sequence. First, we set up essential tools such as a Community Annotation System, phylogenomics resources and metabolic pathways. Then, to support post-genomic efforts, we improved existing banana systems (e.g. web front end, query builder), integrated available Musa data into generic systems (e.g. markers and genetic maps, synteny blocks), made other existing systems containing Musa data (e.g. transcriptomics, rice reference genome, workflow manager) interoperable with the banana hub and, finally, generated new results from sequence analyses (e.g. SNP and polymorphism analysis). Several use cases illustrate how the Banana Genome Hub can be used to study gene families. Overall, with this collaborative effort, we discuss the importance of interoperability for data integration across existing information systems. Database URL: http://banana-genome.cirad.fr/ PMID:23707967
The live donor assessment tool: a psychosocial assessment tool for live organ donors.
Iacoviello, Brian M; Shenoy, Akhil; Braoude, Jenna; Jennings, Tiane; Vaidya, Swapna; Brouwer, Julianna; Haydel, Brandy; Arroyo, Hansel; Thakur, Devendra; Leinwand, Joseph; Rudow, Dianne LaPointe
2015-01-01
Psychosocial evaluation is an important part of the live organ donor evaluation process, yet it is not standardized across institutions, and although tools exist for the psychosocial evaluation of organ recipients, none exist to assess donors. We set out to develop a semistructured psychosocial evaluation tool (the Live Donor Assessment Tool, LDAT) to assess potential live organ donors and to conduct preliminary analyses of the tool's reliability and validity. Review of the literature on the psychosocial variables associated with treatment adherence, quality of life, live organ donation outcome, and resilience, as well as review of the procedures for psychosocial evaluation at our center and other centers around the country, identified 9 domains to address; these domains were distilled into several items each, in collaboration with colleagues at transplant centers across the country, for a total of 29 items. Four raters were trained to use the LDAT, and they retrospectively scored 99 psychosocial evaluations conducted on live organ donor candidates. Reliability of the LDAT was assessed by calculating the internal consistency of the items in the scale and interrater reliability between raters; validity was estimated by comparing LDAT scores between those with a "positive" evaluation outcome and "negative" outcome. The LDAT was found to have good internal consistency, inter-rater reliability, and showed signs of validity: LDAT scores differentiated the positive vs. negative outcome groups. The LDAT demonstrated good reliability and validity, but future research on the LDAT and the ability to implement the LDAT prospectively is warranted. Copyright © 2015 The Academy of Psychosomatic Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Burns, K. Lee; Altino, Karen
2008-01-01
The Marshall Space Flight Center Natural Environments Branch has a long history of expertise in the modeling and computation of statistical launch availabilities with respect to weather conditions. Its existing data analysis product, the Atmospheric Parametric Risk Assessment (APRA) tool, computes launch availability given an input set of vehicle hardware and/or operational weather constraints by calculating the climatological probability of exceeding the specified constraint limits. APRA has been used extensively to provide the Space Shuttle program the ability to estimate the impacts that various proposed design modifications would have on overall launch availability. The model accounts for both seasonal and diurnal variability at a single geographic location and provides output probabilities for a single arbitrary launch attempt. Recently, the Shuttle program has shown interest in having additional capabilities added to the APRA model, including analysis of humidity parameters, inclusion of landing site weather to produce landing availability, and concurrent analysis of multiple sites to assist in operational landing site selection. In addition, the Constellation program has also expressed interest in the APRA tool and has requested several additional capabilities to address some Constellation-specific issues, both in the specification and verification of design requirements and in the development of operations concepts. The combined scope of the requested capability enhancements suggests an evolution of the model beyond a simple revision process. Development has begun for a new data analysis tool that will satisfy the requests of both programs. This new tool, Probabilities of Atmospheric Conditions and Environmental Risk (PACER), will provide greater flexibility and significantly enhanced functionality compared to the currently existing tool.
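A hedged sketch of the underlying climatological calculation: the probability of exceeding a weather constraint, estimated by month from an hourly record. Column names and the constraint value are illustrative, not APRA's.

```python
# Hedged sketch of a climatological exceedance calculation (not the APRA tool):
# estimate, by month, how often an hourly weather record exceeds a constraint.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
times = pd.date_range("2000-01-01", periods=24 * 365 * 5, freq="h")
wind = pd.Series(8 + 4 * rng.standard_normal(len(times)), index=times, name="peak_wind_kt")

constraint_kt = 14.0                       # illustrative vehicle constraint
exceed = wind > constraint_kt
availability = 1.0 - exceed.groupby(exceed.index.month).mean()
print(availability.round(3))               # launch availability by month
```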
Mohammad Al Alfy, Ibrahim
2018-01-01
A set of three pads was constructed from primary materials (sand, gravel and cement) to calibrate the gamma-gamma density tool. A simple equation was devised to convert the qualitative cps values to quantitative g/cc values. The neutron-neutron porosity tool measures qualitative cps porosity values, and a direct equation was derived to calculate the porosity percentage from these cps values. The cement-bond log indicates the quantity of cement surrounding the well pipes. Interpreting this log is complicated by several parameters, such as the drilled well diameter and the internal diameter, thickness and type of the well pipes. An equation was derived to calculate the cement percentage at standard conditions; this equation can be modified for varying conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
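The abstract does not state the calibration equation, so the sketch below simply fits an assumed log-linear relation between pad count rate (cps) and known pad density (g/cc); the pad values are hypothetical.

```python
# Hypothetical pad calibration sketch: fit an assumed log-linear cps-to-density
# relation from three calibration pads (values below are made up for illustration).
import numpy as np

pad_density = np.array([1.60, 2.20, 2.60])      # hypothetical pad densities, g/cc
pad_cps = np.array([5200.0, 2400.0, 1500.0])    # hypothetical tool readings, cps

slope, intercept = np.polyfit(np.log(pad_cps), pad_density, 1)

def cps_to_density(cps):
    """Convert a qualitative cps reading into a quantitative g/cc estimate."""
    return slope * np.log(cps) + intercept

print(round(cps_to_density(3000.0), 2), "g/cc")
```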
The Planetary Data System— Archiving Planetary Data for the use of the Planetary Science Community
NASA Astrophysics Data System (ADS)
Morgan, Thomas H.; McLaughlin, Stephanie A.; Grayzeck, Edwin J.; Vilas, Faith; Knopf, William P.; Crichton, Daniel J.
2014-11-01
NASA's Planetary Data System (PDS) archives, curates, and distributes digital data from NASA's planetary missions. PDS provides the planetary science community convenient online access to data from NASA's missions so that they can continue to mine these rich data sets for new discoveries. The PDS is a federated system consisting of nodes for specific discipline areas ranging from planetary geology to space physics. Our federation includes an engineering node that provides systems engineering support to the entire PDS. In order to adequately capture complete mission data sets containing not only raw and reduced instrument data, but also the calibration, documentation, and geometry data required to interpret and use these data sets both singly and together (data from multiple instruments, or from multiple missions), PDS personnel work with NASA missions from the initial AO through the end of mission to define, organize, and document the data. This process includes peer review of data sets by members of the science community to ensure that the data sets are scientifically useful, effectively organized, and well documented. The PDS makes its holdings easily searchable so that members of the planetary community can both query the archive to find data relevant to specific scientific investigations and easily retrieve the data for analysis. To ensure long-term preservation of data and to make data sets more easily searchable with the new capabilities in information technology now available (and as existing technologies become obsolete), the PDS (together with the COSPAR-sponsored IPDA) developed and deployed a new data archiving system known as PDS4, released in 2013. The LADEE, MAVEN, OSIRIS-REx, InSight, and Mars 2020 missions are using PDS4, and ESA has adopted PDS4 for the upcoming BepiColombo mission. The PDS is actively migrating existing data records into PDS4 and developing tools to aid data providers and users. The PDS is also using challenge-based competitions to rapidly and economically develop new tools for both users and data providers. Please visit our User Support Area at the meeting (Booth #114) if you have questions about accessing our data sets or providing data to the PDS.
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicles (RLVs) will require that rapid, high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
The importance of measuring customer satisfaction in palliative care.
Turriziani, Adriana; Attanasio, Gennaro; Scarcella, Francesco; Sangalli, Luisa; Scopa, Anna; Genualdo, Alessandra; Quici, Stefano; Nazzicone, Giulia; Ricciotti, Maria Adelaide; La Commare, Francesco
2016-03-01
In recent decades, palliative care has become increasingly focused on the evaluation of patients' and families' satisfaction with care. However, the evaluation of customer satisfaction in palliative care presents a number of issues, such as the presence of both patients and their families, the frail condition of the patients, the complexity of their needs, and the lack of standard quality indicators and appropriate measurement tools. In this manuscript, we critically review the existing evidence and literature on the evaluation of satisfaction in the palliative care context. Moreover, we provide, as a practical example, the preliminary results of our experience in this setting with the development of a dedicated tool for the measurement of satisfaction.
Communication strategies and volunteer management for the IAU-OAD
NASA Astrophysics Data System (ADS)
Sankatsing Nava, Tibisay
2015-08-01
The IAU Office of Astronomy for Development (OAD) will be developing a new communication strategy to promote its projects in a way that is relevant to stakeholders and the general public. Ideas include a magazine featuring best practices within the field of astronomy for development, and setting up a communication workflow that integrates the different outputs of the office and effectively uses the information collection tools developed by OAD team members. To accomplish these tasks, the OAD will also develop a community management strategy with existing tools to effectively harness the skills of OAD volunteers for communication purposes. This talk will discuss the new communication strategy of the OAD as well as the expanded community management plans.
New approaches for real time decision support systems
NASA Technical Reports Server (NTRS)
Hair, D. Charles; Pickslay, Kent
1994-01-01
NCCOSC RDT&E Division (NRaD) is conducting research into ways of improving decision support systems (DSS) that are used in tactical Navy decision making situations. The research has focused on the incorporation of findings about naturalistic decision-making processes into the design of the DSS. As part of that research, two computer tools were developed that model the two primary naturalistic decision-making strategies used by Navy experts in tactical settings. Current work is exploring how best to incorporate the information produced by those tools into an existing simulation of current Navy decision support systems. This work has implications for any applications involving the need to make decisions under time constraints, based on incomplete or ambiguous data.
The Knowledge-Based Software Assistant: Beyond CASE
NASA Technical Reports Server (NTRS)
Carozzoni, Joseph A.
1993-01-01
This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.
NASA Astrophysics Data System (ADS)
Rajib, M. A.; Merwade, V.; Song, C.; Zhao, L.; Kim, I. L.; Zhe, S.
2014-12-01
Setting up any hydrologic model requires a large amount of effort, including compilation of all the data, creation of input files, calibration and validation. Given the effort involved, models for a watershed may be created multiple times by multiple groups or organizations to accomplish different research, educational or policy goals. To reduce this duplication of effort and enable collaboration among different groups or organizations around an already existing hydrology model, a platform is needed where anyone can search for existing models, perform simple scenario analysis and visualize model results. The creator and users of a model on such a platform can then collaborate to accomplish new research or educational objectives. From this perspective, a prototype cyber-infrastructure (CI), called SWATShare, has been developed for sharing, running and visualizing Soil and Water Assessment Tool (SWAT) models in an interactive GIS-enabled web environment. Users can utilize SWATShare to publish or upload their own models, search and download existing SWAT models developed by others, and run simulations, including calibration, using high-performance resources provided by XSEDE and the cloud. Besides running and sharing, SWATShare hosts a novel spatio-temporal visualization system for SWAT model outputs. At the temporal scale, the system creates time-series plots for all the hydrology and water quality variables available along the reach as well as at the watershed level. At the spatial scale, the system can dynamically generate sub-basin-level thematic maps for any variable at any user-defined date or date range, thereby allowing users to run animations or download the data for subsequent analyses. In addition to research, SWATShare can also be used within a classroom setting as an educational tool for modeling and comparing hydrologic processes under different geographic and climatic settings. SWATShare is publicly available at https://www.water-hub.org/swatshare.
ASDF: An Adaptable Seismic Data Format with Full Provenance
NASA Astrophysics Data System (ADS)
Smith, J. A.; Krischer, L.; Tromp, J.; Lefebvre, M. P.
2015-12-01
In order for seismologists to maximize their knowledge of how the Earth works, they must extract the maximum amount of useful information from all recorded seismic data available for their research. This requires assimilating large sets of waveform data, keeping track of vast amounts of metadata, using validated standards for quality control, and automating the workflow in a careful and efficient manner. In addition, there is a growing gap between CPU/GPU speeds and disk access speeds that leads to an I/O bottleneck in seismic workflows. This is made even worse by existing seismic data formats that were not designed for performance and are limited to a few fixed headers for storing metadata. The Adaptable Seismic Data Format (ASDF) is a new data format for seismology that solves the problems with existing seismic data formats and integrates full provenance into its definition. ASDF is a self-describing format that features parallel I/O using the parallel HDF5 library, making it a good choice for use on HPC clusters. The format integrates the QuakeML standard for seismic sources and the StationXML standard for receivers. ASDF is suitable for storing earthquake data sets, where all waveforms for a single earthquake are stored in one file, as well as ambient noise cross-correlations and adjoint sources. The format comes with a user-friendly Python reader and writer that gives seismologists access to a full set of Python tools for seismology. There is also a faster C/Fortran library for integrating ASDF into performance-focused numerical wave solvers, such as SPECFEM3D_GLOBE. Finally, a GUI tool for visually exploring the format provides a flexible interface for both research and educational applications. ASDF is a new seismic data format that offers seismologists high-performance parallel processing, organized and validated contents, and full provenance tracking for automated seismological workflows.
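A plain h5py sketch of the container idea (waveforms, metadata and provenance in one HDF5 file); it is not the official ASDF reader/writer and does not follow the actual ASDF layout.

```python
# Not the official ASDF library: a generic h5py sketch of storing a waveform,
# its metadata and a provenance note together in one HDF5 container.
import h5py
import numpy as np

trace = np.random.default_rng(0).standard_normal(3000).astype("float32")

with h5py.File("example_event.h5", "w") as f:
    wf = f.create_group("Waveforms/IU.ANMO")                 # group names are illustrative
    dset = wf.create_dataset("BHZ", data=trace, compression="gzip")
    dset.attrs["sampling_rate_hz"] = 20.0
    dset.attrs["starttime"] = "2015-01-01T00:00:00"
    f.create_group("Provenance").attrs["processing"] = "demo: raw synthetic trace"
    f.attrs["quakeml_reference"] = "smi:local/event/0001"    # placeholder identifier

with h5py.File("example_event.h5", "r") as f:
    d = f["Waveforms/IU.ANMO/BHZ"]
    print(d.shape, d.attrs["sampling_rate_hz"])
```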
Brereton, Louise; Clark, Joseph; Ingleton, Christine; Gardiner, Clare; Preston, Louise; Ryan, Tony; Goyder, Elizabeth
2017-10-01
A wide range of organisational models of palliative care exist. However, decision makers need more information about which models are likely to be most effective in different settings and for different patient groups. To identify the existing range of models of palliative care that have been evaluated, what is already known and what further information is essential if the most effective and cost-effective models are to be identified and replicated more widely. A review of systematic and narrative reviews according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Study quality was assessed using the AMSTAR (A MeaSurement Tool to Assess Reviews) tool. MEDLINE, EMBASE, PsycINFO, CINAHL, Cochrane Library, Web of Science and ASSIA were searched for reviews about models of service provision from 2000 to 2014 and supplemented with Google searches of the grey literature. Much of the evidence relates to home-based palliative care, although some models are delivered across care settings. Reviews report several potential advantages and few disadvantages of models of palliative care delivery. However, under-reporting of the components of intervention and comparator models is a major barrier to the evaluation and implementation of models of palliative care. Irrespective of setting or patient characteristics, models of palliative care appear to show benefits, and some models of palliative care may reduce total healthcare costs. However, much more detailed and systematic reporting of components and agreement about outcome measures is essential in order to understand the key components and successfully replicate effective organisational models.
Coates, Laura C; Walsh, Jessica; Haroon, Muhammad; FitzGerald, Oliver; Aslam, Tariq; Al Balushi, Farida; Burden, A D; Burden-Teh, Esther; Caperon, Anna R; Cerio, Rino; Chattopadhyay, Chandrabhusan; Chinoy, Hector; Goodfield, Mark J D; Kay, Lesley; Kelly, Stephen; Kirkham, Bruce W; Lovell, Christopher R; Marzo-Ortega, Helena; McHugh, Neil; Murphy, Ruth; Reynolds, Nick J; Smith, Catherine H; Stewart, Elizabeth J C; Warren, Richard B; Waxman, Robin; Wilson, Hilary E; Helliwell, Philip S
2014-09-01
Several questionnaires have been developed to screen for psoriatic arthritis (PsA), but head-to-head studies have found limitations. This study aimed to develop new questionnaires encompassing the most discriminative questions from existing instruments. Data from the CONTEST study, a head-to-head comparison of 3 existing questionnaires, were used to identify items with a Youden index score of ≥0.1. These were combined using 4 approaches: CONTEST (simple additions of questions), CONTESTw (weighting using logistic regression), CONTESTjt (addition of a joint manikin), and CONTESTtree (additional questions identified by classification and regression tree [CART] analysis). These candidate questionnaires were tested in independent data sets. Twelve individual questions with a Youden index score of ≥0.1 were identified, but 4 of these were excluded due to duplication and redundancy. Weighting for 2 of these questions was included in CONTESTw. Receiver operating characteristic (ROC) curve analysis showed that involvement in 6 joint areas on the manikin was predictive of PsA for inclusion in CONTESTjt. CART analysis identified a further 5 questions for inclusion in CONTESTtree. CONTESTtree was not significant on ROC curve analysis and discarded. The other 3 questionnaires were significant in all data sets, although CONTESTw was slightly inferior to the others in the validation data sets. Potential cut points for referral were also discussed. Of 4 candidate questionnaires combining existing discriminatory items to identify PsA in people with psoriasis, 3 were found to be significant on ROC curve analysis. Testing in independent data sets identified 2 questionnaires (CONTEST and CONTESTjt) that should be pursued for further prospective testing. Copyright © 2014 by the American College of Rheumatology.
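A short sketch of the item-selection step: compute each question's Youden index (sensitivity + specificity - 1) against the reference PsA diagnosis and keep items scoring at least 0.1. The responses below are synthetic, not CONTEST data.

```python
# Sketch of Youden-index item selection on synthetic yes/no questionnaire data.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_items = 400, 12
answers = rng.integers(0, 2, size=(n_patients, n_items))       # yes/no responses
psa = (answers[:, 0] + answers[:, 3] + rng.integers(0, 2, n_patients)) >= 2

def youden(item, truth):
    sens = item[truth].mean()            # sensitivity of answering "yes"
    spec = 1.0 - item[~truth].mean()     # specificity
    return sens + spec - 1.0

scores = np.array([youden(answers[:, j], psa) for j in range(n_items)])
kept = np.where(scores >= 0.1)[0]
print("items retained for the combined questionnaire:", kept.tolist())
```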
Boomerang: A Method for Recursive Reclassification
Devlin, Sean M.; Ostrovnaya, Irina; Gönen, Mithat
2016-01-01
While there are many validated prognostic classifiers used in practice, their accuracy is often modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted towards this reclassification goal. In this paper, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm, called Boomerang, first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges groups to a pre-specified number of risk categories. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios in which the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia dataset, where a new refined classifier incorporates four new mutations into the existing three-category classifier and is validated on an independent dataset. PMID:26754051
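A simplified sketch of the first Boomerang stage on synthetic data: for each candidate marker, shift marker-positive patients up or down one risk category and keep the move only if a predictive-accuracy proxy improves; the paper's survival-based accuracy measures are replaced here by a plain agreement score.

```python
# Simplified reclassification-search sketch (not the Boomerang implementation):
# a plain agreement score stands in for the paper's survival-based accuracy.
import numpy as np

rng = np.random.default_rng(0)
n = 300
risk = rng.integers(0, 3, size=n)                 # existing 3-category classifier
mutation = rng.random(size=(n, 4)) < 0.2          # 4 candidate binary markers
outcome = np.clip(risk + (mutation[:, 1] & (risk == 1)).astype(int), 0, 2)

def accuracy(groups):
    return (groups == outcome).mean()             # proxy for predictive accuracy

best = risk.copy()
for m in range(mutation.shape[1]):
    for shift in (-1, 1):
        trial = best.copy()
        trial[mutation[:, m]] = np.clip(trial[mutation[:, m]] + shift, 0, 2)
        if accuracy(trial) > accuracy(best):      # keep only improving moves
            best = trial
print("baseline:", accuracy(risk).round(3), "refined:", accuracy(best).round(3))
```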
Testing the Birth Unit Design Spatial Evaluation Tool (BUDSET) in Australia: a pilot study.
Foureur, Maralyn J; Leap, Nicky; Davis, Deborah L; Forbes, Ian F; Homer, Caroline E S
2011-01-01
To pilot test the Birth Unit Design Spatial Evaluation Tool (BUDSET) in an Australian maternity care setting to determine whether such an instrument can measure the optimality of different birth settings. Optimally designed spaces to give birth are likely to influence a woman's ability to experience physiologically normal labor and birth. This is important in the current industrialized environment, where increased caesarean section rates are causing concerns. The measurement of an optimal birth space is currently impossible, because there are limited tools available. A quantitative study was undertaken to pilot test the discriminant ability of the BUDSET in eight maternity units in New South Wales, Australia. Five auditors trained in the use of the BUDSET assessed the birth units using the BUDSET, which is based on 18 design principles and is divided into four domains (Fear Cascade, Facility, Aesthetics, and Support) with three to eight assessable items in each. Data were independently collected in eight birth units. Values for each of the domains were aggregated to provide an overall Optimality Score for each birth unit. A range of Optimality Scores was derived for each of the birth units (from 51 to 77 out of a possible 100 points). The BUDSET identified units with low-scoring domains. Essentially these were older units and conventional labor ward settings. The BUDSET provides a way to assess the optimality of birth units and determine which domain areas may need improvement. There is potential for improvements to existing birth spaces, and considerable improvement can be made with simple low-cost modifications. Further research is needed to validate the tool.
Towards sets of hazardous waste indicators. Essential tools for modern industrial management.
Peterson, Peter J; Granados, Asa
2002-01-01
Decision-makers require useful tools, such as indicators, to help them make environmentally sound decisions leading to effective management of hazardous wastes. Four hazardous waste indicators are being tested for such a purpose by several countries within the Sustainable Development Indicator Programme of the United Nations Commission for Sustainable Development. However, these indicators only address the 'down-stream' end-of-pipe industrial situation. More creative thinking is clearly needed to develop a wider range of indicators that not only reflects all aspects of industrial production that generates hazardous waste but considers socio-economic implications of the waste as well. Sets of useful and innovative indicators are proposed that could be applied to the emerging paradigm shift away from conventional end-of-pipe management actions and towards preventive strategies that are being increasingly adopted by industry often in association with local and national governments. A methodological and conceptual framework for the development of a core-set of hazardous waste indicators has been developed. Some of the indicator sets outlined quantify preventive waste management strategies (including indicators for cleaner production, hazardous waste reduction/minimization and life cycle analysis), whilst other sets address proactive strategies (including changes in production and consumption patterns, eco-efficiency, eco-intensity and resource productivity). Indicators for quantifying transport of hazardous wastes are also described. It was concluded that a number of the indicators proposed could now be usefully implemented as management tools using existing industrial and economic data. As cleaner production technologies and waste minimization approaches are more widely deployed, and industry integrates environmental concerns at all levels of decision-making, it is expected that the necessary data for construction of the remaining indicators will soon become available.
Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent
2017-01-01
Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155
Key elements of high-quality practice organisation in primary health care: a systematic review.
Crossland, Lisa; Janamian, Tina; Jackson, Claire L
2014-08-04
To identify elements that are integral to high-quality practice and determine considerations relating to high-quality practice organisation in primary care. A narrative systematic review of published and grey literature. Electronic databases (PubMed, CINAHL, the Cochrane Library, Embase, Emerald Insight, PsycInfo, the Primary Health Care Research and Information Service website, Google Scholar) were searched in November 2013 and used to identify articles published in English from 2002 to 2013. Reference lists of included articles were searched for relevant unpublished articles and reports. Data were configured at the study level to allow for the inclusion of findings from a broad range of study types. Ten elements were most often included in the existing organisational assessment tools. A further three elements were identified from an inductive thematic analysis of descriptive articles, and were noted as important considerations in effective quality improvement in primary care settings. Although there are some validated tools available to primary care that identify and build quality, most are single-strategy approaches developed outside health care settings. There are currently no validated organisational improvement tools, designed specifically for primary health care, which combine all elements of practice improvement and whose use does not require extensive external facilitation.
Chaudhuri, Rima; Sadrieh, Arash; Hoffman, Nolan J; Parker, Benjamin L; Humphrey, Sean J; Stöckli, Jacqueline; Hill, Adam P; James, David E; Yang, Jean Yee Hwa
2015-08-19
Most biological processes are influenced by protein post-translational modifications (PTMs). Identifying novel PTM sites in different organisms, including humans and model organisms, has expedited our understanding of key signal transduction mechanisms. However, with the increasing availability of deep, quantitative datasets in diverse species, there is a growing need for tools to facilitate cross-species comparison of PTM data. This is particularly important because functionally important modification sites are more likely to be evolutionarily conserved; yet cross-species comparison of PTMs is difficult since they often lie in structurally disordered protein domains. Current tools that address this can only map known PTMs between species based on known orthologous phosphosites, and do not enable the cross-species mapping of newly identified modification sites. Here, we addressed this by developing a web-based software tool, PhosphOrtholog (www.phosphortholog.com), that accurately maps protein modification sites between different species. This facilitates the comparison of datasets derived from multiple species, and should be a valuable tool for the proteomics community. Here we describe PhosphOrtholog, a web-based application for mapping known and novel orthologous PTM sites from experimental data obtained from different species. PhosphOrtholog is the only generic and automated tool that enables cross-species comparison of large-scale PTM datasets without relying on existing PTM databases. This is achieved through pairwise sequence alignment of orthologous protein residues. To demonstrate its utility we apply it to two sets of human and rat muscle phosphoproteomes generated following insulin and exercise stimulation, respectively, and one publicly available mouse phosphoproteome following cellular stress, revealing high mapping and coverage efficiency. Although coverage statistics are dataset dependent, PhosphOrtholog more than doubled the number of cross-species mapped sites in all our example data sets compared to those recovered using existing resources such as PhosphoSitePlus. PhosphOrtholog is the first tool that enables mapping of thousands of novel and known protein phosphorylation sites across species, accessible through an easy-to-use web interface. Identification of conserved PTMs across species from large-scale experimental data increases our knowledge base of functional PTM sites. Moreover, PhosphOrtholog is generic, being applicable to other PTM datasets such as acetylation, ubiquitination and methylation.
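A small sketch of the core mapping idea using Biopython's pairwise2 global aligner: align two orthologous sequences and carry a phosphosite index from one species to the other. The fragments and site index are toy values, and this is not the PhosphOrtholog implementation.

```python
# Not PhosphOrtholog: a toy illustration of mapping a modification site between
# orthologs via pairwise global alignment of the two protein sequences.
from Bio import pairwise2

human = "MSRSKRDNLLSPTQRSA"    # hypothetical human fragment, phosphosite at index 3
rat = "MSRKRDNLLSPSQRSA"       # hypothetical rat ortholog fragment

aln_h, aln_r, score, _, _ = pairwise2.align.globalxx(human, rat)[0]

def map_site(aligned_query, aligned_target, site):
    """Map a 0-based residue index in the query to the target via the alignment."""
    q_pos = t_pos = -1
    for qc, tc in zip(aligned_query, aligned_target):
        if qc != "-":
            q_pos += 1
        if tc != "-":
            t_pos += 1
        if q_pos == site and qc != "-":
            return t_pos if tc != "-" else None   # None: the site falls in a gap
    return None

print("rat position for human site 3:", map_site(aln_h, aln_r, 3))
```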
NASA Astrophysics Data System (ADS)
Nusawardhana
2007-12-01
Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, a set of capabilities is provided through a collection of several individual, independent systems. This collection of individual, independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to designing an SoS, and more specifically to designing a new system as a member of an SoS, will likely differ from traditional design approaches for complex, monolithic systems (meaning the constituent parts have no ability for independent operation). Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems that not only seek optimizing policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and the allocation of the new system along with existing systems. This class of problems concerns the concurrent design and control of new systems, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problem. Many results exist for dynamic optimization problems involving linear systems; in contrast, results on nonlinear dynamic optimization problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems, with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of the dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation, in which case a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs exact dynamic programming methods. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.
Milano, Giulia; Saenz, Elizabeth; Clark, Nicolas; Busse, Anja; Gale, John; Campello, Giovanna; Mattfeld, Elizabeth; Maalouf, Wadih; Heikkila, Hanna; Martelli, Antonietta; Morales, Brian; Gerra, Gilberto
2017-11-10
Very little evidence has been reported in the literature regarding the misuse of substances in rural areas. Despite the common perception of rural communities as a protective and risk-mitigating environment, the scientific literature demonstrates the existence of many risk factors in rural communities. In June 2016, the Drug Prevention and Health Branch (DHB) of the United Nations Office on Drugs and Crime (UNODC) and the World Health Organization (WHO) organized a meeting of experts in the treatment and prevention of substance use disorders (SUDs) in rural settings. The content presented during the meeting and the related discussion provided material for the preparation of an outline document, which is the basis for a technical tool on SUD prevention and treatment in rural settings. The UNODC framework for interventions in rural settings is a technical tool aimed at assisting policy makers and managers at the national level. This paper is a report on UNODC/WHO efforts to improve the clinical condition of people affected by SUDs who live in rural areas. Its purpose is to draw attention to a severe clinical and social problem in a setting that is often overlooked.
Tillmar, Andreas O; Phillips, Chris
2017-01-01
Advances in massively parallel sequencing technology have enabled the combination of a much-expanded number of DNA markers (notably STRs and SNPs in single or combined multiplexes), with the aim of increasing the weight of evidence in forensic casework. However, when data from multiple loci on the same chromosome are used, genetic linkage can affect the final likelihood calculation. In order to study the effect of linkage for different sets of markers, we developed the biostatistical tool ILIR (Impact of Linkage on forensic markers for Identity and Relationship tests). The ILIR tool can be used to study the overall impact of genetic linkage for an arbitrary set of markers used in forensic testing. Application of ILIR can be useful during marker selection and the design of new marker panels, as well as being highly relevant for existing marker sets as a way to properly evaluate the effects of linkage on a case-by-case basis. ILIR, implemented via the open source platform R, includes variation and genomic position reference data for over 40 STRs and 140 SNPs, combined with the ability to include additional forensic markers of interest. The use of the software is demonstrated with examples from several established marker sets (such as the expanded CODIS core loci), including a review of the interpretation of linked genetic data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Virtual biomedical universities and e-learning.
Beux, P Le; Fieschi, M
2007-01-01
In this special issue on virtual biomedical universities and e-learning, we survey the principal existing applications of ICT used in medical schools around the world. We identify five types of research and experiments in this field of medical e-learning and virtual medical universities. The topics of this special issue range from educational computer programs that create and simulate virtual patients with a wide variety of medical conditions in different clinical settings and over different time frames, to the use of distance learning in developed and developing countries to train clinicians in medical informatics. We also discuss the need for good indexing and search tools for training resources, together with workflows to manage the multiple-source content of virtual campuses or universities and virtual digital video resources. Special attention is given to training new generations of clinicians in the ICT tools and methods to be used in clinical settings as well as in medical schools.
Tian, Jing; Varga, Boglarka; Tatrai, Erika; Fanni, Palya; Somfai, Gabor Mark; Smiddy, William E.
2016-01-01
Over the past two decades a significant number of OCT segmentation approaches have been proposed in the literature. Each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexities of the majority of widely available retinal features observed in clinical settings. In addition, there does not exist an appropriate OCT dataset with ground truth that reflects the realities of everyday retinal features observed in clinical settings. While the need for unbiased performance evaluation of automated segmentation algorithms is obvious, the validation of segmentation algorithms has usually been performed by comparison with manual labelings from each study, and there has been a lack of common ground truth. Therefore, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research-oriented tools for automated segmentation of the retinal tissue in OCT images. It also evaluates and compares the performance of these software tools against a common ground truth. PMID:27159849
Alsalem, Gheed; Bowie, Paul; Morrison, Jillian
2018-05-10
The perceived importance of safety culture in improving patient safety and its impact on patient outcomes has led to a growing interest in the assessment of safety climate in healthcare organizations; however, the rigour with which safety climate tools were developed and psychometrically tested was shown to be variable. This paper aims to identify and review questionnaire studies designed to measure safety climate in acute hospital settings, in order to assess the adequacy of reported psychometric properties of identified tools. A systematic review of published empirical literature was undertaken to examine sample characteristics and instrument details including safety climate dimensions, origin and theoretical basis, and extent of psychometric evaluation (content validity, criterion validity, construct validity and internal reliability). Five questionnaire tools, designed for general evaluation of safety climate in acute hospital settings, were included. Detailed inspection revealed ambiguity around concepts of safety culture and climate, safety climate dimensions and the methodological rigour associated with the design of these measures. Standard reporting of the psychometric properties of developed questionnaires was variable, although evidence of an improving trend in the quality of the reported psychometric properties of studies was noted. Evidence of the theoretical underpinnings of climate tools was limited, while a lack of clarity in the relationship between safety culture and patient outcome measures still exists. Evidence of the adequacy of the psychometric development of safety climate questionnaire tools is still limited. Research is necessary to resolve the controversies in the definitions and dimensions of safety culture and climate in healthcare and identify related inconsistencies. More importance should be given to the appropriate validation of safety climate questionnaires before extending their usage in healthcare contexts different from those in which they were originally developed. Mixed methods research to understand why psychometric assessment and measurement reporting practices can be inadequate and lacking in a theoretical basis is also necessary.
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible, integrative approach to manage large datasets and automatically estimate parameters. With the increase in experimental time-series data from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing the results to be managed efficiently in a structured and hierarchical way. The data management system allows users to organize projects, experiments and measurement data, and to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software and existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data and model complexity.
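As an illustration of the kind of fitting BGFit automates, the sketch below fits a Zwietering-style modified Gompertz model to a synthetic growth curve with SciPy; the parameterization, data, and starting values are assumptions for demonstration and are not taken from BGFit itself.

```python
# Fit a Zwietering-style modified Gompertz curve to a synthetic growth series.
# The parameterization, data, and starting values are assumptions for the sketch.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    # A: asymptotic level, mu: maximum growth rate, lam: lag time
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 24, 25)                                   # e.g. hours
y = gompertz(t, 1.8, 0.4, 3.0) + np.random.default_rng(1).normal(0, 0.02, t.size)

popt, pcov = curve_fit(gompertz, t, y, p0=[1.5, 0.3, 2.0])
print("A=%.3f  mu=%.3f  lambda=%.3f" % tuple(popt))
```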
Improving overlay manufacturing metrics through application of feedforward mask-bias
NASA Astrophysics Data System (ADS)
Joubert, Etienne; Pellegrini, Joseph C.; Misra, Manish; Sturtevant, John L.; Bernhard, John M.; Ong, Phu; Crawshaw, Nathan K.; Puchalski, Vern
2003-06-01
Traditional run-to-run controllers that rely on highly correlated historical events to forecast process corrections have been shown to provide substantial benefit over manual control in the case of a fab that primarily manufactures high-volume, frequently running parts (i.e., DRAM, MPU, and similar operations). However, a limitation of the traditional controller emerges when it is applied to a fab whose work in process (WIP) is composed primarily of short-running, high-part-count products (typical of foundries and ASIC fabs). This limitation exists because there is a strong likelihood that each reticle has a unique set of process corrections different from other reticles at the same process layer. Further limitations arise because each reticle is loaded and aligned differently on multiple exposure tools. A structural change in how the run-to-run controller manages the frequent reticle changes associated with the high-part-count environment has allowed breakthrough performance to be achieved. This breakthrough was made possible by the realization that: (1) reticle-sourced errors were highly stable over long periods of time, allowing them to be deconvolved from the day-to-day tool and process drifts; and (2) reticle-sourced errors can be modeled as a feedforward disturbance rather than as discriminators for defining and dividing process streams. In this paper, we show how to deconvolve the static (reticle) and dynamic (day-to-day tool and process) components from the overall error vector to better forecast feedback for existing products, as well as how to compute or learn these values for new product introductions or new tool startups. Manufacturing data are presented to support this discussion, along with some real-world success stories.
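A minimal sketch of the idea of splitting the error vector into a static, feedforward reticle term and a dynamic, feedback tool/process term is given below. The EWMA update, data layout, and single error dimension are simplifying assumptions and do not reproduce the controller described in the paper.

```python
# Toy decomposition of overlay corrections into a static per-reticle offset
# (feedforward) and an EWMA-tracked tool/process drift (feedback).
# The data layout, single error dimension, and lambda are simplifying assumptions.
from collections import defaultdict

LAMBDA = 0.3                           # EWMA weight for the dynamic drift term
reticle_offset = defaultdict(float)    # learned static component, keyed by reticle
tool_drift = defaultdict(float)        # dynamic component, keyed by exposure tool

def forecast_correction(reticle, tool):
    # Feed the stable reticle term forward and the drifting tool term back.
    return reticle_offset[reticle] + tool_drift[tool]

def update_after_measurement(reticle, tool, measured_error):
    # Attribute the residual left after the known reticle term to tool/process drift.
    residual = measured_error - reticle_offset[reticle]
    tool_drift[tool] = (1 - LAMBDA) * tool_drift[tool] + LAMBDA * residual

def learn_new_reticle(reticle, drift_corrected_errors):
    # New product introduction: estimate the static term from drift-corrected lots.
    reticle_offset[reticle] = sum(drift_corrected_errors) / len(drift_corrected_errors)

update_after_measurement("reticleA", "tool1", measured_error=3.2)
print(forecast_correction("reticleA", "tool1"))
```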
Gunshot identification system by integration of open source consumer electronics
NASA Astrophysics Data System (ADS)
López R., Juan Manuel; Marulanda B., Jose Ignacio
2014-05-01
This work presents a prototype of a low-cost gunshot identification system that uses consumer electronics to detect gunshots and then classify them according to a previously established database. Deploying this tool in urban areas provides records that support forensic work, thereby improving law enforcement, including in developing countries. An analysis of its effectiveness is presented in comparison with theoretical results obtained from numerical simulations.
Nonlinear projection methods for visualizing Barcode data and application on two data sets.
Olteanu, Madalina; Nicolas, Violaine; Schaeffer, Brigitte; Denys, Christiane; Missoup, Alain-Didier; Kennis, Jan; Larédo, Catherine
2013-11-01
Developing tools for visualizing DNA sequences is an important issue in the Barcoding context. Visualizing Barcode data can be framed in a purely statistical context: unsupervised learning. Clustering methods combined with projection methods have two closely linked objectives, visualizing and finding structure in the data. Multidimensional scaling (MDS) and self-organizing maps (SOM) are unsupervised statistical tools for data visualization. Both algorithms map data onto a lower-dimensional manifold: MDS looks for a projection that best preserves pairwise distances, while SOM preserves the topology of the data. Both algorithms were initially developed for Euclidean data, and the conditions necessary for their good implementation were not satisfied for Barcode data. We developed a workflow consisting of four steps: collapse data into distinct sequences; compute a dissimilarity matrix; run a modified version of SOM for dissimilarity matrices to structure the data and reduce dimensionality; project the results using MDS. This methodology was applied to Astraptes fulgerator and Hylomyscus, an African rodent with debated taxonomy. We obtained very good results for both data sets, and the results were robust against unbalanced species. All the species in Astraptes were well displayed in very distinct groups in the various visualizations, except for LOHAMP and FABOV, which were mixed. For Hylomyscus, our findings were consistent with known species, confirmed the existence of four unnamed taxa and suggested the existence of potentially new species. © 2013 John Wiley & Sons Ltd.
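The projection step of the workflow can be sketched with scikit-learn's metric MDS applied to a precomputed dissimilarity matrix; the toy matrix below stands in for the sequence dissimilarities computed in step two and is purely illustrative.

```python
# Embed a precomputed dissimilarity matrix in 2-D with metric MDS (scikit-learn).
# The toy matrix stands in for the sequence dissimilarities computed in step two.
import numpy as np
from sklearn.manifold import MDS

D = np.array([[0.0, 0.1, 0.8, 0.9],    # 4 collapsed sequences (illustrative values)
              [0.1, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.2],
              [0.9, 0.8, 0.2, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)          # 2-D coordinates for plotting
print(coords)
```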
NASA Technical Reports Server (NTRS)
Campbell, William J.; Goettsche, Craig
1989-01-01
Earth scientists lack adequate tools for quantifying complex relationships between existing data layers and for studying and modeling the dynamic interactions of these data layers. There is a need for an earth systems tool to manipulate multi-layered, heterogeneous data sets that are spatially indexed, such as sensor imagery and maps, easily and intelligently in a single system. The system can access and manipulate data from multiple sensor sources, maps, and a learned object hierarchy using an advanced knowledge-based geographical information system. A prototype Knowledge-Based Geographic Information System (KBGIS) was recently constructed. Many of the system internals are well developed, but the system lacks an adequate user interface. A methodology is described for developing an intelligent user interface and extending KBGIS to interconnect with existing NASA systems, such as imagery from the Land Analysis System (LAS), atmospheric data in Common Data Format (CDF), and visualization of complex data with the National Space Science Data Center Graphics System. This would allow NASA to quickly explore the utility of such a system, given the ability to transfer data in and out of KBGIS easily. The use and maintenance of the object hierarchies as polymorphic data types brings to data management a whole new set of problems and issues, few of which have been explored beyond the prototype level.
Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules
Desai, Aarti; Singh, Vivek K.; Jere, Abhay
2016-01-01
Introduction: Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e., high false positive rates and/or limited coverage. Results: The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions: Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, reducing time, cost and animal testing. PMID:27271321
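For readers unfamiliar with the reported figures, the sketch below shows how accuracy, CCR, sensitivity, and specificity follow from a 2x2 confusion matrix, assuming CCR is the mean of sensitivity and specificity (balanced accuracy); the counts are placeholders, not the paper's data.

```python
# Relating the reported figures to a 2x2 confusion matrix; counts are placeholders.
def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                     # true positive rate
    specificity = tn / (tn + fp)                     # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    ccr = (sensitivity + specificity) / 2.0          # balanced accuracy (assumed CCR)
    return accuracy, ccr, sensitivity, specificity

acc, ccr, sens, spec = classification_metrics(tp=28, fn=12, tn=30, fp=7)
print(f"accuracy={acc:.2%}  CCR={ccr:.2%}  sensitivity={sens:.2%}  specificity={spec:.2%}")
```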
Use of Semantic Technology to Create Curated Data Albums
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Kulkarni, Ajinkya; Li, Xiang; Sainju, Roshan; Bakare, Rohan; Basyal, Sabin
2014-01-01
One of the continuing challenges in any Earth science investigation is the discovery and access of useful science content from the increasingly large volumes of Earth science data and related information available online. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. Those who know exactly the data sets they need can obtain the specific files using these systems. However, in cases where researchers are interested in studying an event of research interest, they must manually assemble a variety of relevant data sets by searching the different distributed data systems. Consequently, there is a need to design and build specialized search and discover tools in Earth science that can filter through large volumes of distributed online data and information and only aggregate the relevant resources needed to support climatology and case studies. This paper presents a specialized search and discovery tool that automatically creates curated Data Albums. The tool was designed to enable key elements of the search process such as dynamic interaction and sense-making. The tool supports dynamic interaction via different modes of interactivity and visual presentation of information. The compilation of information and data into a Data Album is analogous to a shoebox within the sense-making framework. This tool automates most of the tedious information/data gathering tasks for researchers. Data curation by the tool is achieved via an ontology-based, relevancy ranking algorithm that filters out nonrelevant information and data. The curation enables better search results as compared to the simple keyword searches provided by existing data systems in Earth science.
NASA Astrophysics Data System (ADS)
Wakil, K.; Hussnain, MQ; Tahir, A.; Naeem, M. A.
2016-06-01
Unmanaged placement, size, location, structure and content of outdoor advertisement boards have resulted in severe urban visual pollution and deterioration of the socio-physical living environment in urban centres of Pakistan. As per the regulatory instruments, the approval decision for a new advertisement installation is supposed to be based on the locational density of existing boards and their proximity or remoteness to certain land uses. In cities where regulatory tools for the control of advertisement boards exist, the responsible authorities are handicapped in effective implementation due to the absence of geospatial analysis capacity. This study presents the development of a spatial decision support system (SDSS) for the regularization of advertisement boards in terms of their location and placement. The knowledge module of the proposed SDSS is based on provisions and restrictions prescribed in regulatory documents, while the user interface allows visualization and scenario evaluation to determine whether a new board will affect the existing linear density on a particular road and whether it violates any buffer restrictions around a particular land use. Technically, the proposed SDSS is a web-based solution that includes open geospatial tools such as OpenGeo Suite, GeoExt, PostgreSQL, and PHP. It uses three key data sets, namely the road network, the locations of existing billboards, and building parcels with land-use information, to perform the analysis. Locational suitability has been calculated using pairwise comparison through the analytical hierarchy process (AHP) and weighted linear combination (WLC). Our results indicate that open geospatial tools can be helpful in developing an SDSS that assists in solving space-related iterative decision challenges for outdoor advertisements. Employing such a system will result in more effective implementation of regulations, leading to visual harmony and aesthetic improvement in urban communities.
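A minimal sketch of the suitability-scoring approach named above (AHP weights derived from a pairwise-comparison matrix, combined by weighted linear combination) is given below; the criteria, comparison values, and site scores are illustrative assumptions rather than those used in the study.

```python
# Derive criterion weights from an AHP pairwise-comparison matrix (principal
# eigenvector) and score a candidate site by weighted linear combination.
# Criteria, comparison values, and site scores are illustrative assumptions.
import numpy as np

# Pairwise comparisons for three hypothetical criteria
# (e.g. road density, land-use buffer compliance, spacing from existing boards).
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                                # AHP priority vector

def wlc_score(criterion_scores, weights):
    # Weighted linear combination of criterion scores normalized to [0, 1].
    return float(np.dot(weights, criterion_scores))

candidate_site = np.array([0.8, 0.4, 0.9])           # normalized scores (assumed)
print("weights:", np.round(weights, 3), " suitability:", round(wlc_score(candidate_site, weights), 3))
```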
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W
2008-07-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small-animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: the Simulation System for Emission Tomography (SimSET) and the GEANT4 Application for Tomographic Emission (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs of studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE for simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small-animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data was also observed. In conclusion, the authors have successfully integrated the SimSET photon history generator into GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small-animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.
Compliance monitoring in business processes: Functionalities, application, and tool-support.
Ly, Linh Thao; Maggi, Fabrizio Maria; Montali, Marco; Rinderle-Ma, Stefanie; van der Aalst, Wil M P
2015-12-01
In recent years, monitoring the compliance of business processes with relevant regulations, constraints, and rules during runtime has evolved as a major concern in the literature and in practice. Monitoring not only refers to continuously observing possible compliance violations, but also includes the ability to provide fine-grained feedback and to predict possible compliance violations in the future. The body of literature on business process compliance is large and approaches specifically addressing process monitoring are hard to identify. Moreover, proper means for the systematic comparison of these approaches are missing. Hence, it is unclear which approaches are suitable for particular scenarios. The goal of this paper is to define a framework for Compliance Monitoring Functionalities (CMF) that enables the systematic comparison of existing and new approaches for monitoring compliance rules over business processes during runtime. To define the scope of the framework, at first, related areas are identified and discussed. The CMFs are harvested based on a systematic literature review and five selected case studies. The appropriateness of the selection of CMFs is demonstrated in two ways: (a) a systematic comparison with pattern-based compliance approaches and (b) a classification of existing compliance monitoring approaches using the CMFs. Moreover, the application of the CMFs is showcased using three existing tools that are applied to two realistic data sets. Overall, the CMF framework provides powerful means to position existing and future compliance monitoring approaches.
Compliance monitoring in business processes: Functionalities, application, and tool-support
Ly, Linh Thao; Maggi, Fabrizio Maria; Montali, Marco; Rinderle-Ma, Stefanie; van der Aalst, Wil M.P.
2015-01-01
In recent years, monitoring the compliance of business processes with relevant regulations, constraints, and rules during runtime has evolved as a major concern in the literature and in practice. Monitoring not only refers to continuously observing possible compliance violations, but also includes the ability to provide fine-grained feedback and to predict possible compliance violations in the future. The body of literature on business process compliance is large and approaches specifically addressing process monitoring are hard to identify. Moreover, proper means for the systematic comparison of these approaches are missing. Hence, it is unclear which approaches are suitable for particular scenarios. The goal of this paper is to define a framework for Compliance Monitoring Functionalities (CMF) that enables the systematic comparison of existing and new approaches for monitoring compliance rules over business processes during runtime. To define the scope of the framework, at first, related areas are identified and discussed. The CMFs are harvested based on a systematic literature review and five selected case studies. The appropriateness of the selection of CMFs is demonstrated in two ways: (a) a systematic comparison with pattern-based compliance approaches and (b) a classification of existing compliance monitoring approaches using the CMFs. Moreover, the application of the CMFs is showcased using three existing tools that are applied to two realistic data sets. Overall, the CMF framework provides powerful means to position existing and future compliance monitoring approaches. PMID:26635430
Smartphones and the plastic surgeon.
Al-Hadithy, Nada; Ghosh, Sudip
2013-06-01
Surgical trainees are facing limited training opportunities since the introduction of the European Working Time Directive. Smartphone sales are increasing and have usurped computer sales for the first time. In this context, smartphones are an important portable reference and educational tool, already in the possession of the majority of surgeons in training. Technology in the palm of our hands has led to a revolution of accessible information for the plastic surgery trainee and surgeon. This article reviews the uses of smartphones and applications for plastic surgeons in education, telemedicine and global health. A comprehensive guide to existing and upcoming learning materials and clinical tools for the plastic surgeon is included. E-books, podcasts, educational videos, guidelines, work-based assessment tools and online logbooks are presented. In the limited resource setting of modern clinical practice, savvy plastic surgeons can select technological tools to democratise access to education and best clinical care. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
A software tool for analyzing multichannel cochlear implant signals.
Lai, Wai Kong; Bögli, Hans; Dillier, Norbert
2003-10-01
A useful and convenient means to analyze the radio frequency (RF) signals being sent by a speech processor to a cochlear implant would be to actually capture and display them with appropriate software. This is particularly useful for development or diagnostic purposes. sCILab (Swiss Cochlear Implant Laboratory) is such a PC-based software tool intended for the Nucleus family of Multichannel Cochlear Implants. Its graphical user interface provides a convenient and intuitive means for visualizing and analyzing the signals encoding speech information. Both numerical and graphic displays are available for detailed examination of the captured CI signals, as well as an acoustic simulation of these CI signals. sCILab has been used in the design and verification of new speech coding strategies, and has also been applied as an analytical tool in studies of how different parameter settings of existing speech coding strategies affect speech perception. As a diagnostic tool, it is also useful for troubleshooting problems with the external equipment of the cochlear implant systems.
FASTER - A tool for DSN forecasting and scheduling
NASA Technical Reports Server (NTRS)
Werntz, David; Loyola, Steven; Zendejas, Silvino
1993-01-01
FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows-based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user friendly, and makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two-pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended and increasing the quality of analysis.
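The two-pass idea can be sketched as follows: a first pass builds a predicted utilization profile, and a second pass assigns activities to slots with a dynamic program that uses the profile as a look-ahead cost. This is a toy formulation under simplifying assumptions (one slot per activity, activities placed in a fixed order), not the FASTER implementation.

```python
# Toy two-pass scheduler: pass 1 predicts per-slot contention, pass 2 assigns
# activities (in a fixed order) to increasing slots by dynamic programming,
# using the predicted profile as a look-ahead cost. Not the FASTER implementation.
import math

def predicted_profile(requests, n_slots):
    # Pass 1: count how many requests could contend for each time slot.
    profile = [0] * n_slots
    for feasible_slots in requests:
        for t in feasible_slots:
            profile[t] += 1
    return profile

def schedule(requests, n_slots):
    # Pass 2: dp[i][t] = cost of placing activity i at slot t plus the best
    # placement of earlier activities at strictly earlier slots.
    profile = predicted_profile(requests, n_slots)
    n = len(requests)
    dp = [[math.inf] * n_slots for _ in range(n)]
    for i, feasible in enumerate(requests):
        for t in feasible:
            prev = 0.0 if i == 0 else min(dp[i - 1][:t], default=math.inf)
            if prev < math.inf:
                dp[i][t] = prev + profile[t]
    # Backtrack one optimal (assumed feasible) slot sequence.
    slots, t = [], min(range(n_slots), key=lambda s: dp[-1][s])
    for i in range(n - 1, -1, -1):
        slots.append(t)
        if i:
            t = min(range(t), key=lambda s: dp[i - 1][s])
    return list(reversed(slots))

print(schedule([[0, 1, 2], [1, 2, 3], [2, 3, 4]], n_slots=5))   # e.g. [0, 1, 4]
```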
Development of a research ethics knowledge and analytical skills assessment tool.
Taylor, Holly A; Kass, Nancy E; Ali, Joseph; Sisson, Stephen; Bertram, Amanda; Bhan, Anant
2012-04-01
The goal of this project was to develop and validate a new tool to evaluate learners' knowledge and skills related to research ethics. A core set of 50 questions from existing computer-based online teaching modules was identified, refined and supplemented to create a set of 74 multiple-choice, true/false and short-answer questions. The questions were pilot-tested and item discrimination was calculated for each question. Poorly performing items were eliminated or refined, and two comparable assessment tools were created. These assessment tools were administered as a pre-test and post-test to a cohort of 58 Indian junior health research investigators before and after exposure to a new course on research ethics. Half of the investigators took the course online, the other half in person. Item discrimination was calculated for each question and Cronbach's α for each assessment tool. A final version of the assessment tool, incorporating the best questions from the pre-/post-test phase, was used to assess retention of research ethics knowledge and skills 3 months after course delivery. The final version of the REKASA includes 41 items and had a Cronbach's α of 0.837. The results illustrate, in one sample of learners, the successful, systematic development and use of a knowledge and skills assessment tool in research ethics, one capable not only of measuring basic knowledge in research ethics and oversight but also of assessing learners' ability to apply ethics knowledge to the analytical task of reasoning through research ethics cases, without reliance on essay or discussion-based examination. These promising preliminary findings should be confirmed with additional groups of learners.
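For reference, the internal-reliability statistic reported above can be computed from an items-by-respondents score matrix as shown below; the toy responses are placeholders, not the REKASA data.

```python
# Cronbach's alpha for an items-by-respondents score matrix; toy data only.
import numpy as np

def cronbach_alpha(scores):
    # scores: rows = respondents, columns = items (binary or graded).
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

toy = np.array([[1, 1, 0, 1],
                [1, 0, 0, 1],
                [0, 0, 0, 0],
                [1, 1, 1, 1],
                [1, 1, 0, 0]])
print(round(cronbach_alpha(toy), 3))
```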
Sentiment Analysis of Health Care Tweets: Review of the Methods Used.
Gohil, Sunir; Vuik, Sabine; Darzi, Ara
2018-04-23
Twitter is a microblogging service where users can send and read short 140-character messages called "tweets." A large number of unstructured, free-text tweets relating to health care are shared on Twitter, which is becoming a popular area for health care research. Sentiment is a metric commonly used to investigate the positive or negative opinion within these messages. Exploring the methods used for sentiment analysis in Twitter health care research may allow us to better understand the options available for future research in this growing field. The first objective of this study was to understand which tools are available for sentiment analysis of Twitter health care research, by reviewing existing studies in this area and the methods they used. The second objective was to determine which methods work best in health care settings, by analyzing how the methods were used to answer specific health care questions, how the tools were produced, and how their accuracy was analyzed. A review of the literature was conducted pertaining to Twitter and health care research that used a quantitative method of sentiment analysis for free-text messages (tweets). The study compared the types of tools used in each case and examined methods for tool production, tool training, and analysis of accuracy. A total of 12 papers studying the quantitative measurement of sentiment in the health care setting were found. More than half of these studies produced tools specifically for their research, 4 used freely available open source tools, and 2 used commercially available software. Moreover, 4 of the 12 tools were trained using a smaller sample of the study's final data. The sentiment methods were trained against, on average, 0.45% (2816/627,024) of the total sample data. Only one of the 12 papers commented on the accuracy of the tool used. Multiple methods are used for sentiment analysis of tweets in the health care setting. These range from self-produced basic categorizations to more complex and expensive commercial software. The open source and commercial methods were developed on product reviews and generic social media messages. None of these methods have been extensively tested against a corpus of health care messages to check their accuracy. This study suggests that there is a need for an accurate and tested tool for sentiment analysis of tweets, trained using a health care-specific corpus of manually annotated tweets. ©Sunir Gohil, Sabine Vuik, Ara Darzi. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 23.04.2018.
A RESTful API for accessing microbial community data for MG-RAST.
Wilke, Andreas; Bischof, Jared; Harrison, Travis; Brettin, Tom; D'Souza, Mark; Gerlach, Wolfgang; Matthews, Hunter; Paczian, Tobias; Wilkening, Jared; Glass, Elizabeth M; Desai, Narayan; Meyer, Folker
2015-01-01
Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programming interface), we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service.
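A minimal sketch of consuming such a RESTful JSON API from Python is shown below. The base URL follows the address given for the MG-RAST API; the resource path, metagenome identifier, the 'verbosity' parameter, and the printed fields are assumptions to be checked against the API documentation.

```python
# Sketch of a GET request against a RESTful JSON API with `requests`.
# The base URL follows the published MG-RAST API address; the resource path,
# metagenome ID, 'verbosity' parameter, and printed fields are assumptions.
import requests

BASE = "http://api.metagenomics.anl.gov/1"
resource = f"{BASE}/metagenome/mgm0000000.0"         # placeholder metagenome ID
resp = requests.get(resource, params={"verbosity": "minimal"}, timeout=30)
resp.raise_for_status()

record = resp.json()                                 # pipeline objects arrive as JSON
print(record.get("id"), record.get("name"))
```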
A meta-model for computer executable dynamic clinical safety checklists.
Nan, Shan; Van Gorp, Pieter; Lu, Xudong; Kaymak, Uzay; Korsten, Hendrikus; Vdovjak, Richard; Duan, Huilong
2017-12-12
A safety checklist is a type of cognitive tool that reinforces the short-term memory of medical workers, with the purpose of reducing medical errors caused by oversight and ignorance. To facilitate the daily use of safety checklists, computerized systems embedded in the clinical workflow and adapted to the patient context are increasingly developed. However, the current hard-coded approach to implementing checklists in these systems increases the cognitive effort of clinical experts and the coding effort of informaticists. This is due to the lack of a formal representation format that is both understandable by clinical experts and executable by computer programs. We developed a dynamic checklist meta-model with a three-step approach. Dynamic checklist modeling requirements were extracted by performing a domain analysis. Then, existing modeling approaches and tools were investigated with the purpose of reusing these languages. Finally, the meta-model was developed by eliciting domain concepts and their hierarchies. The feasibility of using the meta-model was validated by two case studies, in which the meta-model was mapped to specific modeling languages according to the requirements of the hospitals. Using the proposed meta-model, a comprehensive coronary artery bypass graft peri-operative checklist set and a percutaneous coronary intervention peri-operative checklist set have been developed in a Dutch hospital and a Chinese hospital, respectively. The results show that it is feasible to use the meta-model to facilitate the modeling and execution of dynamic checklists. We proposed a novel meta-model for dynamic checklists with the purpose of facilitating their creation. The meta-model is a framework for reusing existing modeling languages and tools to model dynamic checklists. The feasibility of using the meta-model was validated by implementing a use case in the system.
Conducting systematic reviews of economic evaluations.
Gomersall, Judith Streak; Jadotte, Yuri Tertilus; Xue, Yifan; Lockwood, Suzi; Riddle, Dru; Preda, Alin
2015-09-01
In 2012, a working group was established to review and enhance the Joanna Briggs Institute (JBI) guidance for conducting systematic reviews of evidence from economic evaluations addressing questions about health intervention cost-effectiveness. The objective here is to present the outcomes of the working group. The group conducted three activities to inform the new guidance: a review of the literature on the utility/futility of systematic reviews of economic evaluations and consideration of its implications for updating the existing methodology; an assessment of the critical appraisal tool in the existing guidance against criteria that promote validity in economic evaluation research and against two other commonly used tools; and a workshop. The debate in the literature on the limitations/value of systematic reviews of economic evidence cautions that such reviews are unlikely to generate one-size-fits-all answers to questions about the cost-effectiveness of interventions and their comparators. Informed by this finding, the working group adjusted the framing of the objectives definition in the existing JBI methodology. The shift is away from defining the objective as determining a single cost-effectiveness measure and toward summarizing study estimates of cost-effectiveness and, informed by consideration of the included study characteristics (patient, setting, intervention components, etc.), identifying conditions conducive to lowering costs and maximizing health benefits. The existing critical appraisal tool was retained in the new guidance. The new guidance also recommends that a tool designed specifically for appraising model-based studies be used together with the generic appraisal tool for economic evaluations when evaluating model-based studies. The guidance produced by the group supports reviewers at each step of the systematic review process, which follows the same steps as JBI reviews of other types of evidence. The updated JBI guidance will be useful for researchers wanting to synthesize evidence about economic questions, either as stand-alone reviews or as part of comprehensive or mixed-methods evidence reviews. Although the work of the working group has improved the JBI guidance for systematic reviews of economic evaluations, there are areas where further work is required. These include adjusting the critical appraisal tool to separate questions addressing intervention cost and effectiveness measurement; providing more explicit guidance for assessing the generalizability of findings; and offering a more robust method of evidence synthesis that facilitates achieving the more ambitious review objectives.
PlantTFDB 4.0: toward a central hub for transcription factors and regulatory interactions in plants.
Jin, Jinpu; Tian, Feng; Yang, De-Chang; Meng, Yu-Qi; Kong, Lei; Luo, Jingchu; Gao, Ge
2017-01-04
With the goal of providing a comprehensive, high-quality resource for both plant transcription factors (TFs) and their regulatory interactions with target genes, we upgraded the plant TF database PlantTFDB to version 4.0 (http://planttfdb.cbi.pku.edu.cn/). In the new version, we identified 320,370 TFs from 165 species, presenting a more comprehensive genomic TF repertoire of green plants. Besides updating the pre-existing abundant functional and evolutionary annotation for identified TFs, we generated three new types of annotation which provide more direct clues for investigating the underlying functional mechanisms: (i) a set of high-quality, non-redundant TF binding motifs derived from experiments; (ii) multiple types of regulatory elements identified from high-throughput sequencing data; (iii) regulatory interactions curated from the literature and inferred by combining TF binding motifs and regulatory elements. In addition, we upgraded the previous TF prediction server and set up four novel tools for regulation prediction and functional enrichment analyses. Finally, we set up a novel companion portal, PlantRegMap (http://plantregmap.cbi.pku.edu.cn), for users to access the regulation resource and analysis tools conveniently. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Water Quality Analysis Tool (WQAT)
The purpose of the Water Quality Analysis Tool (WQAT) software is to provide a means of analyzing and producing useful remotely sensed data products for an entire estuary, a particular point or area of interest (POI or AOI) in estuaries, or other water bodies of interest where pre-processed and geographically gridded remotely sensed images are available. A graphical user interface (GUI) was created to enable the user to select and display imagery from a variety of remote sensing data sources. The user can select a date (or date range) and location to extract pixels from the remotely sensed imagery. The GUI is used to obtain all available pixel values (i.e., pixel values from all available bands of all available satellites) for a given location on a given date and time. The resultant data set can be analyzed or saved to a file for future use. The WQAT software provides users with a way to establish algorithms relating remote sensing reflectance (Rrs) to any available in situ parameters, as well as statistical and regression analysis. The combined data sets can be used to improve water quality research and studies. Satellites provide spatially synoptic data at high frequency (daily to weekly). These characteristics are desirable for supplementing existing water quality observations and for providing information for large aquatic ecosystems that are historically under-sampled by field programs. Thus, the Water Quality Assessment Tool (WQAT) software tool was developed to suppo
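The kind of Rrs-to-in-situ regression the tool supports can be sketched with a simple least-squares fit; the band values, parameter, and units below are synthetic placeholders rather than WQAT output.

```python
# Least-squares relationship between one Rrs band and a co-located in situ
# parameter; the values and units are synthetic placeholders, not WQAT output.
import numpy as np
from scipy import stats

rrs_band = np.array([0.012, 0.015, 0.018, 0.022, 0.025, 0.030])   # Rrs, sr^-1 (assumed)
chlorophyll = np.array([2.1, 2.8, 3.5, 4.6, 5.2, 6.4])            # ug/L (assumed)

fit = stats.linregress(rrs_band, chlorophyll)
print(f"slope={fit.slope:.1f}  intercept={fit.intercept:.2f}  r^2={fit.rvalue**2:.3f}")
```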
St-Louis, Etienne; Séguin, Jade; Roizblatt, Daniel; Deckelbaum, Dan Leon; Baird, Robert; Razek, Tarek
2017-03-01
Trauma is a leading cause of mortality and disability in children worldwide. The World Health Organization reports that 95% of all childhood injury deaths occur in low- and middle-income countries (LMIC). Injury scores have been developed to facilitate risk stratification, clinical decision making, and research. Trauma registries in LMIC depend on adapted trauma scores that do not rely on investigations requiring unavailable material or human resources. We sought to review and assess the existing trauma scores used in pediatric patients. Our objective was to determine their extent of use, validity, settings of use, outcome measures, and criticisms. We believe that there is a need for an adapted trauma score developed specifically for pediatric patients in low-resource settings. A systematic review of the literature was conducted to identify and compare existing injury scores used in pediatric patients. We constructed a search strategy in collaboration with a senior hospital librarian. Multiple databases were searched, including Embase, Medline, and the Cochrane Central Register of Controlled Trials. Articles were selected based on predefined inclusion criteria by two reviewers and underwent qualitative analysis. The scores identified are suboptimal for use in pediatric patients in low-resource settings due to various factors, including reliance on precise anatomic diagnosis, physiologic parameters poorly adapted to pediatric patients, or laboratory data with inconsistent accessibility in LMIC. An important gap exists in our ability to simply and reliably estimate injury severity in pediatric patients and predict their associated probability of outcomes in settings where resources are limited. An ideal score should be easy to calculate using point-of-care data that are readily available in LMIC and should be easily adaptable to the specific physiologic variations of different age groups.
Salgia, Ravi; Mambetsariev, Isa; Hewelt, Blake; Achuthan, Srisairam; Li, Haiqing; Poroyko, Valeriy; Wang, Yingyu; Sattler, Martin
2018-05-25
Mathematical cancer models are immensely powerful tools that are based in part on the fractal nature of biological structures, such as the geometry of the lung. Cancers of the lung provide an opportune model for developing and applying algorithms that capture changes and disease phenotypes. We reviewed mathematical models that have been developed for the biological sciences and applied them in the context of small cell lung cancer (SCLC) growth, mutational heterogeneity, and mechanisms of metastasis. The ultimate goal is to capture the stochastic and deterministic nature of this disease, to link this comprehensive set of tools back to its fractal nature, and to provide a platform for accurate biomarker development. These techniques may be particularly useful in the context of drug development research, such as in combination with existing omics approaches. The integration of these tools will be important for further understanding the biology of SCLC and ultimately developing novel therapeutics.
A novel adjuvant to the resident selection process: the hartman value profile.
Cone, Jeffrey D; Byrum, C Stephen; Payne, Wyatt G; Smith, David J
2012-01-01
The goal of resident selection is twofold: (1) select candidates who will be successful residents and eventually successful practitioners and (2) avoid selecting candidates who will be unsuccessful residents and/or eventually unsuccessful practitioners. Traditional tools used to select residents have well-known limitations. The Hartman Value Profile (HVP) is a proven adjuvant tool for predicting future performance in candidates for advanced positions in the corporate setting. No literature exists to indicate use of the HVP for resident selection. The HVP evaluates the structure and dynamics of an individual's value system. Given the potential impact, we implemented its use beginning in 2007 as an adjuvant tool to the traditional selection process. Experience gained from incorporating the HVP into the residency selection process suggests that it may add objectivity and refinement in predicting resident performance. Further evaluation is warranted with longer follow-up times.
A Novel Adjuvant to the Resident Selection Process: the Hartman Value Profile
Cone, Jeffrey D.; Byrum, C. Stephen; Payne, Wyatt G.; Smith, David J.
2012-01-01
Objectives: The goal of resident selection is twofold: (1) select candidates who will be successful residents and eventually successful practitioners and (2) avoid selecting candidates who will be unsuccessful residents and/or eventually unsuccessful practitioners. Traditional tools used to select residents have well-known limitations. The Hartman Value Profile (HVP) is a proven adjuvant tool for predicting future performance in candidates for advanced positions in the corporate setting. Methods: No literature exists to indicate use of the HVP for resident selection. Results: The HVP evaluates the structure and dynamics of an individual's value system. Given the potential impact, we implemented its use beginning in 2007 as an adjuvant tool to the traditional selection process. Conclusions: Experience gained from incorporating the HVP into the residency selection process suggests that it may add objectivity and refinement in predicting resident performance. Further evaluation is warranted with longer follow-up times. PMID:22720114
NASA Astrophysics Data System (ADS)
Demigha, Souâd.
2016-03-01
The paper presents a Case-Based Reasoning Tool for Breast Cancer Knowledge Management to improve breast cancer screening. To develop this tool, we combine concepts and techniques from both Case-Based Reasoning (CBR) and Data Mining (DM). Physicians and radiologists ground their diagnoses in their expertise (past experience) with clinical cases. Case-Based Reasoning is the process of solving new problems based on the solutions of similar past problems structured as cases, and it is well suited to medical use. On the other hand, existing traditional hospital information systems (HIS), radiological information systems (RIS) and picture archiving and communication systems (PACS) do not allow medical information to be managed efficiently because of its complexity and heterogeneity. Data Mining is the process of extracting information from a data set and transforming it into an understandable structure for further use. Combining CBR with Data Mining techniques will facilitate the diagnosis and decision-making of medical experts.
An overview of suite for automated global electronic biosurveillance (SAGES)
NASA Astrophysics Data System (ADS)
Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.
2012-06-01
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.
GEM1: First-year modeling and IT activities for the Global Earthquake Model
NASA Astrophysics Data System (ADS)
Anderson, G.; Giardini, D.; Wiemer, S.
2009-04-01
GEM is a public-private partnership initiated by the Organisation for Economic Cooperation and Development (OECD) to build an independent standard for modeling and communicating earthquake risk worldwide. GEM is aimed at providing authoritative, open information about seismic risk and decision tools to support mitigation. GEM will also raise risk awareness and help post-disaster economic development, with the ultimate goal of reducing the toll of future earthquakes. GEM will provide a unified set of seismic hazard, risk, and loss modeling tools based on a common global IT infrastructure and consensus standards. These tools, systems, and standards will be developed in partnership with organizations around the world, with coordination by the GEM Secretariat and its Secretary General. GEM partners will develop a variety of global components, including a unified earthquake catalog, fault database, and ground motion prediction equations. To ensure broad representation and community acceptance, GEM will include local knowledge in all modeling activities, incorporate existing detailed models where possible, and independently test all resulting tools and models. When completed in five years, GEM will have a versatile, openly accessible modeling environment that can be updated as necessary, and will provide the global standard for seismic hazard, risk, and loss models to government ministers, scientists and engineers, financial institutions, and the public worldwide. GEM is now underway with key support provided by private sponsors (Munich Reinsurance Company, Zurich Financial Services, AIR Worldwide Corporation, and Willis Group Holdings); countries including Belgium, Germany, Italy, Singapore, Switzerland, and Turkey; and groups such as the European Commission. The GEM Secretariat has been selected by the OECD and will be hosted at the Eucentre at the University of Pavia in Italy; the Secretariat is now formalizing the creation of the GEM Foundation. Some of GEM's global components are in the planning stages, such as the development of a unified active fault database and earthquake catalog. The flagship activity of GEM's first year is GEM1, a focused pilot project to develop GEM's first hazard and risk modeling products and initial IT infrastructure, starting in January 2009 and ending in March 2010. GEM1 will provide core capabilities for the present and key knowledge for future development of the full GEM computing environment and product set. We will build GEM1 largely using existing tools and datasets, connected through a unified IT infrastructure, in order to bring GEM's initial capabilities online as rapidly as possible. The Swiss Seismological Service at ETH Zurich is leading the GEM1 effort in cooperation with partners around the world. We anticipate that GEM1's products will include:
• A global compilation of regional seismic source zone models in one or more common representations
• Global synthetic earthquake catalogs for use in hazard calculations
• An initial set of regional and global catalogues for validation
• Global hazard models in map and database forms
• A first compilation of global vulnerabilities and fragilities
• Tools for exposure and loss assessment
• Validation of results and software for existing risk assessment tools to be used in future GEM stages
• Demonstration risk scenarios for target cities
• A first version of the GEM IT infrastructure
All these products will be made freely available to the greatest extent possible.
For more information on GEM and GEM1, please visit http://www.globalquakemodel.org.
Survival models for harvest management of mourning dove populations
Otis, D.L.
2002-01-01
Quantitative models of the relationship between annual survival and harvest rate of migratory game-bird populations are essential to science-based harvest management strategies. I used the best available band-recovery and harvest data for mourning doves (Zenaida macroura) to build a set of models based on different assumptions about compensatory harvest mortality. Although these models suffer from lack of contemporary data, they can be used in development of an initial set of population models that synthesize existing demographic data on a management-unit scale, and serve as a tool for prioritization of population demographic information needs. Credible harvest management plans for mourning dove populations will require a long-term commitment to population monitoring and iterative population analysis.
A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring
NASA Technical Reports Server (NTRS)
Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.
1992-01-01
In this system we develop a fuzzy case-based reasoner that can build a case representation for several past anomalies detected, and we develop case retrieval methods that use fuzzy sets to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model-based algorithms.
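A minimal sketch of the retrieval idea, not the authors' system: each stored case pairs sensor readings with a diagnosed anomaly, and a new case is matched by aggregating per-feature fuzzy similarities. The feature names, spreads, and cases below are hypothetical.

```python
# Fuzzy case retrieval sketch: stored cases hold sensor readings and an
# anomaly label; a new case is matched via aggregated fuzzy similarity.
import numpy as np

def fuzzy_similarity(a, b, spread):
    """Triangular similarity in [0, 1]; 1 when a == b, 0 beyond `spread`."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def retrieve(new_case, case_base, spreads):
    scores = []
    for case in case_base:
        sims = [fuzzy_similarity(new_case[k], case["features"][k], spreads[k])
                for k in new_case]
        scores.append((np.mean(sims), case["anomaly"]))
    return max(scores)  # (best similarity, retrieved anomaly label)

case_base = [
    {"features": {"pressure": 2.1, "temperature": 480.0}, "anomaly": "valve leak"},
    {"features": {"pressure": 3.4, "temperature": 610.0}, "anomaly": "sensor drift"},
]
spreads = {"pressure": 1.0, "temperature": 100.0}   # hypothetical fuzzy spreads
print(retrieve({"pressure": 2.3, "temperature": 500.0}, case_base, spreads))
```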
Radiation Detection Computational Benchmark Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.
2013-09-24
Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected benchmarks and results from various transport codes.
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real world problems. However, there are two major problems inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
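A toy illustration of the underlying representation, not the paper's formal model: a complex fuzzy grade r·e^{iθ} pairs a membership amplitude r in [0, 1] with a phase θ that can encode the periodic (e.g. seasonal) context of the fuzziness; the "soft expert" keys simply attach a parameter, an expert, and an opinion to each grade. All names and values below are hypothetical.

```python
# Complex fuzzy soft expert structure as a dictionary of complex grades.
import cmath

def grade(amplitude, phase):
    # amplitude in [0, 1]; phase in radians encodes the periodic context
    return amplitude * cmath.exp(1j * phase)

cfses = {
    ("demand", "expert_1", "agree"):    grade(0.8, cmath.pi / 6),   # early-season peak
    ("demand", "expert_2", "agree"):    grade(0.6, cmath.pi / 2),   # mid-season peak
    ("demand", "expert_1", "disagree"): grade(0.2, cmath.pi / 6),
}

for key, g in cfses.items():
    print(key, "amplitude=%.2f" % abs(g), "phase=%.2f rad" % cmath.phase(g))
```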
On extending parallelism to serial simulators
NASA Technical Reports Server (NTRS)
Nicol, David; Heidelberger, Philip
1994-01-01
This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permits automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
Kate's Model Verification Tools
NASA Technical Reports Server (NTRS)
Morgan, Steve
1991-01-01
Kennedy Space Center's Knowledge-based Autonomous Test Engineer (KATE) is capable of monitoring electromechanical systems, diagnosing their errors, and even repairing them when they crash. A survey of KATE's developer/modelers revealed that they were already using a sophisticated set of productivity enhancing tools. They did request five more, however, and those make up the body of the information presented here: (1) a transfer function code fitter; (2) a FORTRAN-Lisp translator; (3) three existing structural consistency checkers to aid in syntax checking their modeled device frames; (4) an automated procedure for calibrating knowledge base admittances to protect KATE's hardware mockups from inadvertent hand valve twiddling; and (5) three alternatives for the 'pseudo object', a programming patch that currently apprises KATE's modeling devices of their operational environments.
Eppig, Janan T; Smith, Cynthia L; Blake, Judith A; Ringwald, Martin; Kadin, James A; Richardson, Joel E; Bult, Carol J
2017-01-01
The Mouse Genome Informatics (MGI), resource ( www.informatics.jax.org ) has existed for over 25 years, and over this time its data content, informatics infrastructure, and user interfaces and tools have undergone dramatic changes (Eppig et al., Mamm Genome 26:272-284, 2015). Change has been driven by scientific methodological advances, rapid improvements in computational software, growth in computer hardware capacity, and the ongoing collaborative nature of the mouse genomics community in building resources and sharing data. Here we present an overview of the current data content of MGI, describe its general organization, and provide examples using simple and complex searches, and tools for mining and retrieving sets of data.
NASA Technical Reports Server (NTRS)
Fordyce, Jess
1996-01-01
Work carried out to re-engineer the mission analysis segment of JPL's mission planning ground system architecture is reported on. The aim is to transform the existing software tools, originally developed for specific missions on different support environments, into an integrated, general purpose, multi-mission tool set. The issues considered are: the development of a partnership between software developers and users; the definition of key mission analysis functions; the development of a consensus based architecture; the move towards evolutionary change instead of revolutionary replacement; software reusability, and the minimization of future maintenance costs. The current status and aims of new developments are discussed and specific examples of cost savings and improved productivity are presented.
Gottvall, Maria; Vaez, Marjan
2017-01-01
A high proportion of refugees have been subjected to potentially traumatic experiences (PTEs), including torture. PTEs, and torture in particular, are powerful predictors of mental ill health. This paper reports the development and preliminary validation of a brief refugee trauma checklist applicable for survey studies. Methods: A pool of 232 items was generated based on pre-existing instruments. Conceptualization, item selection and item refinement were conducted based on existing literature and in collaboration with experts. Ten cognitive interviews using a Think Aloud Protocol (TAP) were performed in a clinical setting, and field testing of the proposed checklist was performed in a total sample of n = 137 asylum seekers from Syria. Results: The proposed refugee trauma history checklist (RTHC) consists of 2 × 8 items, concerning PTEs that occurred before and during the respondents’ flight, respectively. Results show low item non-response and adequate psychometric properties. Conclusions: RTHC is a usable tool for providing self-report data on refugee trauma history in surveys of community samples. The core set of included events can be augmented and slight modifications can be applied to RTHC for use also in other refugee populations and settings. PMID:28976937
Kubota, Chika; Okada, Takashi; Aleksic, Branko; Nakamura, Yukako; Kunimoto, Shohko; Morikawa, Mako; Shiino, Tomoko; Tamaji, Ai; Ohoka, Harue; Banno, Naomi; Morita, Tokiko; Murase, Satomi; Goto, Setsuko; Kanai, Atsuko; Masuda, Tomoko; Ando, Masahiko; Ozaki, Norio
2014-01-01
The Edinburgh Postnatal Depression Scale (EPDS) is a widely used screening tool for postpartum depression (PPD). Although the reliability and validity of the EPDS in Japanese have been confirmed and the prevalence of PPD is found to be about the same as in Western countries, the factor structure of the Japanese version of the EPDS has not been elucidated yet. 690 Japanese mothers completed all items of the EPDS at 1 month postpartum. We divided them randomly into two sample sets. The first sample set (n = 345) was used for exploratory factor analysis, and the second sample set (n = 345) was used for confirmatory factor analysis. The result of exploratory factor analysis indicated a three-factor model consisting of anxiety, depression and anhedonia. The results of confirmatory factor analysis suggested that the anxiety and anhedonia factors existed for the EPDS in a sample of Japanese women at 1 month postpartum. The depression factor varied across the models with acceptable fit. Examination of the EPDS scores showed that "anxiety" and "anhedonia" factors exist for the EPDS among postpartum women in Japan, as already reported in Western countries. Further cross-cultural research is needed.
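A minimal sketch of the exploratory step only, on simulated 10-item responses (the EPDS has 10 items); it uses scikit-learn's FactorAnalysis purely for illustration, whereas the study itself would rely on dedicated EFA/CFA software with rotation and fit indices.

```python
# Exploratory three-factor analysis on simulated questionnaire data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 345, 10
responses = rng.integers(0, 4, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)
loadings = fa.components_.T          # items x factors
print(np.round(loadings, 2))         # inspect which items load on which factor
```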
Sodano, M J
1991-01-01
The author describes an innovative "work unit compensation" system that acts as an adjunct to existing personnel payment structures. The process, developed as a win-win alternative for both employees and their institution, includes a reward system for the entire department and ensures a team atmosphere. The Community Medical Center in Toms River, New Jersey developed the plan, which sets four basic goals: to be fair, economical, lasting and transferable (FELT). The plan has proven to be a useful tool in the retention and recruitment of qualified personnel.
Functional specifications for AI software tools for electric power applications. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faught, W.S.
1985-08-01
The principal barrier to the introduction of artificial intelligence (AI) technology to the electric power industry has not been a lack of interest or appropriate problems, for the industry abounds in both. Like most others, however, the electric power industry lacks the personnel - knowledge engineers - with the special combination of training and skills AI programming demands. Conversely, very few AI specialists are conversant with electric power industry problems and applications. The recent availability of sophisticated AI programming environments is doing much to alleviate this shortage. These products provide a set of powerful and usable software tools that enable even non-AI scientists to rapidly develop AI applications. The purpose of this project was to develop functional specifications for programming tools that, when integrated with existing general-purpose knowledge engineering tools, would expedite the production of AI applications for the electric power industry. Twelve potential applications, representative of major problem domains within the nuclear power industry, were analyzed in order to identify those tools that would be of greatest value in application development. Eight tools were specified, including facilities for power plant modeling, data base inquiry, simulation and machine-machine interface.
Yazel-Smith, Lisa G; Pike, Julie; Lynch, Dustin; Moore, Courtney; Haberlin, Kathryn; Taylor, Jennifer; Hannon, Tamara S
2018-05-01
The obesity epidemic has led to an increase in prediabetes in youth, causing a serious public health concern. Education on diabetes risk and initiation of lifestyle change are the primary treatment modalities. There are few existing age-appropriate health education tools to address diabetes prevention for high-risk youth. The objective was to develop age-appropriate health education tools to help youth better understand type 2 diabetes risk factors and the reversibility of risk. Health education tool development took place in five phases: exploration, design, analysis, refinement, and process evaluation. The project resulted in (1) a booklet designed to increase knowledge of risk, (2) a meme generator that mirrors the booklet graphics and allows youth to create their own meme based on their pancreas' current mood, (3) environmental posters for the clinic, and (4) a brief self-assessment that acts as a conversation starter for the health educators. Patients reported high likability and satisfaction with the health education tools, with the majority of patients giving the materials an "A" rating. The process evaluation indicated a high level of fidelity between how the health education tools were intended to be used and how they were actually used in the clinic setting.
Tool making, hand morphology and fossil hominins.
Marzke, Mary W
2013-11-19
Was stone tool making a factor in the evolution of human hand morphology? Is it possible to find evidence in fossil hominin hands for this capability? These questions are being addressed with increasingly sophisticated studies that are testing two hypotheses; (i) that humans have unique patterns of grip and hand movement capabilities compatible with effective stone tool making and use of the tools and, if this is the case, (ii) that there exist unique patterns of morphology in human hands that are consistent with these capabilities. Comparative analyses of human stone tool behaviours and chimpanzee feeding behaviours have revealed a distinctive set of forceful pinch grips by humans that are effective in the control of stones by one hand during manufacture and use of the tools. Comparative dissections, kinematic analyses and biomechanical studies indicate that humans do have a unique pattern of muscle architecture and joint surface form and functions consistent with the derived capabilities. A major remaining challenge is to identify skeletal features that reflect the full morphological pattern, and therefore may serve as clues to fossil hominin manipulative capabilities. Hominin fossils are evaluated for evidence of patterns of derived human grip and stress-accommodation features.
Audio signal analysis for tool wear monitoring in sheet metal stamping
NASA Astrophysics Data System (ADS)
Ubhayaratne, Indivarie; Pereira, Michael P.; Xiang, Yong; Rolfe, Bernard F.
2017-02-01
Stamping tool wear can significantly degrade product quality, and hence, online tool condition monitoring is a timely need in many manufacturing industries. Even though a large amount of research has been conducted employing different sensor signals, there is still an unmet demand for a low-cost, easy-to-set-up condition monitoring system. Audio signal analysis is a simple method that has the potential to meet this demand, but has not been previously used for stamping process monitoring. Hence, this paper studies the existence and the significance of the correlation between emitted sound signals and the wear state of sheet metal stamping tools. The corrupting sources generated by the tooling of the stamping press and surrounding machinery have higher amplitudes than the sound emitted by the stamping operation itself. Therefore, a newly developed semi-blind signal extraction technique was employed as a pre-processing step to mitigate the contribution of these corrupting sources. The spectral analysis results of the raw and extracted signals demonstrate a significant qualitative relationship between wear progression and the emitted sound signature. This study lays the basis for employing low-cost audio signal analysis in the development of a real-time industrial tool condition monitoring system.
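A hedged sketch of the kind of spectral comparison involved, not the paper's processing chain: Welch power spectral densities of audio recorded with new versus worn tooling are compared in a frequency band of interest. The signals below are synthetic placeholders for recorded press audio, and the 3-4 kHz band is an arbitrary example.

```python
# Compare spectral signatures of "new" vs. "worn" tooling audio via Welch PSD.
import numpy as np
from scipy.signal import welch

fs = 44_100                                    # assumed audio sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
audio_new = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.random.randn(t.size)
audio_worn = (np.sin(2 * np.pi * 1_000 * t)
              + 0.5 * np.sin(2 * np.pi * 3_500 * t)   # extra wear-related energy
              + 0.1 * np.random.randn(t.size))

f, psd_new = welch(audio_new, fs=fs, nperseg=4096)
_, psd_worn = welch(audio_worn, fs=fs, nperseg=4096)
band = (f > 3_000) & (f < 4_000)               # band where wear-related energy appears
print("worn/new energy ratio in 3-4 kHz band:",
      psd_worn[band].sum() / psd_new[band].sum())
```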
Tool making, hand morphology and fossil hominins
Marzke, Mary W.
2013-01-01
Was stone tool making a factor in the evolution of human hand morphology? Is it possible to find evidence in fossil hominin hands for this capability? These questions are being addressed with increasingly sophisticated studies that are testing two hypotheses; (i) that humans have unique patterns of grip and hand movement capabilities compatible with effective stone tool making and use of the tools and, if this is the case, (ii) that there exist unique patterns of morphology in human hands that are consistent with these capabilities. Comparative analyses of human stone tool behaviours and chimpanzee feeding behaviours have revealed a distinctive set of forceful pinch grips by humans that are effective in the control of stones by one hand during manufacture and use of the tools. Comparative dissections, kinematic analyses and biomechanical studies indicate that humans do have a unique pattern of muscle architecture and joint surface form and functions consistent with the derived capabilities. A major remaining challenge is to identify skeletal features that reflect the full morphological pattern, and therefore may serve as clues to fossil hominin manipulative capabilities. Hominin fossils are evaluated for evidence of patterns of derived human grip and stress-accommodation features. PMID:24101624
Gene Selection and Cancer Classification: A Rough Sets Based Approach
NASA Astrophysics Data System (ADS)
Sun, Lijun; Miao, Duoqian; Zhang, Hongyun
Identification of informative gene subsets responsible for discerning between available samples of gene expression data is an important task in bioinformatics. Reducts, from rough set theory, correspond to a minimal set of essential genes for discerning samples and are an efficient tool for gene selection. Due to the computational complexity of existing reduct algorithms, feature ranking is usually used as a first step to narrow down the gene space, and top-ranked genes are selected. In this paper, we define a novel criterion for scoring genes based on the between-class difference in expression level and each gene's contribution to classification, and we present an algorithm for generating all possible reducts from the informative genes. The algorithm takes the whole attribute set into account and finds short reducts with a significant reduction in computational complexity. An exploration of this approach on benchmark gene expression data sets demonstrates that it is successful for selecting highly discriminative genes, and the classification accuracy is impressive.
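A toy sketch of the reduct idea only, not the paper's algorithm: on a small discretized expression table, genes are added greedily until every pair of samples from different classes is discerned by at least one selected gene. The samples, gene levels, and class labels are hypothetical.

```python
# Greedy reduct-style gene selection on a tiny discretized expression table.
import itertools

samples = {                       # sample -> (discretized gene levels, class)
    "s1": ((1, 0, 2, 1), "tumor"),
    "s2": ((1, 2, 0, 1), "tumor"),
    "s3": ((0, 0, 2, 2), "normal"),
    "s4": ((1, 1, 0, 2), "normal"),
}
n_genes = 4

def undiscerned_pairs(selected):
    """Different-class sample pairs not separated by any selected gene."""
    pairs = []
    for (a, (ga, ca)), (b, (gb, cb)) in itertools.combinations(samples.items(), 2):
        if ca != cb and all(ga[i] == gb[i] for i in selected):
            pairs.append((a, b))
    return pairs

selected = []
while undiscerned_pairs(selected) and len(selected) < n_genes:
    candidates = [g for g in range(n_genes) if g not in selected]
    # add the gene that leaves the fewest undiscerned pairs
    best = min(candidates, key=lambda g: len(undiscerned_pairs(selected + [g])))
    selected.append(best)

print("approximate reduct (gene indices):", selected)
```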
On asphericity of convex bodies in linear normed spaces.
Faried, Nashat; Morsy, Ahmed; Hussein, Aya M
2018-01-01
In 1960, Dvoretzky proved that in any infinite dimensional Banach space X and for any ϵ > 0 there exists a subspace L of X of arbitrarily large dimension that is ϵ-isometric to a Euclidean space. A main tool in proving this deep result was a set of results concerning the asphericity of convex bodies. In this work, we introduce a simple technique and rigorous formulas to facilitate calculating the asphericity for each set that has a nonempty boundary set with respect to the flat space generated by it. We also give a formula to determine the center and the radius of the smallest ball containing a nonempty nonsingleton set K in a linear normed space, and the center and the radius of the largest ball contained in it, provided that K has a nonempty boundary set with respect to the flat space generated by it. As an application we give lower and upper estimates for the asphericity of infinite and finite cross products of these sets in certain spaces, respectively.
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
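A hedged sketch of the physicochemical-matching idea behind decoy-set construction, not the DEKOIS 2.0 code: candidate decoys are ranked by how closely a few bulk properties match those of an active. It assumes RDKit is installed, and the SMILES strings are arbitrary placeholders; real workflows normalize each property and also consider charge and topology.

```python
# Rank candidate decoys by similarity of simple physicochemical properties.
from rdkit import Chem
from rdkit.Chem import Descriptors

def profile(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return (Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol))

active = "CC(=O)Oc1ccccc1C(=O)O"                    # aspirin, as a stand-in active
candidates = ["c1ccccc1C(=O)O", "CCOC(=O)c1ccccc1O", "CCCCCCCCCCCC"]

ref = profile(active)
def distance(smiles):
    # unweighted absolute difference across the property vector (illustrative)
    return sum(abs(a - b) for a, b in zip(ref, profile(smiles)))

print(sorted(candidates, key=distance))             # best-matched decoy first
```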
Vasan, Ashwin; Mabey, David C; Chaudhri, Simran; Brown Epstein, Helen-Ann; Lawn, Stephen D
2017-04-01
Primary health care workers (HCWs) in low- and middle-income settings (LMIC) often work in challenging conditions in remote, rural areas, in isolation from the rest of the health system and particularly specialist care. Much attention has been given to implementation of interventions to support quality and performance improvement for workers in such settings. However, little is known about the design of such initiatives and which approaches predominate, let alone those that are most effective. We aimed for a broad understanding of what distinguishes different approaches to primary HCW support and performance improvement and to clarify the existing evidence as well as gaps in evidence in order to inform decision-making and design of programs intended to support and improve the performance of health workers in these settings. We systematically searched the literature for articles addressing this topic, and undertook a comparative review to document the principal approaches to performance and quality improvement for primary HCWs in LMIC settings. We identified 40 eligible papers reporting on interventions that we categorized into five different approaches: (1) supervision and supportive supervision; (2) mentoring; (3) tools and aids; (4) quality improvement methods, and (5) coaching. The variety of study designs and quality/performance indicators precluded a formal quantitative data synthesis. The most extensive literature was on supervision, but there was little clarity on what defines the most effective approach to the supervision activities themselves, let alone the design and implementation of supervision programs. The mentoring literature was limited, and largely focused on clinical skills building and educational strategies. Further research on how best to incorporate mentorship into pre-service clinical training, while maintaining its function within the routine health system, is needed. There is insufficient evidence to draw conclusions about coaching in this setting, however a review of the corporate and the business school literature is warranted to identify transferrable approaches. A substantial literature exists on tools, but significant variation in approaches makes comparison challenging. We found examples of effective individual projects and designs in specific settings, but there was a lack of comparative research on tools across approaches or across settings, and no systematic analysis within specific approaches to provide evidence with clear generalizability. Future research should prioritize comparative intervention trials to establish clear global standards for performance and quality improvement initiatives. Such standards will be critical to creating and sustaining a well-functioning health workforce and for global initiatives such as universal health coverage. © The Author 2016. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
Satellite orbital conjunction reports assessing threatening encounters in space (SOCRATES)
NASA Astrophysics Data System (ADS)
Kelso, T. S.; Alfano, S.
2006-05-01
While many satellite operators are aware of the possibility of a collision between their satellite and another object in earth orbit, most seem unaware of the frequency of near misses occurring each day. Until recently, no service existed to advise satellite operators of an impending conjunction of a satellite payload with another satellite, putting the responsibility for determining these occurrences squarely on the satellite operator's shoulders. This problem has been further confounded by the lack of a timely, comprehensive data set of satellite orbital element sets and computationally efficient tools to provide predictions using industry-standard software. As a result, hundreds of conjunctions within 1 km occur each week, with little or no intervention, putting billions of dollars of space hardware at risk, along with their associated missions. As a service to the satellite operator community, the Center for Space Standards & Innovation (CSSI) offers SOCRATES-Satellite Orbital Conjunction Reports Assessing Threatening Encounters in Space. Twice each day, CSSI runs a list of all satellite payloads on orbit against a list of all objects on orbit using the catalog of all unclassified NORAD two-line element sets to look for conjunctions over the next seven days. The runs are made using STK/CAT-Satellite Tool Kit's Conjunction Analysis Tools-together with the NORAD SGP4 propagator in STK. This paper will discuss how SOCRATES works and how it can help satellite operators avoid undesired close approaches through advanced mission planning.
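A hedged sketch of the underlying screening computation, written against the open-source sgp4 Python package rather than the STK tools SOCRATES actually uses: two objects are propagated from their two-line element sets and the minimum separation over a week is reported. The first TLE is the ISS example from the sgp4 package documentation; the second is an artificial placeholder, not a real element set.

```python
# Brute-force pairwise close-approach screening with SGP4 propagation.
import numpy as np
from sgp4.api import Satrec, jday

tle_a = ("1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997",
         "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482")
# artificial second object: same epoch, different (placeholder) orbit
tle_b = ("1 40000U 15001A   19343.69339541  .00001764  00000-0  40967-4 0  9997",
         "2 40000  97.6439 211.2001 0007417  17.6667  85.6398 14.50103472202482")

sat_a = Satrec.twoline2rv(*tle_a)
sat_b = Satrec.twoline2rv(*tle_b)

jd0, fr0 = jday(2019, 12, 9, 0, 0, 0)          # start of the screening window
min_km = float("inf")
for minute in range(7 * 24 * 60):              # scan one week at 1-minute steps
    fr = fr0 + minute / 1440.0
    ea, ra, _ = sat_a.sgp4(jd0, fr)
    eb, rb, _ = sat_b.sgp4(jd0, fr)
    if ea == 0 and eb == 0:                    # error code 0 means a valid state
        min_km = min(min_km, float(np.linalg.norm(np.subtract(ra, rb))))

print("closest approach over the week: %.1f km" % min_km)
```

A production screening tool would refine around coarse minima and use covariance information rather than a fixed distance threshold; this sketch only illustrates the propagation-and-compare loop.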
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, J.M.; Arnett, R.C.; Neupauer, R.M.
This report documents a study conducted to develop a regional groundwater flow model for the Eastern Snake River Plain Aquifer in the area of the Idaho National Engineering Laboratory. The model was developed to support Waste Area Group 10, Operable Unit 10-04 groundwater flow and transport studies. The products of this study are this report and a set of computational tools designed to numerically model the regional groundwater flow in the Eastern Snake River Plain aquifer. The objective of developing the current model was to create a tool for defining the regional groundwater flow at the INEL. The model was developed to (a) support future transport modeling for WAG 10-04 by providing the regional groundwater flow information needed for the WAG 10-04 risk assessment, (b) define the regional groundwater flow setting for modeling groundwater contaminant transport at the scale of the individual WAGs, (c) provide a tool for improving the understanding of the groundwater flow system below the INEL, and (d) consolidate the existing regional groundwater modeling information into one usable model. The current model is appropriate for defining the regional flow setting for flow submodels as well as hypothesis testing to better understand the regional groundwater flow in the area of the INEL. The scale of the submodels must be chosen based on the accuracy required for the study.
Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool
2013-01-01
Background System-wide profiling of genes and proteins in mammalian cells produces lists of differentially expressed genes/proteins that need to be further analyzed for their collective functions in order to extract new knowledge. Once unbiased lists of genes or proteins are generated from such experiments, these lists are used as input for computing enrichment with existing lists created from prior knowledge organized into gene-set libraries. While many enrichment analysis tools and gene-set library databases have been developed, there is still room for improvement. Results Here, we present Enrichr, an integrative web-based and mobile software application that includes new gene-set libraries, an alternative approach to rank enriched terms, and various interactive visualization approaches to display enrichment results using the JavaScript library, Data Driven Documents (D3). The software can also be embedded into any tool that performs gene list analysis. We applied Enrichr to analyze nine cancer cell lines by comparing their enrichment signatures to the enrichment signatures of matched normal tissues. We observed a common pattern of upregulation of the polycomb group PRC2 and enrichment for the histone mark H3K27me3 in many cancer cell lines, as well as alterations in Toll-like receptor and interleukin signaling in K562 cells when compared with normal myeloid CD33+ cells. Such analyses provide global visualization of critical differences between normal tissues and cancer cell lines but can be applied to many other scenarios. Conclusions Enrichr is an easy-to-use, intuitive, web-based enrichment analysis tool providing various types of visualization summaries of collective functions of gene lists. Enrichr is open source and freely available online at: http://amp.pharm.mssm.edu/Enrichr. PMID:23586463
Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool.
Chen, Edward Y; Tan, Christopher M; Kou, Yan; Duan, Qiaonan; Wang, Zichen; Meirelles, Gabriela Vaz; Clark, Neil R; Ma'ayan, Avi
2013-04-15
System-wide profiling of genes and proteins in mammalian cells produces lists of differentially expressed genes/proteins that need to be further analyzed for their collective functions in order to extract new knowledge. Once unbiased lists of genes or proteins are generated from such experiments, these lists are used as input for computing enrichment with existing lists created from prior knowledge organized into gene-set libraries. While many enrichment analysis tools and gene-set library databases have been developed, there is still room for improvement. Here, we present Enrichr, an integrative web-based and mobile software application that includes new gene-set libraries, an alternative approach to rank enriched terms, and various interactive visualization approaches to display enrichment results using the JavaScript library, Data Driven Documents (D3). The software can also be embedded into any tool that performs gene list analysis. We applied Enrichr to analyze nine cancer cell lines by comparing their enrichment signatures to the enrichment signatures of matched normal tissues. We observed a common pattern of upregulation of the polycomb group PRC2 and enrichment for the histone mark H3K27me3 in many cancer cell lines, as well as alterations in Toll-like receptor and interleukin signaling in K562 cells when compared with normal myeloid CD33+ cells. Such analyses provide global visualization of critical differences between normal tissues and cancer cell lines but can be applied to many other scenarios. Enrichr is an easy-to-use, intuitive, web-based enrichment analysis tool providing various types of visualization summaries of collective functions of gene lists. Enrichr is open source and freely available online at: http://amp.pharm.mssm.edu/Enrichr.
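A minimal sketch of the core computation behind gene-list enrichment in general (Enrichr additionally applies its own ranking corrections): a Fisher's exact test of the overlap between an input gene list and one library gene set against a background. The gene names, set contents, and background size below are made up.

```python
# Fisher's exact test for overlap between a gene list and a library gene set.
from scipy.stats import fisher_exact

background_size = 20_000
input_genes = {"EZH2", "SUZ12", "EED", "TP53", "MYC"}
library_set = {"EZH2", "SUZ12", "EED", "JARID2", "RBBP4"}   # e.g. a "PRC2" set

overlap = len(input_genes & library_set)
table = [
    [overlap, len(input_genes) - overlap],
    [len(library_set) - overlap,
     background_size - len(input_genes) - len(library_set) + overlap],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"overlap={overlap}, p={p_value:.2e}")
```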
Walker, Gemma M; Carter, Tim; Aubeeluck, Aimee; Witchell, Miranda; Coad, Jane
2018-01-01
Introduction Currently, no standardised, evidence-based assessment tool for assessing immediate self-harm and suicide in acute paediatric inpatient settings exists. Aim The aim of this study is to develop and test the psychometric properties of an assessment tool that identifies immediate risk of self-harm and suicide in children and young people (10–19 years) in acute paediatric hospital settings. Methods and analysis Development phase: This phase involved a scoping review of the literature to identify and extract items from previously published suicide and self-harm risk assessment scales. Using a modified electronic Delphi approach, these items will then be rated according to their relevance for assessment of immediate suicide or self-harm risk by expert professionals. Inclusion of items will be determined by 65%–70% consensus between raters. Subsequently, a panel of expert members will convene to determine the face validity, appropriate phrasing, item order and response format for the finalised items. Psychometric testing phase: The finalised items will be tested for validity and reliability through a multicentre, psychometric evaluation. Psychometric testing will be undertaken to determine the following: internal consistency, inter-rater reliability, convergent, divergent validity and concurrent validity. Ethics and dissemination Ethical approval was provided by the National Health Service East Midlands—Derby Research Ethics Committee (17/EM/0347) and full governance clearance received by the Health Research Authority and local participating sites. Findings from this study will be disseminated to professionals and the public via peer-reviewed journal publications, popular social media and conference presentations. PMID:29654046
Evaluation of PHI Hunter in Natural Language Processing Research.
Redd, Andrew; Pickard, Steve; Meystre, Stephane; Scehnet, Jeffrey; Bolton, Dan; Heavirland, Julia; Weaver, Allison Lynn; Hope, Carol; Garvin, Jennifer Hornung
2015-01-01
We introduce and evaluate a new, easily accessible tool using a common statistical analysis and business analytics software suite, SAS, which can be programmed to remove specific protected health information (PHI) from a text document. Removal of PHI is important because the quantity of text documents used for research with natural language processing (NLP) is increasing. When using existing data for research, an investigator must remove all PHI not needed for the research to comply with human subjects' right to privacy. This process is similar, but not identical, to de-identification of a given set of documents. PHI Hunter removes PHI from free-form text. It is a set of rules to identify and remove patterns in text. PHI Hunter was applied to 473 Department of Veterans Affairs (VA) text documents randomly drawn from a research corpus stored as unstructured text in VA files. PHI Hunter performed well with PHI in the form of identification numbers such as Social Security numbers, phone numbers, and medical record numbers. The most commonly missed PHI items were names and locations. Incorrect removal of information occurred with text that looked like identification numbers. PHI Hunter fills a niche role that is related to but not equal to the role of de-identification tools. It gives research staff a tool to reasonably increase patient privacy. It performs well for highly sensitive PHI categories that are rarely used in research, but still shows possible areas for improvement. More development for patterns of text and linked demographic tables from electronic health records (EHRs) would improve the program so that more precise identifiable information can be removed. PHI Hunter is an accessible tool that can flexibly remove PHI not needed for research. If it can be tailored to the specific data set via linked demographic tables, its performance will improve in each new document set.
Evaluation of PHI Hunter in Natural Language Processing Research
Redd, Andrew; Pickard, Steve; Meystre, Stephane; Scehnet, Jeffrey; Bolton, Dan; Heavirland, Julia; Weaver, Allison Lynn; Hope, Carol; Garvin, Jennifer Hornung
2015-01-01
Objectives We introduce and evaluate a new, easily accessible tool using a common statistical analysis and business analytics software suite, SAS, which can be programmed to remove specific protected health information (PHI) from a text document. Removal of PHI is important because the quantity of text documents used for research with natural language processing (NLP) is increasing. When using existing data for research, an investigator must remove all PHI not needed for the research to comply with human subjects’ right to privacy. This process is similar, but not identical, to de-identification of a given set of documents. Materials and methods PHI Hunter removes PHI from free-form text. It is a set of rules to identify and remove patterns in text. PHI Hunter was applied to 473 Department of Veterans Affairs (VA) text documents randomly drawn from a research corpus stored as unstructured text in VA files. Results PHI Hunter performed well with PHI in the form of identification numbers such as Social Security numbers, phone numbers, and medical record numbers. The most commonly missed PHI items were names and locations. Incorrect removal of information occurred with text that looked like identification numbers. Discussion PHI Hunter fills a niche role that is related to but not equal to the role of de-identification tools. It gives research staff a tool to reasonably increase patient privacy. It performs well for highly sensitive PHI categories that are rarely used in research, but still shows possible areas for improvement. More development for patterns of text and linked demographic tables from electronic health records (EHRs) would improve the program so that more precise identifiable information can be removed. Conclusions PHI Hunter is an accessible tool that can flexibly remove PHI not needed for research. If it can be tailored to the specific data set via linked demographic tables, its performance will improve in each new document set. PMID:26807078
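An illustrative rule-based scrubbing sketch in the spirit of PHI Hunter, which itself is implemented in SAS; the patterns below cover only a few identifier formats, the clinical note is fabricated, and this is not a complete de-identification solution.

```python
# Pattern-based removal of a few PHI identifier formats from free text.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def scrub(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text

note = "Pt MRN: 00123456, call (555) 123-4567. SSN 123-45-6789 on file."
print(scrub(note))
```

As the abstract notes, identifier-like patterns are the easy part; names and locations typically require dictionaries or statistical models, for example linked demographic tables as suggested by the authors.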
A Java-based tool for the design of classification microarrays.
Meng, Da; Broschat, Shira L; Call, Douglas R
2008-08-04
Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays (and mixed-plasmid microarrays in particular), it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights generated using stepwise discriminant analysis can be stored for analysis of subsequent experimental data. Additionally, PLASMID can be used to construct virtual microarrays with genomes from public databases, which can then be used to identify an optimal set of probes.
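A hedged sketch of the two ideas PLASMID combines, written in Python rather than Java and not using the tool's own methods: redundant probes are collapsed by clustering their hybridization profiles, and the retained representatives are ranked by class separability (an ANOVA F-score here, standing in for the paper's stepwise discriminant analysis). The data matrix and labels are synthetic placeholders.

```python
# Probe redundancy reduction by hierarchical clustering, then separability ranking.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(1)
n_samples, n_probes = 30, 40
X = rng.normal(size=(n_samples, n_probes))            # samples x probe signals
y = np.repeat([0, 1, 2], 10)                          # three plasmid/strain groups
X[y == 1, :5] += 2.0                                  # make a few probes informative
X[y == 2, 5:10] -= 2.0

# 1) cluster probes on correlation distance and keep one probe per cluster
Z = linkage(X.T, method="average", metric="correlation")
clusters = fcluster(Z, t=0.3, criterion="distance")

# 2) rank the retained probes by how well they separate the sample groups
F, _ = f_classif(X, y)
representatives = {c: max(np.where(clusters == c)[0], key=lambda i: F[i])
                   for c in np.unique(clusters)}
ranked = sorted(representatives.values(), key=lambda i: -F[i])
print("selected probes (best first):", ranked[:10])
```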
GSAC - Generic Seismic Application Computing
NASA Astrophysics Data System (ADS)
Herrmann, R. B.; Ammon, C. J.; Koper, K. D.
2004-12-01
With the success of the IRIS data management center, the use of large data sets in seismological research has become common. Such data sets, and especially the significantly larger data sets expected from EarthScope, present challenges for analysis with existing tools developed over the last 30 years. For much of the community, the primary format for data analysis is the Seismic Analysis Code (SAC) format developed by Lawrence Livermore National Laboratory. Although somewhat restrictive in meta-data storage, the simplicity and stability of the format have established it as an important component of seismological research. Tools for working with SAC files fall into two categories: custom research-quality processing codes, and shared display and processing tools such as SAC2000 and MatSeis, which were developed primarily for the needs of individual seismic research groups. While the current graphics display and platform dependence of SAC2000 may be resolved if the source code is released, the code complexity and the lack of large-data set analysis or even introductory tutorials could preclude code improvements and development of expertise in its use. We believe that there is a place for new, especially open source, tools. The GSAC effort is an approach that focuses on ease of use, computational speed, transportability, rapid addition of new features and openness so that new and advanced students, researchers and instructors can quickly browse and process large data sets. We highlight several approaches toward data processing under this model. gsac, part of the Computer Programs in Seismology 3.30 distribution, has much of the functionality of SAC2000 and works on UNIX/LINUX/MacOS-X/Windows (CYGWIN). This is completely programmed in C from scratch, is small, fast, and easy to maintain and extend. It is command line based and is easily included within shell processing scripts. PySAC is a set of Python functions that allow easy access to SAC files and enable efficient manipulation of SAC files under a variety of operating systems. PySAC has proven to be valuable in organizing large data sets. An array processing package includes standard beamforming algorithms and a search-based method for inference of slowness vectors. The search results can be visualized using GMT scripts output by the C programs, and the resulting snapshots can be combined into an animation of the time evolution of the 2D slowness field.
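A hedged sketch of scripted, batch-style SAC processing using ObsPy, one of the open tools in this space; this is illustrative and is not a gsac or PySAC workflow. The directory pattern and filter band are placeholders, and the loop simply does nothing if no matching files are present.

```python
# Batch read, detrend, and bandpass-filter a directory of SAC files with ObsPy.
import glob
from obspy import read

for path in glob.glob("data/*.sac"):          # hypothetical directory of SAC files
    st = read(path)                           # returns an ObsPy Stream
    st.detrend("demean")
    st.filter("bandpass", freqmin=0.02, freqmax=0.1, corners=4, zerophase=True)
    tr = st[0]
    print(path, tr.stats.station, tr.stats.npts, tr.data.max())
```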
Bringing cancer care to the poor: experiences from Rwanda.
Shulman, Lawrence N; Mpunga, Tharcisse; Tapela, Neo; Wagner, Claire M; Fadelu, Temidayo; Binagwaho, Agnes
2014-12-01
The knowledge and tools to cure many cancer patients exist in developed countries but are unavailable to many who live in the developing world, resulting in unnecessary loss of life. Bringing cancer care to the poor, particularly to low-income countries, is a great challenge, but it is one that we believe can be met through partnerships, careful planning and a set of guiding principles. Alongside vaccinations, screening and other cancer-prevention efforts, treatment must be a central component of any cancer programme from the start. It is also critical that these programmes include implementation research to determine programmatic efficacy, where gaps in care still exist and where improvements can be made. This article discusses these issues using the example of Rwanda's expanding national cancer programme.
Fostering Team Awareness in Earth System Modeling Communities
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Lawson, A.; Strong, S.
2009-12-01
Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting data from an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendations for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
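A toy sketch of the kind of repository mining such a tool performs, not TracSNAP itself: commit records are used to build a network linking developers who have touched the same model files, which supports both expertise lookup and collaboration visualization. The commit records, file paths, and author names are hypothetical.

```python
# Build a developer collaboration network from shared-file commit history.
import itertools
from collections import defaultdict
import networkx as nx

commits = [
    {"author": "alice", "files": ["ocean/mixing.F90", "ocean/advect.F90"]},
    {"author": "bob",   "files": ["ocean/mixing.F90", "ice/thermo.F90"]},
    {"author": "carol", "files": ["ice/thermo.F90"]},
]

touched = defaultdict(set)                  # file -> set of authors
for c in commits:
    for f in c["files"]:
        touched[f].add(c["author"])

G = nx.Graph()
for f, authors in touched.items():
    for a, b in itertools.combinations(sorted(authors), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)      # shared-file collaboration strength

print("collaboration edges:", list(G.edges(data=True)))
print("experts for ocean/mixing.F90:", sorted(touched["ocean/mixing.F90"]))
```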
MEvoLib v1.0: the first molecular evolution library for Python.
Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo
2016-10-28
Molecular evolution studies involve many different hard computational problems solved, in most cases, with heuristic algorithms that provide a nearly optimal solution. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with already existing bioinformatics libraries, MEvoLib is focused on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. The gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g. GenBank's features data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize the download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope with even large-scale scenarios. MEvoLib is the first library for Python designed to facilitate molecular evolution research for both expert and novice users. Its unique interface for each common task comprises several tools with their most used parameterizations. It also includes a method that takes advantage of biological knowledge to improve the gene partitioning of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.
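A hedged sketch of the kind of NCBI fetching such a workflow wraps, written directly against Biopython's Entrez and SeqIO modules rather than MEvoLib's own interface; the accession is the public human mitochondrial reference sequence, used here only as an example, and NCBI requires a real contact email.

```python
# Fetch one GenBank record from NCBI and summarize it with Biopython.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"            # required by NCBI usage policy

handle = Entrez.efetch(db="nucleotide", id="NC_012920.1",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp,", len(record.features), "features")
```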
NASA Technical Reports Server (NTRS)
Xu, Xidong; Ulrey, Mike L.; Brown, John A.; Mast, James; Lapis, Mary B.
2013-01-01
NextGen is a complex socio-technical system and, in many ways, it is expected to be more complex than the current system. It is vital to assess the safety impact of the NextGen elements (technologies, systems, and procedures) in a rigorous and systematic way and to ensure that they do not compromise safety. In this study, the NextGen elements in the form of Operational Improvements (OIs), Enablers, Research Activities, Development Activities, and Policy Issues were identified. The overall hazard situation in NextGen was outlined; a high-level hazard analysis was conducted with respect to multiple elements in a representative NextGen OI known as OI-0349 (Automation Support for Separation Management); and the hazards resulting from the highly dynamic complexity involved in an OI-0349 scenario were illustrated. A selected but representative set of the existing safety methods, tools, processes, and regulations was then reviewed and analyzed regarding whether they are sufficient to assess safety in the elements of that OI and ensure that safety will not be compromised and whether they might incur intolerably high costs.
Competition Between Transients in the Rate of Approach to a Fixed Point
NASA Astrophysics Data System (ADS)
Day, Judy; Rubin, Jonathan E.; Chow, Carson C.
2009-01-01
The goal of this paper is to provide and apply tools for analyzing a specific aspect of transient dynamics not covered by previous theory. The question we address is whether one component of a perturbed solution to a system of differential equations can overtake the corresponding component of a reference solution as both converge to a stable node at the origin, given that the perturbed solution was initially farther away and that both solutions are nonnegative for all time. We call this phenomenon tolerance, for its relation to a biological effect. We show using geometric arguments that tolerance will exist in generic linear systems with a complete set of eigenvectors and in excitable nonlinear systems. We also define a notion of inhibition that may constrain the regions in phase space where the possibility of tolerance arises in general systems. However, these general existence theorems do not yield an assessment of tolerance for specific initial conditions. To address that issue, we develop some analytical tools for determining if particular perturbed and reference solution initial conditions will exhibit tolerance.
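A numerical illustration of the question (not an example from the paper): for a linear system with a stable node, a perturbed solution that starts farther from the origin, with both solutions nonnegative, has its first component overtaken by the reference solution in finite time. The matrix and initial conditions below are arbitrary choices that happen to exhibit the effect.

```python
# Check whether a perturbed trajectory's component drops below the reference's.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, -0.5],
              [0.0, -3.0]])                 # stable node: eigenvalues -1 and -3

def rhs(t, x):
    return A @ x

t_eval = np.linspace(0.0, 10.0, 2001)
ref = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_eval).y
pert = solve_ivp(rhs, (0, 10), [1.2, 1.6], t_eval=t_eval).y   # starts farther out

overtake = np.where(pert[0] < ref[0])[0]    # indices where component 1 is overtaken
if overtake.size:
    print("tolerance observed in component 1 at t ≈ %.2f" % t_eval[overtake[0]])
else:
    print("no overtaking within the simulated window")
```

For these values the crossing occurs near t ≈ 0.35, which can be verified analytically since the system is upper triangular.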
Investigating Methods for Serving Visualizations of Vertical Profiles
NASA Astrophysics Data System (ADS)
Roberts, J. T.; Cechini, M. F.; Lanjewar, K.; Rodriguez, J.; Boller, R. A.; Baynes, K.
2017-12-01
Several geospatial web servers, web service standards, and mapping clients exist for the visualization of two-dimensional raster and vector-based Earth science data products. However, data products with a vertical component (i.e., vertical profiles) do not have the same mature set of technologies and pose a greater technical challenge when it comes to visualizations. There are a variety of tools and proposed standards, but no obvious solution that can handle the variety of visualizations found with vertical profiles. An effort is being led by members of the NASA Global Imagery Browse Services (GIBS) team to gather a list of technologies relevant to existing vertical profile data products and user stories. The goal is to find a subset of technologies, standards, and tools that can be used to build publicly accessible web services that can handle the greatest number of use cases for the widest audience possible. This presentation will describe results of the investigation and offer directions for moving forward with building a system that is capable of effectively and efficiently serving visualizations of vertical profiles.
ASDF - A Modern Data Format for Seismology
NASA Astrophysics Data System (ADS)
Krischer, Lion; Smith, James; Lei, Wenjie; Lefebvre, Matthieu; Ruan, Youyi; Sales de Andrade, Elliot; Podhorszki, Norbert; Bozdag, Ebru; Tromp, Jeroen
2017-04-01
Seismology as a science is driven by observing and understanding data and it is thus vital to make this as easy and accessible as possible. The growing volume of freely available data coupled with ever expanding computational power enables scientists to take on new and bigger problems. This evolution is in part hindered because existing data formats have not been designed with it in mind. We present ASDF (http://seismic-data.org), the Adaptable Seismic Data Format, a novel, modern, and especially practical data format for all branches of seismology with particular focus on how it is incorporated into seismic full waveform inversion workflows. The format aims to solve five key issues: Efficiency: fast I/O operations, especially in high-performance computing environments, in particular by limiting the total number of files. Data organization: different types of data are needed for a variety of tasks, which results in ad hoc data organization and formats that are hard to maintain, integrate, reproduce, and exchange. Data exchange: we want to exchange complex and complete data sets. Reproducibility: often simply lacking, yet crucial to advancing our science. Mining, visualization, and understanding of data: as data volumes grow, new and more capable techniques to query and visualize large datasets are needed. ASDF tackles these by defining a structure on top of HDF5, reusing as many existing standards (QuakeML, StationXML, PROV) as possible. An essential trait of ASDF is that it empowers the construction of completely self-describing data sets including waveform, station, and event data together with non-waveform data and a provenance description of everything. This, for example, enables for the first time the proper archival and exchange of processed or synthetic waveforms. To aid community adoption we developed mature tools in Python as well as in C and Fortran. Additionally we provide a formal definition of the format, a validation tool, and integration into widely used tools like ObsPy (http://obspy.org), SPECFEM GLOBE (https://geodynamics.org/cig/software/specfem3d_globe/), and Salvus (http://salvus.io).
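A hedged sketch of assembling a self-describing ASDF file with the pyasdf package (assuming pyasdf and ObsPy are installed); ObsPy's bundled example waveform, inventory, and catalog stand in for real data, and the output file name is a placeholder.

```python
# Build an ASDF file containing waveforms plus station and event metadata.
import pyasdf
from obspy import read, read_events, read_inventory

ds = pyasdf.ASDFDataSet("example.h5", compression="gzip-3")

ds.add_quakeml(read_events())                  # event metadata (QuakeML)
ds.add_stationxml(read_inventory())            # station metadata (StationXML)
ds.add_waveforms(read(), tag="raw_recording")  # example waveforms

# waveforms, station, and event information now travel in a single file
print(ds.waveforms.list())
```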
Evolution inclusions governed by the difference of two subdifferentials in reflexive Banach spaces
NASA Astrophysics Data System (ADS)
Akagi, Goro; Ôtani, Mitsuharu
The existence of strong solutions of the Cauchy problem for the evolution equation du(t)/dt + ∂ϕ1(u(t)) - ∂ϕ2(u(t)) ∋ f(t) is considered in a real reflexive Banach space V, where ∂ϕ1 and ∂ϕ2 are subdifferential operators from V into its dual V*. The study of this type of problem has been done by several authors in the Hilbert space setting. The scope of our study is extended to the V-V* setting. The main tool employed here is a certain approximation argument in a Hilbert space, and for this purpose we need to assume that there exists a Hilbert space H such that V ⊂ H ≡ H* ⊂ V* with densely defined continuous injections. The applicability of our abstract framework is exemplified by discussing the existence of solutions for the nonlinear heat equation: u_t(x,t) - Δ_p u(x,t) - |u|^{q-2}u(x,t) = f(x,t), x ∈ Ω, t > 0, u|_{∂Ω} = 0, where Ω is a bounded domain in R^N. In particular, the existence of a local (in time) weak solution is shown under the subcritical growth condition q
Becnel, Lauren B; Ochsner, Scott A; Darlington, Yolanda F; McOwiti, Apollo; Kankanamge, Wasula H; Dehart, Michael; Naumov, Alexey; McKenna, Neil J
2017-04-25
We previously developed a web tool, Transcriptomine, to explore expression profiling data sets involving small-molecule or genetic manipulations of nuclear receptor signaling pathways. We describe advances in biocuration, query interface design, and data visualization that enhance the discovery of uncharacterized biology in these pathways using this tool. Transcriptomine currently contains about 45 million data points encompassing more than 2000 experiments in a reference library of nearly 550 data sets retrieved from public archives and systematically curated. To make the underlying data points more accessible to bench biologists, we classified experimental small molecules and gene manipulations into signaling pathways and experimental tissues and cell lines into physiological systems and organs. Incorporation of these mappings into Transcriptomine enables the user to readily evaluate tissue-specific regulation of gene expression by nuclear receptor signaling pathways. Data points from animal and cell model experiments and from clinical data sets elucidate the roles of nuclear receptor pathways in gene expression events accompanying various normal and pathological cellular processes. In addition, data sets targeting non-nuclear receptor signaling pathways highlight transcriptional cross-talk between nuclear receptors and other signaling pathways. We demonstrate with specific examples how data points that exist in isolation in individual data sets validate each other when connected and made accessible to the user in a single interface. In summary, Transcriptomine allows bench biologists to routinely develop research hypotheses, validate experimental data, or model relationships between signaling pathways, genes, and tissues. Copyright © 2017, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Varghese, Nishad G.
Knowledge management (KM) exists in various forms throughout organizations. Process documentation, training courses, and experience sharing are examples of KM activities performed daily. The goal of KM systems (KMS) is to provide a tool set which serves to standardize the creation, sharing, and acquisition of business critical information. Existing literature provides numerous examples of targeted evaluations of KMS, focusing on specific system attributes. This research serves to bridge the targeted evaluations with an industry-specific, holistic approach. The user preferences of aerospace employees in engineering and engineering-related fields were compared to profiles of existing aerospace KMS based on three attribute categories: technical features, system administration, and user experience. The results indicated there is a statistically significant difference between aerospace user preferences and existing profiles in the user experience attribute category, but no statistically significant difference in the technical features and system administration attribute categories. Additional analysis indicated in-house developed systems exhibit higher technical features and user experience ratings than commercial-off-the-shelf (COTS) systems.
A Clinical Tool for the Prediction of Venous Thromboembolism in Pediatric Trauma Patients.
Connelly, Christopher R; Laird, Amy; Barton, Jeffrey S; Fischer, Peter E; Krishnaswami, Sanjay; Schreiber, Martin A; Zonies, David H; Watters, Jennifer M
2016-01-01
Although rare, the incidence of venous thromboembolism (VTE) in pediatric trauma patients is increasing, and the consequences of VTE in children are significant. Studies have demonstrated increasing VTE risk in older pediatric trauma patients and improved VTE rates with institutional interventions. While national evidence-based guidelines for VTE screening and prevention are in place for adults, none exist for pediatric patients, to our knowledge. To develop a risk prediction calculator for VTE in children admitted to the hospital after traumatic injury to assist efforts in developing screening and prophylaxis guidelines for this population. Retrospective review of 536,423 pediatric patients 0 to 17 years old using the National Trauma Data Bank from January 1, 2007, to December 31, 2012. Five mixed-effects logistic regression models of varying complexity were fit on a training data set. Model validity was determined by comparison of the area under the receiver operating characteristic curve (AUROC) for the training and validation data sets from the original model fit. A clinical tool to predict the risk of VTE based on individual patient clinical characteristics was developed from the optimal model. Diagnosis of VTE during hospital admission. Venous thromboembolism was diagnosed in 1141 of 536,423 children (overall rate, 0.2%). The AUROCs in the training data set were high (range, 0.873-0.946) for each model, with minimal AUROC attenuation in the validation data set. A prediction tool was developed from a model that achieved a balance of high performance (AUROCs, 0.945 and 0.932 in the training and validation data sets, respectively; P = .048) and parsimony. Points are assigned to each variable considered (Glasgow Coma Scale score, age, sex, intensive care unit admission, intubation, transfusion of blood products, central venous catheter placement, presence of pelvic or lower extremity fractures, and major surgery), and the points total is converted to a VTE risk score. The predicted risk of VTE ranged from 0.0% to 14.4%. We developed a simple clinical tool to predict the risk of developing VTE in pediatric trauma patients. It is based on a model created using a large national database and was internally validated. The clinical tool requires external validation but provides an initial step toward the development of the specific VTE protocols for pediatric trauma patients.
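The mechanics of such a points-based tool can be illustrated with a short sketch: clinical flags map to points, the points are summed, and the total is converted to a predicted risk. The point values and the points-to-risk mapping below are hypothetical placeholders, not the weights of the published pediatric VTE calculator.

```python
# Illustrative sketch of a points-based risk calculator. The point values and
# the points-to-risk mapping are hypothetical placeholders, not the weights of
# the published pediatric VTE prediction tool.
HYPOTHETICAL_POINTS = {
    "gcs_le_8": 3,
    "age_ge_13": 2,
    "icu_admission": 2,
    "intubation": 2,
    "blood_product_transfusion": 1,
    "central_venous_catheter": 2,
    "pelvic_or_lower_extremity_fracture": 1,
    "major_surgery": 1,
}

# Hypothetical conversion from a points total to a predicted VTE risk (%).
RISK_BY_TOTAL = {0: 0.0, 2: 0.1, 4: 0.5, 6: 1.5, 8: 4.0, 10: 9.0, 12: 14.4}

def vte_risk(patient_flags):
    """Sum points for the flags present and map the total to a risk estimate."""
    total = sum(HYPOTHETICAL_POINTS[flag] for flag in patient_flags)
    # Use the largest tabulated total that does not exceed the patient's total.
    key = max(k for k in RISK_BY_TOTAL if k <= total)
    return total, RISK_BY_TOTAL[key]

print(vte_risk({"icu_admission", "intubation", "central_venous_catheter"}))
# -> (6, 1.5): 6 points, predicted risk 1.5% (hypothetical numbers)
```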
Kirkpatrick, Sharon I; Gilsing, Anne M; Hobin, Erin; Solbak, Nathan M; Wallace, Angela; Haines, Jess; Mayhew, Alexandra J; Orr, Sarah K; Raina, Parminder; Robson, Paula J; Sacco, Jocelyn E; Whelan, Heather K
2017-01-31
With technological innovation, comprehensive dietary intake data can be collected in a wide range of studies and settings. The Automated Self-Administered 24-hour (ASA24) Dietary Assessment Tool is a web-based system that guides respondents through 24-h recalls. The purpose of this paper is to describe lessons learned from five studies that assessed the feasibility and validity of ASA24 for capturing recall data among several population subgroups in Canada. These studies were conducted within a childcare setting (preschool children with reporting by parents), in public schools (children in grades 5-8; aged 10-13 years), and with community-based samples drawn from existing cohorts of adults and older adults. Themes emerged across studies regarding receptivity to completing ASA24, user experiences with the interface, and practical considerations for different populations. Overall, we found high acceptance of ASA24 among these diverse samples. However, the ASA24 interface was not intuitive for some participants, particularly young children and older adults. As well, technological challenges were encountered. These observations underscore the importance of piloting protocols using online tools, as well as consideration of the potential need for tailored resources to support study participants. Lessons gleaned can inform the effective use of technology-enabled dietary assessment tools in research.
Korjonen, Helena
2011-01-01
Objectives: Develop a website, the OLC, which supports those people who work on promoting a healthy weight and tackling obesity. Research shows that networks where sharing of information and peer interaction take place create solutions to current public health challenges. Methods: Considerations that are relevant when building a new information service, as well as the technical set-up and the information needs of users, were taken into account prior to building the OLC and during continuous development and maintenance. Results: The OLC provides global news, resources and tools and links out to other networks, websites and organisations providing similar useful information. The OLC also uses social networking tools to highlight new and important information. Discussion: Networks contribute to a stronger community that can respond to emerging challenges in public health. The OLC improves connections of people and services from different backgrounds and organisations. Some challenges exist in the technical set-up and in other aspects, e.g. the nature of public health information and differing information needs. Conclusion: Public health work programmes should include networking opportunities where public policy can be disseminated. The provision of necessary tools and resources can lead to better decision-making, save time and money and lead to improved public health outcomes. PMID:23569599
Method Of Wire Insertion For Electric Machine Stators
Brown, David L; Stabel, Gerald R; Lawrence, Robert Anthony
2005-02-08
A method of inserting coils in slots of a stator is provided. The method includes interleaving a first set of first phase windings and a first set of second phase windings on an insertion tool. The method also includes activating the insertion tool to radially insert the first set of first phase windings and the first set of second phase windings in the slots of the stator. In one embodiment, interleaving the first set of first phase windings and the first set of second phase windings on the insertion tool includes forming the first set of first phase windings in first phase openings defined in the insertion tool, and forming the first set of second phase windings in second phase openings defined in the insertion tool.
Learning to recognize rat social behavior: Novel dataset and cross-dataset application.
Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C
2018-04-15
Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings. Copyright © 2017 Elsevier B.V. All rights reserved.
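A minimal sketch of cross-dataset validation with a simple adaptation step is shown below, using synthetic feature vectors and an off-the-shelf classifier as stand-ins; it is not the specific recognition method evaluated on RatSI.

```python
# Sketch of cross-dataset validation for a behavior classifier, plus a simple
# adaptation step. Features, labels and the classifier are placeholders, not
# the method used with RatSI.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical pre-computed interaction features from two experiment settings.
X_a, y_a = rng.normal(size=(500, 10)), rng.integers(0, 4, 500)            # dataset A
X_b, y_b = rng.normal(1.0, 1.5, size=(400, 10)), rng.integers(0, 4, 400)  # dataset B

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_a, y_a)
print("within-dataset:", accuracy_score(y_a, clf.predict(X_a)))
print("cross-dataset: ", accuracy_score(y_b, clf.predict(X_b)))

# Simple adaptation: refit with a small annotated subset of the target setting.
X_adapt, y_adapt = X_b[:50], y_b[:50]
clf_adapted = RandomForestClassifier(n_estimators=100, random_state=0)
clf_adapted.fit(np.vstack([X_a, X_adapt]), np.concatenate([y_a, y_adapt]))
print("after adaptation:", accuracy_score(y_b[50:], clf_adapted.predict(X_b[50:])))
```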
Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma
2014-02-01
Study Design: Preliminary evaluation of new tool. Objective: To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods: Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases of different curve patterns were used as query images. The images were fed into the CBIR database that retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results: Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that were matching with the query. No matching case was left out in the series. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion: The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme.
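The retrieval step of a CBIR system of this kind can be sketched as a nearest-neighbour search over per-case feature vectors with a distance (variability) threshold. The descriptors and threshold below are invented for illustration and are not those of the actual software.

```python
# Minimal sketch of content-based retrieval: each case is represented by a
# numeric feature vector (e.g. curve descriptors extracted from the image),
# and cases within a distance threshold of the query are returned. The
# feature set and threshold are illustrative, not those of the actual tool.
import numpy as np

def retrieve_similar(query_vec, database, max_distance, k=5):
    """Return up to k database cases within max_distance of the query."""
    ids = list(database)
    feats = np.array([database[i] for i in ids], dtype=float)
    dists = np.linalg.norm(feats - np.asarray(query_vec, dtype=float), axis=1)
    order = np.argsort(dists)
    return [(ids[i], float(dists[i])) for i in order[:k] if dists[i] <= max_distance]

# Hypothetical descriptors: [main Cobb angle, apex level, number of curves]
database = {"case01": [52, 8, 2], "case02": [48, 9, 2], "case03": [75, 11, 3]}
print(retrieve_similar([50, 8, 2], database, max_distance=10.0))
```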
Knowledge Representation Standards and Interchange Formats for Causal Graphs
NASA Technical Reports Server (NTRS)
Throop, David R.; Malin, Jane T.; Fleming, Land
2005-01-01
In many domains, automated reasoning tools must represent graphs of causally linked events. These include fault-tree analysis, probabilistic risk assessment (PRA), planning, procedures, medical reasoning about disease progression, and functional architectures. Each of these fields has its own requirements for the representation of causation, events, actors and conditions. The representations include ontologies of function and cause, data dictionaries for causal dependency, failure and hazard, and interchange formats between some existing tools. In none of the domains has a generally accepted interchange format emerged. The paper makes progress towards interoperability across the wide range of causal analysis methodologies. We survey existing practice and emerging interchange formats in each of these fields. Setting forth a set of terms and concepts that are broadly shared across the domains, we examine the several ways in which current practice represents them. Some phenomena are difficult to represent or to analyze in several domains. These include mode transitions, reachability analysis, positive and negative feedback loops, conditions correlated but not causally linked and bimodal probability distributions. We work through examples and contrast the differing methods for addressing them. We detail recent work in knowledge interchange formats for causal trees in aerospace analysis applications in early design, safety and reliability. Several examples are discussed, with a particular focus on reachability analysis and mode transitions. We generalize the aerospace analysis work across the several other domains. We also recommend features and capabilities for the next generation of causal knowledge representation standards.
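As a concrete illustration of what a tool-neutral interchange structure for causal graphs might carry, the sketch below encodes events, conditions, and probabilistic causal links in a plain JSON-serializable form and runs a simple reachability query over them. The field names are illustrative and do not represent a proposed standard.

```python
# Sketch of a minimal, tool-neutral interchange structure for a causal graph:
# nodes are events, conditions or failures, and edges carry the causal relation
# with an optional probability. Field names are illustrative only.
import json

causal_graph = {
    "nodes": [
        {"id": "valve_stuck",   "kind": "failure"},
        {"id": "low_flow",      "kind": "condition"},
        {"id": "pump_overheat", "kind": "event"},
    ],
    "edges": [
        {"cause": "valve_stuck", "effect": "low_flow",      "prob": 0.9},
        {"cause": "low_flow",    "effect": "pump_overheat", "prob": 0.4},
    ],
}

def effects_of(graph, node_id):
    """Simple reachability query: all nodes causally downstream of node_id."""
    reached, frontier = set(), {node_id}
    while frontier:
        nxt = {e["effect"] for e in graph["edges"] if e["cause"] in frontier}
        frontier = nxt - reached
        reached |= nxt
    return reached

print(json.dumps(causal_graph, indent=2))
print(effects_of(causal_graph, "valve_stuck"))
```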
A Voronoi interior adjacency-based approach for generating a contour tree
NASA Astrophysics Data System (ADS)
Chen, Jun; Qiao, Chaofei; Zhao, Renliang
2004-05-01
A contour tree is a good graphical tool for representing the spatial relations of contour lines and has found many applications in map generalization, map annotation, terrain analysis, etc. A new approach for generating contour trees by introducing a Voronoi-based interior adjacency set concept is proposed in this paper. The immediate interior adjacency set is employed to identify all of the children contours of each contour without contour elevations. It has advantages over existing methods such as the point-in-polygon method and the region growing-based method. This new approach can be used for spatial data mining and knowledge discovering, such as the automatic extraction of terrain features and construction of multi-resolution digital elevation model.
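For contrast with the proposed method, the sketch below builds a contour tree by direct polygon-containment tests, i.e. the simpler route (akin to the point-in-polygon method) that the Voronoi interior-adjacency approach is designed to improve upon; the shapely package is assumed to be available.

```python
# Sketch of building a contour tree from closed contour polygons by direct
# containment tests. This illustrates the simpler containment route, not the
# Voronoi interior-adjacency method of the paper.
from shapely.geometry import Polygon

def contour_tree(contours):
    """contours: dict name -> list of (x, y) ring coordinates.
    Returns dict child -> immediate parent (None for outermost contours)."""
    polys = {name: Polygon(ring) for name, ring in contours.items()}
    parent = {}
    for name, poly in polys.items():
        # Immediate parent = smallest contour that contains this one.
        enclosing = [(p.area, n) for n, p in polys.items()
                     if n != name and p.contains(poly)]
        parent[name] = min(enclosing)[1] if enclosing else None
    return parent

rings = {
    "outer":  [(0, 0), (10, 0), (10, 10), (0, 10)],
    "middle": [(2, 2), (8, 2), (8, 8), (2, 8)],
    "inner":  [(4, 4), (6, 4), (6, 6), (4, 6)],
}
print(contour_tree(rings))   # {'outer': None, 'middle': 'outer', 'inner': 'middle'}
```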
Cyber-Attack Methods, Why They Work on Us, and What to Do
NASA Technical Reports Server (NTRS)
Byrne, DJ
2015-01-01
Basic cyber-attack methods are well documented, and even automated with user-friendly GUIs (Graphical User Interfaces). Entire suites of attack tools are legal, conveniently packaged, and freely downloadable to anyone; more polished versions are sold with vendor support. Our team ran some of these against a selected set of projects within our organization to understand what the attacks do so that we can design and validate defenses against them. Some existing defenses were effective against the attacks, some less so. On average, every machine had twelve easily identifiable vulnerabilities, two of them "critical". Roughly 5% of passwords in use were easily crackable. We identified a clear set of recommendations for each project, and some common patterns that emerged among them all.
Airborne Turbulence Detection System Certification Tool Set
NASA Technical Reports Server (NTRS)
Hamilton, David W.; Proctor, Fred H.
2006-01-01
A methodology and a corresponding set of simulation tools for testing and evaluating turbulence detection sensors have been presented. The tool set is available to industry and the FAA for certification of radar-based airborne turbulence detection systems. The tool set consists of simulated data sets representing convectively induced turbulence, an airborne radar simulation system, hazard tables to convert the radar observable to an aircraft load, documentation, a hazard metric "truth" algorithm, and criteria for scoring the predictions. Analysis indicates that flight test data support spatial buffers for scoring detections. Also, flight data and demonstrations with the tool set suggest the need for a magnitude buffer.
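Two of the ingredients described above, converting a radar observable to a load estimate through a hazard table and scoring detections against "truth" with a spatial buffer, can be sketched as follows. The table values, units, and buffer size are placeholders rather than those of the certification tool set.

```python
# Illustrative sketch only: a lookup table from a radar observable to a hazard
# (load) estimate, and buffer-based scoring of a detection against truth.
# All numbers, units and the buffer size are invented placeholders.
import bisect

# Hypothetical table: spectrum-width observable (m/s) -> RMS load factor (g).
OBS_BREAKPOINTS = [0.0, 2.0, 4.0, 6.0, 8.0]
LOAD_VALUES     = [0.05, 0.10, 0.20, 0.30, 0.45]

def observable_to_load(obs):
    i = min(bisect.bisect_right(OBS_BREAKPOINTS, obs) - 1, len(LOAD_VALUES) - 1)
    return LOAD_VALUES[max(i, 0)]

def score_detection(predicted_km, truth_km, buffer_km=5.0):
    """Count a detection as a hit if it falls within a spatial buffer of truth."""
    return "hit" if abs(predicted_km - truth_km) <= buffer_km else "miss"

print(observable_to_load(5.1))        # -> 0.20
print(score_detection(32.0, 35.5))    # -> 'hit'
```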
Rapid SAW Sensor Development Tools
NASA Technical Reports Server (NTRS)
Wilson, William C.; Atkinson, Gary M.
2007-01-01
The lack of integrated design tools for Surface Acoustic Wave (SAW) devices has led us to develop tools for the design, modeling, analysis, and automatic layout generation of SAW devices. These tools enable rapid development of wireless SAW sensors. The tools developed have been designed to integrate into existing Electronic Design Automation (EDA) tools to take advantage of existing 3D modeling, and Finite Element Analysis (FEA). This paper presents the SAW design, modeling, analysis, and automated layout generation tools.
Rubin, Katrine Hass; Friis-Holmberg, Teresa; Hermann, Anne Pernille; Abrahamsen, Bo; Brixen, Kim
2013-08-01
A huge number of risk assessment tools have been developed. Far from all have been validated in external studies, many lack transparent methodological evidence, and few are integrated into national guidelines. Therefore, we performed a systematic review to provide an overview of existing valid and reliable risk assessment tools for the prediction of osteoporotic fractures. Additionally, we aimed to determine whether the performance of each tool was sufficient for practical use and, last, to examine whether the complexity of the tools influenced their discriminative power. We searched the PubMed, Embase, and Cochrane databases for papers and evaluated these with respect to methodological quality using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) checklist. A total of 48 tools were identified; 20 had been externally validated; however, only six tools had been tested more than once in a population-based setting with acceptable methodological quality. None of the tools performed consistently better than the others, and simple tools (i.e., the Osteoporosis Self-assessment Tool [OST], Osteoporosis Risk Assessment Instrument [ORAI], and Garvan Fracture Risk Calculator [Garvan]) often did as well or better than more complex tools (i.e., Simple Calculated Risk Estimation Score [SCORE], WHO Fracture Risk Assessment Tool [FRAX], and Qfracture). No studies determined the effectiveness of tools in selecting patients for therapy and thus improving fracture outcomes. High-quality studies with randomized designs and population-based cohorts with different case mixes are needed. Copyright © 2013 American Society for Bone and Mineral Research.
Open-source platform to benchmark fingerprints for ligand-based virtual screening
2013-01-01
Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
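The core operation such a benchmark evaluates, ranking a library against a query by fingerprint similarity, can be sketched with RDKit (assumed to be available); the molecules are arbitrary examples, and the platform's own data sets and statistics are not reproduced.

```python
# Sketch of the similarity-search step such a benchmark evaluates: rank a small
# library against a query by Tanimoto similarity of Morgan fingerprints. The
# molecules are arbitrary examples; RDKit is assumed to be installed.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query = fingerprint("CC(=O)Oc1ccccc1C(=O)O")          # aspirin
library = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "caffeine":       "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "ibuprofen":      "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}
ranked = sorted(
    ((DataStructs.TanimotoSimilarity(query, fingerprint(s)), name)
     for name, s in library.items()),
    reverse=True,
)
for sim, name in ranked:
    print(f"{name}: {sim:.2f}")
```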
Ghitza, Udi E; Gore-Langton, Robert E; Lindblad, Robert; Shide, David; Subramaniam, Geetha; Tai, Betty
2013-01-01
Electronic health records (EHRs) are essential in improving quality and enhancing efficiency of health-care delivery. By 2015, medical care receiving service reimbursement from US Centers for Medicare and Medicaid Services (CMS) must show 'meaningful use' of EHRs. Substance use disorders (SUD) are grossly under-detected and under-treated in current US medical care settings. Hence, an urgent need exists for improved identification of and clinical intervention for SUD in medical settings. The National Institute on Drug Abuse Clinical Trials Network (NIDA CTN) has leveraged its infrastructure and expertise and brought relevant stakeholders together to develop consensus on brief screening and initial assessment tools for SUD in general medical settings, with the objective of incorporation into US EHRs. Stakeholders were identified and queried for input and consensus on validated screening and assessment for SUD in general medical settings to develop common data elements to serve as shared resources for EHRs on screening, brief intervention and referral to treatment (SBIRT), with the intent of supporting interoperability and data exchange in a developing Nationwide Health Information Network. Through consensus of input from stakeholders, a validated screening and brief assessment instrument, supported by Clinical Decision Support tools, was chosen to be used at out-patient general medical settings. The creation and adoption of a core set of validated common data elements and the inclusion of such consensus-based data elements for general medical settings will enable the integration of SUD treatment within mainstream health care, and support the adoption and 'meaningful use' of the US Office of the National Coordinator for Health Information Technology (ONC)-certified EHRs, as well as CMS reimbursement. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets
NASA Astrophysics Data System (ADS)
Hosseini, Kasra; Sigloch, Karin
2017-10-01
We present obspyDMT, a free, open-source software toolbox for the query, retrieval, processing and management of seismological data sets, including very large, heterogeneous and/or dynamically growing ones. ObspyDMT simplifies and speeds up user interaction with data centers, in more versatile ways than existing tools. The user is shielded from the complexities of interacting with different data centers and data exchange protocols and is provided with powerful diagnostic and plotting tools to check the retrieved data and metadata. While primarily a productivity tool for research seismologists and observatories, easy-to-use syntax and plotting functionality also make obspyDMT an effective teaching aid. Written in the Python programming language, it can be used as a stand-alone command-line tool (requiring no knowledge of Python) or can be integrated as a module with other Python codes. It facilitates data archiving, preprocessing, instrument correction and quality control - routine but nontrivial tasks that can consume much user time. We describe obspyDMT's functionality, design and technical implementation, accompanied by an overview of its use cases. As an example of a typical problem encountered in seismogram preprocessing, we show how to check for inconsistencies in response files of two example stations. We also demonstrate the fully automated request, remote computation and retrieval of synthetic seismograms from the Synthetics Engine (Syngine) web service of the Data Management Center (DMC) at the Incorporated Research Institutions for Seismology (IRIS).
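For orientation, the sketch below shows a single hand-written request through ObsPy's FDSN client, the kind of data-center interaction that obspyDMT automates, parallelizes, and archives at scale; it is not obspyDMT's own command-line syntax.

```python
# One hand-written FDSN request with ObsPy, to illustrate the kind of
# data-center interaction (retrieval, instrument correction, preprocessing)
# that obspyDMT automates for entire data sets. Not obspyDMT's own syntax.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
t0 = UTCDateTime("2011-03-11T05:46:23")   # approximate Tohoku origin time
st = client.get_waveforms(network="IU", station="ANMO", location="00",
                          channel="BHZ", starttime=t0, endtime=t0 + 3600)
inv = client.get_stations(network="IU", station="ANMO", level="response")

st.remove_response(inventory=inv, output="VEL")   # instrument correction
st.detrend("demean").filter("bandpass", freqmin=0.01, freqmax=0.1)
print(st)
```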
SIproc: an open-source biomedical data processing platform for large hyperspectral images.
Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David
2017-04-10
There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
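The out-of-core idea can be sketched without GPUs: stream the hyperspectral cube from disk in chunks and reduce each chunk, so the full image never resides in memory. The file layout and the reduction below are illustrative only (and scaled down); SIproc couples this kind of streaming with GPU kernels.

```python
# Sketch of out-of-core processing: stream the cube from disk in chunks and
# accumulate a reduction (here, a mean spectrum) without holding the full
# image in memory. Sizes are scaled down and the layout is illustrative.
import numpy as np

rows, cols, bands = 512, 512, 256          # real images reach hundreds of GB
path = "hyperspectral_cube.dat"

# Create a placeholder file once (in practice this is the measured data).
np.memmap(path, dtype=np.float32, mode="w+", shape=(rows, cols, bands)).flush()

cube = np.memmap(path, dtype=np.float32, mode="r", shape=(rows, cols, bands))
chunk_rows = 64
spectrum_sum = np.zeros(bands, dtype=np.float64)
for r in range(0, rows, chunk_rows):
    chunk = np.asarray(cube[r:r + chunk_rows])   # only this slice is loaded
    spectrum_sum += chunk.reshape(-1, bands).sum(axis=0)

mean_spectrum = spectrum_sum / (rows * cols)
print(mean_spectrum.shape)                        # (256,)
```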
PPInterFinder--a mining tool for extracting causal relations on human proteins from literature.
Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar
2013-01-01
One of the most common and challenging problems in biomedical text mining is to mine protein-protein interactions (PPIs) from MEDLINE abstracts and full-text research articles, because PPIs play a major role in understanding the various biological processes and the impact of proteins in diseases. We implemented PPInterFinder, a web-based text mining tool to extract human PPIs from biomedical literature. PPInterFinder uses relation keyword co-occurrences with protein names to extract information on PPIs from MEDLINE abstracts and consists of three phases. First, it identifies the relation keyword using a parser with Tregex and a relation keyword dictionary. Next, it automatically identifies the candidate PPI pairs with a set of rules related to PPI recognition. Finally, it extracts the relations by matching the sentence with a set of 11 specific patterns based on the syntactic nature of the PPI pair. We find that PPInterFinder is capable of predicting PPIs with an accuracy of 66.05% on the AIMED corpus and outperforms most of the existing systems. DATABASE URL: http://www.biomining-bu.in/ppinterfinder/
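A toy version of the first phase, flagging sentences in which two protein names co-occur with an interaction keyword, is sketched below. The keyword list and protein dictionary are tiny placeholders; the real tool relies on Tregex-based parsing, PPI-specific rules, and the 11 syntactic patterns described above.

```python
# Toy sketch of keyword/protein co-occurrence detection. The keyword list and
# protein dictionary are placeholders; PPInterFinder itself uses Tregex-based
# parsing, PPI recognition rules and 11 syntactic patterns.
import re
from itertools import combinations

RELATION_KEYWORDS = {"interacts", "binds", "phosphorylates", "activates"}
PROTEINS = {"MDM2", "TP53", "AKT1", "BAD"}

def candidate_ppis(abstract):
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", abstract):
        tokens = set(re.findall(r"[A-Za-z0-9]+", sentence))
        proteins = PROTEINS & tokens
        keywords = RELATION_KEYWORDS & {t.lower() for t in tokens}
        if keywords and len(proteins) >= 2:
            pairs += [(a, b, sorted(keywords))
                      for a, b in combinations(sorted(proteins), 2)]
    return pairs

text = "MDM2 binds TP53 and targets it for degradation. AKT1 phosphorylates BAD."
print(candidate_ppis(text))
```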
Dubrowski, Adam; Alani, Sabrina; Bankovic, Tina; Crowe, Andrea; Pollard, Megan
2015-11-02
Simulation is an important training tool used in a variety of influential fields. However, development of simulation scenarios - the key component of simulation - occurs in isolation; sharing of scenarios is almost non-existent. This can make simulation use a costly task in terms of the resources and time and the possible redundancy of efforts. To alleviate these issues, the goal is to strive for an open community of practice (CoP) surrounding simulation. To facilitate this goal, this report describes a set of guidelines for writing technical reports about simulation use for educating health professionals. Using an accepted set of guidelines will allow for homogeneity when building simulation scenarios and facilitate open sharing among simulation users. In addition to optimizing simulation efforts in institutions that are currently using simulation as an educational tool, the development of such a repository may have direct implications for developing countries, where simulation is only starting to be used systematically. Our project facilitates equivalent and global access to information, knowledge, and highest-caliber education - in this context, simulation - collectively, the building blocks of optimal healthcare.
Writing Technical Reports for Simulation in Education for Health Professionals: Suggested Guidelines
Alani, Sabrina; Bankovic, Tina; Crowe, Andrea; Pollard, Megan
2015-01-01
Simulation is an important training tool used in a variety of influential fields. However, development of simulation scenarios - the key component of simulation - occurs in isolation; sharing of scenarios is almost non-existent. This can make simulation use a costly task in terms of the resources and time and the possible redundancy of efforts. To alleviate these issues, the goal is to strive for an open community of practice (CoP) surrounding simulation. To facilitate this goal, this report describes a set of guidelines for writing technical reports about simulation use for educating health professionals. Using an accepted set of guidelines will allow for homogeneity when building simulation scenarios and facilitate open sharing among simulation users. In addition to optimizing simulation efforts in institutions that are currently using simulation as an educational tool, the development of such a repository may have direct implications for developing countries, where simulation is only starting to be used systematically. Our project facilitates equivalent and global access to information, knowledge, and highest-caliber education - in this context, simulation - collectively, the building blocks of optimal healthcare. PMID:26677421
PPInterFinder—a mining tool for extracting causal relations on human proteins from literature
Raja, Kalpana; Subramani, Suresh; Natarajan, Jeyakumar
2013-01-01
One of the most common and challenging problems in biomedical text mining is to mine protein-protein interactions (PPIs) from MEDLINE abstracts and full-text research articles, because PPIs play a major role in understanding the various biological processes and the impact of proteins in diseases. We implemented PPInterFinder, a web-based text mining tool to extract human PPIs from biomedical literature. PPInterFinder uses relation keyword co-occurrences with protein names to extract information on PPIs from MEDLINE abstracts and consists of three phases. First, it identifies the relation keyword using a parser with Tregex and a relation keyword dictionary. Next, it automatically identifies the candidate PPI pairs with a set of rules related to PPI recognition. Finally, it extracts the relations by matching the sentence with a set of 11 specific patterns based on the syntactic nature of the PPI pair. We find that PPInterFinder is capable of predicting PPIs with an accuracy of 66.05% on the AIMED corpus and outperforms most of the existing systems. Database URL: http://www.biomining-bu.in/ppinterfinder/ PMID:23325628
What the Logs Can Tell You: Mediation to Implement Feedback in Training
NASA Technical Reports Server (NTRS)
Maluf, David; Wiederhold, Gio; Abou-Khalil, Ali; Norvig, Peter (Technical Monitor)
2000-01-01
The problem addressed by Mediation to Implement Feedback in Training (MIFT) is to customize the feedback from training exercises by exploiting knowledge about the training scenario, training objectives, and specific student/teacher needs. We achieve this by inserting an intelligent mediation layer into the information flow from observations collected during training exercises to the display and user interface. Knowledge about training objectives, scenarios, and tasks is maintained in the mediating layer. A design constraint is that domain experts must be able to extend mediators by adding domain-specific knowledge that supports additional aggregations, abstractions, and views of the results of training exercises. The MIFT mediation concept is intended to be integrated with existing military training exercise management tools and to reduce the cost of developing and maintaining separate feedback and evaluation tools for every training simulator and every set of customer needs. The MIFT architecture is designed as a set of independently reusable components which interact with each other through standardized formalisms such as the Knowledge Interchange Format (KIF) and the Knowledge Query and Manipulation Language (KQML).
Koch, Ina; Schueler, Markus; Heiner, Monika
2005-01-01
To understand biochemical processes caused by, e.g., mutations or deletions in the genome, the knowledge of possible alternative paths between two arbitrary chemical compounds is of increasing interest for biotechnology, pharmacology, medicine, and drug design. With the steadily increasing amount of data from high-throughput experiments, new biochemical networks can be constructed and existing ones can be extended, which results in many large metabolic, signal transduction, and gene regulatory networks. The search for alternative paths within these complex and large networks can yield a huge number of solutions, which cannot be handled manually. Moreover, not all of the alternative paths are generally of interest. Therefore, we have developed and implemented a method which allows us to define constraints to reduce the set of all structurally possible paths to the truly interesting path set. The paper describes the search algorithm and the constraint definition language. We give examples for path searches using this dedicated special language for a Petri net model of the sucrose-to-starch breakdown in the potato tuber.
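The underlying idea, enumerating paths between two compounds and keeping only those that satisfy user-defined constraints, can be sketched on a toy network. The network below is a simplified stand-in for the Petri net model of the sucrose-to-starch pathway, and the constraint (an excluded compound) is a much-reduced version of the constraint definition language.

```python
# Sketch only: enumerate paths between two compounds and filter them by a
# constraint (here, an excluded compound). The network is a simplified
# stand-in for the Petri net model and its constraint language.
NETWORK = {                       # compound -> directly reachable compounds
    "sucrose":     ["glucose", "fructose"],
    "glucose":     ["G6P"],
    "fructose":    ["F6P"],
    "G6P":         ["G1P", "F6P"],
    "F6P":         ["G6P"],
    "G1P":         ["ADP-glucose"],
    "ADP-glucose": ["starch"],
}

def paths(src, dst, forbidden=frozenset(), path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in NETWORK.get(src, []):
        if nxt not in path and nxt not in forbidden:
            yield from paths(nxt, dst, forbidden, path)

print(list(paths("sucrose", "starch")))
print(list(paths("sucrose", "starch", forbidden={"F6P"})))
```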
Koch, Ina; Schüler, Markus; Heiner, Monika
2011-01-01
To understand biochemical processes caused by, e.g., mutations or deletions in the genome, the knowledge of possible alternative paths between two arbitrary chemical compounds is of increasing interest for biotechnology, pharmacology, medicine, and drug design. With the steadily increasing amount of data from high-throughput experiments, new biochemical networks can be constructed and existing ones can be extended, which results in many large metabolic, signal transduction, and gene regulatory networks. The search for alternative paths within these complex and large networks can yield a huge number of solutions, which cannot be handled manually. Moreover, not all of the alternative paths are generally of interest. Therefore, we have developed and implemented a method which allows us to define constraints to reduce the set of all structurally possible paths to the truly interesting path set. The paper describes the search algorithm and the constraint definition language. We give examples for path searches using this dedicated special language for a Petri net model of the sucrose-to-starch breakdown in the potato tuber. http://sanaga.tfh-berlin.de/~stepp/
Visualization and interaction tools for aerial photograph mosaics
NASA Astrophysics Data System (ADS)
Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António
1997-05-01
This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophotos mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.
DOCU-TEXT: A tool before the data dictionary
NASA Technical Reports Server (NTRS)
Carter, B.
1983-01-01
DOCU-TEXT, a proprietary software package that aids in the production of documentation for a data processing organization and can be installed and operated only on IBM computers is discussed. In organizing information that ultimately will reside in a data dictionary, DOCU-TEXT proved to be a useful documentation tool in extracting information from existing production jobs, procedure libraries, system catalogs, control data sets and related files. DOCU-TEXT reads these files to derive data that is useful at the system level. The output of DOCU-TEXT is a series of user selectable reports. These reports can reflect the interactions within a single job stream, a complete system, or all the systems in an installation. Any single report, or group of reports, can be generated in an independent documentation pass.
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on a minimum broken nodes cost vector (MBNCV), is proposed to address the shortcomings of current methods. Existing methods based on the minimum breakpoint set (MBPS) tend to break more edges when untying loops in the dependency relationships among relays, which can increase the iterative workload of the setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential process (CSP) models, and an optimized setting calculation sequence for the multi-loop network is then computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples indicate that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.
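The ordering problem underneath such methods can be illustrated generically: relays whose settings depend on one another form a directed graph with loops, and removing a small set of feedback dependencies yields an order in which the settings can be calculated, with the removed dependencies handled iteratively. The greedy sketch below only illustrates that idea and is not the MBNCV/behavior-tree method of the paper.

```python
# Generic illustration only: break enough feedback dependencies in the relay
# dependency graph to obtain a setting-calculation order; the broken
# dependencies would be handled iteratively afterwards.
from graphlib import CycleError, TopologicalSorter

def calculation_order(deps):
    """deps: relay -> set of relays whose settings it depends on."""
    deps = {relay: set(d) for relay, d in deps.items()}
    broken = []
    while True:
        try:
            return list(TopologicalSorter(deps).static_order()), broken
        except CycleError as err:
            cycle = err.args[1]            # nodes on the detected loop
            # Drop one dependency edge that lies on the loop and retry.
            for a, b in zip(cycle, cycle[1:]):
                if a in deps.get(b, ()):
                    deps[b].discard(a)
                    broken.append((b, a))  # b no longer waits for a
                    break
                if b in deps.get(a, ()):
                    deps[a].discard(b)
                    broken.append((a, b))
                    break

relays = {"R1": {"R2"}, "R2": {"R3"}, "R3": {"R1"}, "R4": {"R1"}}
order, broken_edges = calculation_order(relays)
print(order, broken_edges)
```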
Tools for visually exploring biological networks.
Suderman, Matthew; Hallett, Michael
2007-10-15
Many tools exist for visually exploring biological networks including well-known examples such as Cytoscape, VisANT, Pathway Studio and Patika. These systems play a key role in the development of integrative biology, systems biology and integrative bioinformatics. The trend in the development of these tools is to go beyond 'static' representations of cellular state, towards a more dynamic model of cellular processes through the incorporation of gene expression data, subcellular localization information and time-dependent behavior. We provide a comprehensive review of the relative advantages and disadvantages of existing systems with two goals in mind: to aid researchers in efficiently identifying the appropriate existing tools for data visualization; to describe the necessary and realistic goals for the next generation of visualization tools. In view of the first goal, we provide in the Supplementary Material a systematic comparison of more than 35 existing tools in terms of over 25 different features. Supplementary data are available at Bioinformatics online.
The center for causal discovery of biomedical knowledge from big data.
Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard
2015-11-01
The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Williams, Bradley S; D'Amico, Ellen; Kastens, Jude H; Thorp, James H; Flotemersch, Joseph E; Thoms, Martin C
2013-09-01
River systems consist of hydrogeomorphic patches (HPs) that emerge at multiple spatiotemporal scales. Functional process zones (FPZs) are HPs that exist at the river valley scale and are important strata for framing whole-watershed research questions and management plans. Hierarchical classification procedures aid in HP identification by grouping sections of river based on their hydrogeomorphic character; however, collecting data required for such procedures with field-based methods is often impractical. We developed a set of GIS-based tools that facilitate rapid, low cost riverine landscape characterization and FPZ classification. Our tools, termed RESonate, consist of a custom toolbox designed for ESRI ArcGIS®. RESonate automatically extracts 13 hydrogeomorphic variables from readily available geospatial datasets and datasets derived from modeling procedures. An advanced 2D flood model, FLDPLN, designed for MATLAB® is used to determine valley morphology by systematically flooding river networks. When used in conjunction with other modeling procedures, RESonate and FLDPLN can assess the character of large river networks quickly and at very low costs. Here we describe tool and model functions in addition to their benefits, limitations, and applications.
Gene Drive for Mosquito Control: Where Did It Come from and Where Are We Headed?
Macias, Vanessa M.; Ohm, Johanna R.; Rasgon, Jason L.
2017-01-01
Mosquito-borne pathogens place an enormous burden on human health. The existing toolkit is insufficient to support ongoing vector-control efforts towards meeting disease elimination and eradication goals. The perspective that genetic approaches can potentially add a significant set of tools toward mosquito control is not new, but the recent improvements in site-specific gene editing with CRISPR/Cas9 systems have enhanced our ability to both study mosquito biology using reverse genetics and produce genetics-based tools. Cas9-mediated gene editing is an efficient and adaptable platform for gene drive strategies, which have advantages over inundative release strategies for introgressing desirable suppression and pathogen-blocking genotypes into wild mosquito populations; until recently, an effective gene drive has been largely out of reach. Many considerations will inform the effective use of new genetic tools, including gene drives. Here we review the lengthy history of genetic advances in mosquito biology and discuss both the impact of efficient site-specific gene editing on vector biology and the resulting potential to deploy new genetic tools for the abatement of mosquito-borne disease. PMID:28869513
Toward mapping the biology of the genome.
Chanock, Stephen
2012-09-01
This issue of Genome Research presents new results, methods, and tools from The ENCODE Project (ENCyclopedia of DNA Elements), which collectively represents an important step in moving beyond a parts list of the genome and promises to shape the future of genomic research. This collection sheds light on basic biological questions and frames the current debate over the optimization of tools and methodological challenges necessary to compare and interpret large complex data sets focused on how the genome is organized and regulated. In a number of instances, the authors have highlighted the strengths and limitations of current computational and technical approaches, providing the community with useful standards, which should stimulate development of new tools. In many ways, these papers will ripple through the scientific community, as those in pursuit of understanding the "regulatory genome" will heavily traverse the maps and tools. Similarly, the work should have a substantive impact on how genetic variation contributes to specific diseases and traits by providing a compendium of functional elements for follow-up study. The success of these papers should not only be measured by the scope of the scientific insights and tools but also by their ability to attract new talent to mine existing and future data.
Zartarian, Valerie G; Schultz, Bradley D; Barzyk, Timothy M; Smuts, Marybeth; Hammond, Davyda M; Medina-Vera, Myriam; Geller, Andrew M
2011-12-01
Our primary objective was to provide higher quality, more accessible science to address challenges of characterizing local-scale exposures and risks for enhanced community-based assessments and environmental decision-making. After identifying community needs, priority environmental issues, and current tools, we designed and populated the Community-Focused Exposure and Risk Screening Tool (C-FERST) in collaboration with stakeholders, following a set of defined principles, and considered it in the context of environmental justice. C-FERST is a geographic information system and resource access Web tool under development for supporting multimedia community assessments. Community-level exposure and risk research is being conducted to address specific local issues through case studies. C-FERST can be applied to support environmental justice efforts. It incorporates research to develop community-level data and modeled estimates for priority environmental issues, and other relevant information identified by communities. Initial case studies are under way to refine and test the tool to expand its applicability and transferability. Opportunities exist for scientists to address the many research needs in characterizing local cumulative exposures and risks and for community partners to apply and refine C-FERST.
Screening and Evaluation Tool (SET) Users Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pincock, Layne
This document is the user's guide for the Screening and Evaluation Tool (SET). SET is a tool for comparing multiple fuel cycle options against a common set of criteria and metrics. It does this using standard multi-attribute utility decision analysis methods.
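A minimal sketch of the multi-attribute utility idea behind such a tool is given below: each option is scored on a common set of metrics, the scores are normalized to utilities, and a weighted sum ranks the options. The metrics, weights, and values are invented for illustration and are unrelated to SET's actual criteria.

```python
# Minimal multi-attribute utility sketch. Metrics, weights and values are
# invented placeholders, not SET's actual criteria or data.
METRICS = {"waste_mass": 0.4, "cost": 0.35, "resource_use": 0.25}   # weights
HIGHER_IS_BETTER = {"waste_mass": False, "cost": False, "resource_use": False}

options = {
    "fuel_cycle_A": {"waste_mass": 12.0, "cost": 8.0,  "resource_use": 5.0},
    "fuel_cycle_B": {"waste_mass": 7.0,  "cost": 11.0, "resource_use": 6.5},
}

def utilities(options, metric):
    """Normalize one metric across options to a 0-1 utility."""
    vals = [o[metric] for o in options.values()]
    lo, hi = min(vals), max(vals)
    def u(v):
        x = 0.5 if hi == lo else (v - lo) / (hi - lo)
        return x if HIGHER_IS_BETTER[metric] else 1.0 - x
    return {name: u(o[metric]) for name, o in options.items()}

scores = {name: 0.0 for name in options}
for metric, weight in METRICS.items():
    for name, u in utilities(options, metric).items():
        scores[name] += weight * u

print(sorted(scores.items(), key=lambda kv: -kv[1]))   # ranked options
```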
On pseudo-hyperkähler prepotentials
NASA Astrophysics Data System (ADS)
Devchand, Chandrashekar; Spiro, Andrea
2016-10-01
An explicit surjection from a set of (locally defined) unconstrained holomorphic functions on a certain submanifold of Sp_1(ℂ) × ℂ^{4n} onto the set HK_{p,q} of local isometry classes of real analytic pseudo-hyperkähler metrics of signature (4p, 4q) in dimension 4n is constructed. The holomorphic functions, called prepotentials, are analogues of Kähler potentials for Kähler metrics and provide a complete parameterisation of HK_{p,q}. In particular, there exists a bijection between HK_{p,q} and the set of equivalence classes of prepotentials. This affords the explicit construction of pseudo-hyperkähler metrics from specified prepotentials. The construction generalises one due to Galperin, Ivanov, Ogievetsky, and Sokatchev. Their work is given a coordinate-free formulation and complete, self-contained proofs are provided. The Appendix provides a vital tool for this construction: a reformulation of real analytic G-structures in terms of holomorphic frame fields on complex manifolds.
NASA Technical Reports Server (NTRS)
Roads, John; Voeroesmarty, Charles
2005-01-01
The main focus of our work was to solidify the underlying data sets, the data processing tools, and the modeling environment needed to perform a series of long-term global and regional hydrological simulations, leading eventually to routine hydrometeorological predictions. A water and energy budget synthesis was developed for the Mississippi River Basin (Roads et al. 2003) in order to understand better what kinds of errors exist in current hydrometeorological data sets. This study is now being extended globally with a larger number of observations and model-based data sets under the new NASA NEWS program. A global comparison of a number of precipitation data sets was subsequently carried out (Fekete et al. 2004), in which it was further shown that reanalysis precipitation has substantial problems, which led us to develop a precipitation assimilation effort (Nunes and Roads 2005). We believe that, given current levels of model skill in predicting precipitation, precipitation assimilation is necessary to obtain the appropriate land surface forcing.
Measuring Iranian women's sexual behaviors: Expert opinion
Ghorashi, Zohreh; Merghati-Khoei, Effat; Yousefy, Alireza
2014-01-01
The cultural compatibility of sexually related instruments is problematic because the contexts from which the concepts and meanings were extracted may be significantly different from related contexts in a different society. This paper describes the instruments that have been used to assess sexual behaviors, primarily in Western contexts. Then, based on the instruments’ working definition of ‘sexual behavior’ and their theoretical frameworks, we will (1) discuss the applicability or cultural compatibility of existing instruments targeting women's sexual behaviors within an Iranian context, and (2) suggest criteria for sexually related tools applicable in Iranian settings. Iranian women's sexual scripts may compromise the existing instruments’ compatibility. Suggested criteria are as follows: understanding, language of sexuality, ethics and morality. Therefore, developing a culturally comprehensive measure that can adequately examine Iranian women's sexual behaviors is needed. PMID:25250346
Search for sterile neutrinos with the SOX experiment
NASA Astrophysics Data System (ADS)
Caminata, A.; Agostini, M.; Altenmüller, K.; Appel, S.; Bellini, G.; Benziger, J.; Berton, N.; Bick, D.; Bonfini, G.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Cavalcante, P.; Chepurnov, A.; Choi, K.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; Di Noto, L.; Drachnev, I.; Durero, M.; Empl, A.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, Th.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Jedrzejczak, K.; Kaiser, M.; Kobychev, V.; Korablev, D.; Korga, G.; Kornoukhov, V.; Kryn, D.; Lachenmaier, T.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Marcocci, S.; Maricic, J.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Neumair, B.; Oberauer, L.; Obolensky, M.; Ortica, F.; Otis, K.; Pagani, L.; Pallavicini, M.; Papp, L.; Perasso, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Scola, L.; Semenov, D.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Veyssière, C.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Winter, J.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2016-01-01
In recent years, the Borexino detector has proven its outstanding performance in detecting neutrinos and antineutrinos in the low energy regime. Consequently, it is an ideal tool to investigate the existence of sterile neutrinos, whose presence has been suggested by several anomalies over the past two decades. The SOX (Short distance neutrino Oscillations with boreXino) project will investigate the presence of sterile neutrinos by placing a neutrino source and an antineutrino source in a location under the detector foreseen for this purpose since the construction of Borexino. Interacting in the detector's active volume, each beam would create a well detectable spatial wave pattern in the case of oscillation of the neutrino or antineutrino into a sterile state. Otherwise, the experiment will set a very stringent limit on the existence of a sterile state.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
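The conversion idea can be sketched in miniature: each calculation becomes a rule with named inputs and one output, the rules form a knowledge network, and changing an input re-fires only the dependent rules, which is the reactive behavior a purely procedural model lacks. The rules and values below are invented for illustration.

```python
# Toy sketch of a knowledge network of calculations: each rule names its
# inputs and one output, and changing an input forward-chains through only
# the dependent rules. Rules and values are invented for illustration.
RULES = {                      # output -> (inputs, function)
    "area":   (("radius",),        lambda r: 3.14159 * r * r),
    "volume": (("area", "height"), lambda a, h: a * h),
    "mass":   (("volume", "rho"),  lambda v, rho: v * rho),
}

def propagate(values, changed):
    """Forward-chain from a changed input through the knowledge network."""
    dirty = {changed}
    while dirty:
        name = dirty.pop()
        for out, (inputs, fn) in RULES.items():
            if name in inputs and all(i in values for i in inputs):
                new = fn(*(values[i] for i in inputs))
                if values.get(out) != new:
                    values[out] = new
                    dirty.add(out)
    return values

state = {"radius": 1.0, "height": 2.0, "rho": 1000.0}
for inp in list(state):
    propagate(state, inp)
print(state["mass"])                       # initial mass
state["radius"] = 2.0
print(propagate(state, "radius")["mass"])  # only dependent rules re-fire
```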
ECR plasma thruster research - Preliminary theory and experiments
NASA Technical Reports Server (NTRS)
Sercel, Joel C.; Fitzgerald, Dennis J.
1989-01-01
A preliminary theory of the operation of the electron-cyclotron-resonance (ECR) plasma thruster is described along with an outline of recent experiments. This work is presented to communicate the status of an ongoing research effort directed at developing a unified theory to quantitatively describe the operation of the ECR plasma thruster. The theory is presented as a set of nonlinear ordinary differential equations and boundary conditions which describe the plasma density, velocity, and electron temperature. Diagnostic tools developed to measure plasma conditions in the existing research device are described.
Botti, Mari; Redley, Bernice; Nguyen, Lemai; Coleman, Kimberley; Wickramasinghe, Nilmini
2015-01-01
This research focuses on a major health priority for Australia by addressing existing gaps in the implementation of nursing informatics solutions in healthcare. It serves to inform the successful deployment of IT solutions designed to support patient-centered, frontline acute healthcare delivery by multidisciplinary care teams. The outcomes can guide future evaluations of the contribution of IT solutions to the efficiency, safety and quality of care delivery in acute hospital settings.
Using artificial intelligence to control fluid flow computations
NASA Technical Reports Server (NTRS)
Gelsey, Andrew
1992-01-01
Computational simulation is an essential tool for the prediction of fluid flow. Many powerful simulation programs exist today. However, using these programs to reliably analyze fluid flow and other physical situations requires considerable human effort and expertise to set up a simulation, determine whether the output makes sense, and repeatedly run the simulation with different inputs until a satisfactory result is achieved. Automating this process is not only of considerable practical importance but will also significantly advance basic artificial intelligence (AI) research in reasoning about the physical world.
NASA Technical Reports Server (NTRS)
Simoneau, Robert J.; Strazisar, Anthony J.; Sockol, Peter M.; Reid, Lonnie; Adamczyk, John J.
1987-01-01
The discipline research in turbomachinery, which is directed toward building the tools needed to understand such a complex flow phenomenon, is based on the fact that flow in turbomachinery is fundamentally unsteady or time dependent. Success in building a reliable inventory of analytic and experimental tools will depend on how the time and time-averages are treated, as well as on how the space and space-averages are treated. The raw tools at our disposal (both experimental and computational) are truly powerful and their numbers are growing at a staggering pace. As a result of this power, a case can be made that a situation exists where information is outstripping understanding. The challenge is to develop a set of computational and experimental tools which genuinely increase understanding of the fluid flow and heat transfer in a turbomachine. Viewgraphs outline a philosophy based on working on a stairstep hierarchy of mathematical and experimental complexity to build a system of tools which enables one to aggressively design the turbomachinery of the next century. Examples of the types of computational and experimental tools under current development at Lewis, with progress to date, are examined. The examples include work in both the time-resolved and time-averaged domains. Finally, an attempt is made to identify the proper place for Lewis in this continuum of research.
Pyviko: an automated Python tool to design gene knockouts in complex viruses with overlapping genes.
Taylor, Louis J; Strebel, Klaus
2017-01-07
Gene knockouts are a common tool used to study gene function in various organisms. However, designing gene knockouts is complicated in viruses, which frequently contain sequences that code for multiple overlapping genes. Designing mutants that can be traced by the creation of new or elimination of existing restriction sites further compounds the difficulty in experimental design of knockouts of overlapping genes. While software is available to rapidly identify restriction sites in a given nucleotide sequence, no existing software addresses experimental design of mutations involving multiple overlapping amino acid sequences in generating gene knockouts. Pyviko performed well on a test set of over 240,000 gene pairs collected from viral genomes deposited in the National Center for Biotechnology Information Nucleotide database, identifying a point mutation which added a premature stop codon within the first 20 codons of the target gene in 93.2% of all tested gene-overprinted gene pairs. This shows that Pyviko can be used successfully in a wide variety of contexts to facilitate the molecular cloning and study of viral overprinted genes. Pyviko is an extensible and intuitive Python tool for designing knockouts of overlapping genes. Freely available as both a Python package and a web-based interface ( http://louiejtaylor.github.io/pyViKO/ ), Pyviko simplifies the experimental design of gene knockouts in complex viruses with overlapping genes.
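The core design step that Pyviko automates can be illustrated with a short, hypothetical sketch (this is not Pyviko's API): given a sequence containing two overlapping reading frames, search for a single-nucleotide substitution that introduces a premature stop codon near the start of the target frame while leaving the overlapping gene's protein sequence unchanged. The start offsets and the Biopython-based translation helper are assumptions made for the example, and the restriction-site bookkeeping mentioned above is omitted.

    # Minimal sketch (not Pyviko's API): find single-nucleotide substitutions that
    # knock out a target gene without altering an overlapping gene's protein.
    from Bio.Seq import Seq  # Biopython

    def translate_from(seq, start):
        frame = seq[start:]
        frame = frame[:len(frame) - len(frame) % 3]   # trim trailing partial codon
        return str(Seq(frame).translate())

    def knockout_candidates(seq, target_start, overlap_start, max_codons=20):
        """Return (position, old_base, new_base) mutations that add a premature stop
        within the first max_codons codons of the target gene while keeping the
        overlapping gene's translation identical."""
        wild_target = translate_from(seq, target_start)[:max_codons]
        wild_overlap = translate_from(seq, overlap_start)
        assert "*" not in wild_target, "target gene is already truncated"
        hits = []
        for pos in range(target_start, min(target_start + 3 * max_codons, len(seq))):
            for base in "ACGT":
                if base == seq[pos]:
                    continue
                mutant = seq[:pos] + base + seq[pos + 1:]
                if ("*" in translate_from(mutant, target_start)[:max_codons]
                        and translate_from(mutant, overlap_start) == wild_overlap):
                    hits.append((pos, seq[pos], base))
        return hits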
Developing a Science Commons for Geosciences
NASA Astrophysics Data System (ADS)
Lenhardt, W. C.; Lander, H.
2016-12-01
Many scientific communities, recognizing the research possibilities inherent in data sets, have created domain specific archives such as the Incorporated Research Institutions for Seismology (iris.edu) and ClinicalTrials.gov. Though this is an important step forward, most scientists, including geoscientists, also use a variety of software tools and at least some amount of computation to conduct their research. While the archives make it simpler for scientists to locate the required data, provisioning disk space, compute resources, and network bandwidth can still require significant efforts. This challenge exists despite the wealth of resources available to researchers, namely lab IT resources, institutional IT resources, national compute resources (XSEDE, OSG), private clouds, public clouds, and the development of cyberinfrastructure technologies meant to facilitate use of those resources. Further tasks include obtaining and installing required tools for analysis and visualization. If the research effort is a collaboration or involves certain types of data, then the partners may well have additional non-scientific tasks such as securing the data and developing secure sharing methods for the data. These requirements motivate our investigations into the "Science Commons". This paper will present a working definition of a science commons, compare and contrast examples of existing science commons, and describe a project based at RENCI to implement a science commons for risk analytics. We will then explore what a similar tool might look like for the geosciences.
MACHETE: Environment for Space Networking Evaluation
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John S.; Woo, Simon
2010-01-01
Space exploration missions require the design and implementation of space networking that differs from terrestrial networks. In a space networking architecture, interplanetary communication protocols need to be designed, validated and evaluated carefully to support different mission requirements. As actual systems are expensive to build, it is essential to have a low-cost method to validate and verify mission/system designs and operations. This can be accomplished through simulation. Simulation can aid design decisions where alternative solutions are being considered, support trade studies and enable fast study of what-if scenarios. It can be used to identify risks, verify system performance against requirements, and serve as an initial test environment as one moves towards emulation and actual hardware implementation of the systems. We describe the development of the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), its use cases in supporting architecture trade studies and protocol performance evaluation, and its role in hybrid simulation/emulation. The MACHETE environment contains various tools and interfaces such that users may select the set of tools tailored for the specific simulation end goal. The use cases illustrate tool combinations for simulating space networking in different mission scenarios. This simulation environment is useful in supporting space networking design for planned and future missions as well as evaluating performance of existing networks where non-determinism exists in data traffic and/or link conditions.
Coupling of snow and permafrost processes using the Basic Modeling Interface (BMI)
NASA Astrophysics Data System (ADS)
Wang, K.; Overeem, I.; Jafarov, E. E.; Piper, M.; Stewart, S.; Clow, G. D.; Schaefer, K. M.
2017-12-01
We developed a permafrost modeling tool by implementing the Kudryavtsev empirical permafrost active-layer depth model (the so-called "Ku" component). The model is specifically set up to have a Basic Model Interface (BMI), which enhances the potential for coupling to other earth surface process model components. This model is accessible through the Web Modeling Tool in the Community Surface Dynamics Modeling System (CSDMS). The Kudryavtsev model has been applied across all of Alaska to model permafrost distribution at high spatial resolution, and model predictions have been verified against Circumpolar Active Layer Monitoring (CALM) in-situ observations. The Ku component uses monthly meteorological forcing, including air temperature, snow depth, and snow density, and predicts active layer thickness (ALT) and temperature at the top of permafrost (TTOP), which are important factors in snow-hydrological processes. BMI provides an easy approach to couple the models with each other. Here, we provide a case of coupling the Ku component to snow process components, including the Snow-Degree-Day (SDD) method and Snow-Energy-Balance (SEB) method, which are existing components in the hydrological model TOPOFLOW. The workflow is: (1) get variables from the meteorology component, set the values in the snow process component, and advance the snow process component; (2) get variables from the meteorology and snow components, provide these to the Ku component and advance it; (3) get variables from the snow process component, set the values in the meteorology component, and advance the meteorology component. The next phase is to couple the permafrost component with the fully BMI-compliant TOPOFLOW hydrological model, which could provide a useful tool to investigate the hydrological effects of permafrost.
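The three-step workflow above maps naturally onto BMI-style initialize/get_value/set_value/update calls. The sketch below is illustrative only: StubComponent stands in for the actual CSDMS meteorology, snow, and Ku components, and the variable names are assumptions rather than the standard names used by the real components.

    # Illustrative BMI-style coupling loop with stand-in components.
    class StubComponent:
        def __init__(self):
            self._vars = {}
        def initialize(self, config_file):
            pass
        def update(self):
            pass
        def finalize(self):
            pass
        def get_value(self, name):
            return self._vars.get(name, 0.0)
        def set_value(self, name, value):
            self._vars[name] = value

    met, snow, ku = StubComponent(), StubComponent(), StubComponent()
    for comp, cfg in ((met, "met.cfg"), (snow, "snow.cfg"), (ku, "ku.cfg")):
        comp.initialize(cfg)

    for month in range(12):                     # e.g. one year of monthly forcing
        # (1) meteorology -> snow component, then advance the snow component
        snow.set_value("air_temperature", met.get_value("air_temperature"))
        snow.set_value("precipitation", met.get_value("precipitation"))
        snow.update()
        # (2) meteorology + snow -> Ku component, then advance (yields ALT and TTOP)
        ku.set_value("air_temperature", met.get_value("air_temperature"))
        ku.set_value("snow_depth", snow.get_value("snow_depth"))
        ku.set_value("snow_density", snow.get_value("snow_density"))
        ku.update()
        # (3) snow -> meteorology feedback, then advance the meteorology component
        met.set_value("snow_depth", snow.get_value("snow_depth"))
        met.update()

    for comp in (met, snow, ku):
        comp.finalize()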
CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.
White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B
2017-12-28
The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes is increasing daily, it is imperative to develop a computational tool to classify the newly identified BL enzymes into one of these classes. There are two types of classification of BL enzymes: Molecular Classification and Functional Classification. Existing computational methods only address Molecular Classification and the performance of these existing methods is unsatisfactory. We addressed the unsatisfactory performance of the existing methods by implementing a Deep Learning approach called Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on the rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other architectures of CNN, Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs the best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set and using balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
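As an illustration of the kind of single-convolutional-layer network reported above to perform best, the sketch below builds a small Keras model; the feature-vector length, number of classes, and layer sizes are illustrative assumptions, not the authors' published architecture.

    # Hedged sketch of a one-convolutional-layer classifier (illustrative sizes only).
    import tensorflow as tf

    NUM_FEATURES = 500   # assumed length of the GBFS-selected feature vector
    NUM_CLASSES = 5      # e.g. non-BL plus molecular classes A-D (assumption)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Reshape((NUM_FEATURES, 1)),
        tf.keras.layers.Conv1D(64, kernel_size=7, activation="relu"),  # the single conv layer
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()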
Rabbani, Fauziah; Jafri, Syed M Wasim; Abbas, Farhat; Shah, Mairaj; Azam, Syed Iqbal; Shaikh, Babar Tasneem; Brommels, Mats; Tomson, Goran
2010-01-01
Balanced Scorecards (BSC) are being implemented in high income health settings linking organizational strategies with performance data. At this private university hospital in Pakistan an elaborate information system exists. This study aimed to make best use of available data for better performance management. Applying the modified Delphi technique an expert panel of clinicians and hospital managers reduced a long list of indicators to a manageable size. Indicators from existing documents were evaluated for their importance, scientific soundness, appropriateness to hospital's strategic plan, feasibility and modifiability. Panel members individually rated each indicator on a scale of 1-9 for the above criteria. Median scores were assigned. Of an initial set of 50 indicators, 20 were finally selected to be assigned to the four BSC quadrants. These were financial (n = 4), customer or patient (n = 4), internal business or quality of care (n = 7) and innovation/learning or employee perspectives (n = 5). A need for stringent definitions, international benchmarking and standardized measurement methods was identified. BSC compels individual clinicians and managers to jointly work towards improving performance. This scorecard is now ready to be implemented by this hospital as a performance management tool for monitoring indicators, addressing measurement issues and enabling comparisons with hospitals in other settings. Copyright 2010 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
gochis, David; hooper, Rick; parodi, Antonio; Jha, Shantenu; Yu, Wei; Zaslavsky, Ilya; Ganapati, Dinesh
2014-05-01
The community WRF-Hydro system is currently being used in a variety of flood prediction and regional hydroclimate impacts assessment applications around the world. Despite its increasingly wide use, certain cyberinfrastructure bottlenecks exist in the setup, execution and post-processing of WRF-Hydro model runs. These bottlenecks result in wasted time, labor, data transfer bandwidth and computational resource use. Appropriate development and use of cyberinfrastructure to set up and manage WRF-Hydro modeling applications will streamline the entire workflow of hydrologic model predictions. This talk will present recent advances in the development and use of new open-source cyberinfrastructure tools for the WRF-Hydro architecture. These tools include new web-accessible pre-processing applications, supercomputer job management applications and automated verification and visualization applications. The tools will be described successively and then demonstrated in a set of flash flood use cases for recent destructive flood events in the U.S. and in Europe. Throughout, emphasis is placed on the implementation and use of community data standards for data exchange.
Redefining the genetics of Murine Gammaherpesvirus 68 via transcriptome-based annotation
Johnson, L. Steven; Willert, Erin K.; Virgin, Herbert W.
2010-01-01
Summary Viral genetic studies often focus on large open reading frames (ORFs) identified during genome annotation (ORF-based annotation). Here we provide a tool and software set for defining gene expression by murine gammaherpesvirus 68 (γHV68) nucleotide-by-nucleotide across the 119,450 basepair (bp) genome. These tools allowed us to determine that viral RNA expression was significantly more complex than predicted from ORF-based annotation, including over 73,000 nucleotides of unexpected transcription within 30 expressed genomic regions (EGRs). Approximately 90% of this RNA expression was antisense to genomic regions containing known large ORFs. We verified the existence of novel transcripts in three EGRs using standard methods to validate the approach and determined which parts of the transcriptome depend on protein or viral DNA synthesis. This redefines the genetic map of γHV68, indicates that herpesviruses contain significantly more genetic complexity than predicted from ORF-based genome annotations, and provides new tools and approaches for viral genetic studies. PMID:20542255
An Assessment of IMPAC - Integrated Methodology for Propulsion and Airframe Controls
NASA Technical Reports Server (NTRS)
Walker, G. P.; Wagner, E. A.; Bodden, D. S.
1996-01-01
This report documents the work done under a NASA sponsored contract to transition to industry technologies developed under the NASA Lewis Research Center IMPAC (Integrated Methodology for Propulsion and Airframe Control) program. The critical steps in IMPAC are exercised on an example integrated flight/propulsion control design for linear airframe/engine models of a conceptual STOVL (Short Take-Off and Vertical Landing) aircraft, and MATRIXX (TM) executive files to implement each step are developed. The results from the example study are analyzed and lessons learned are listed along with recommendations that will improve the application of each design step. The end product of this research is a set of software requirements for developing a user-friendly control design tool which will automate the steps in the IMPAC methodology. Prototypes for a graphical user interface (GUI) are sketched to specify how the tool will interact with the user, and it is recommended to build the tool around existing computer aided control design software packages.
Spencer, Jean L; Bhatia, Vivek N; Whelan, Stephen A; Costello, Catherine E; McComb, Mark E
2013-12-01
The identification of protein post-translational modifications (PTMs) is an increasingly important component of proteomics and biomarker discovery, but very few tools exist for performing fast and easy characterization of global PTM changes and differential comparison of PTMs across groups of data obtained from liquid chromatography-tandem mass spectrometry experiments. STRAP PTM (Software Tool for Rapid Annotation of Proteins: Post-Translational Modification edition) is a program that was developed to facilitate the characterization of PTMs using spectral counting and a novel scoring algorithm to accelerate the identification of differential PTMs from complex data sets. The software facilitates multi-sample comparison by collating, scoring, and ranking PTMs and by summarizing data visually. The freely available software (beta release) installs on a PC and processes data in protXML format obtained from files parsed through the Trans-Proteomic Pipeline. The easy-to-use interface allows examination of results at protein, peptide, and PTM levels, and the overall design offers tremendous flexibility that provides proteomics insight beyond simple assignment and counting.
NIRP Core Software Suite v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitener, Dustin Heath; Folz, Wesley; Vo, Duong
The NIRP Core Software Suite is a core set of code that supports multiple applications. It includes miscellaneous base code for data objects, mathematical equations, and user interface components; and the framework includes several fully-developed software applications that exist as stand-alone tools to complement other applications. The stand-alone tools are described below. Analyst Manager: An application to manage contact information for people (analysts) that use the software products. This information is often included in generated reports and may be used to identify the owners of calculations. Radionuclide Viewer: An application for viewing the DCFPAK radiological data; complements the Mixture Manager tool. Mixture Manager: An application to create and manage radionuclide mixtures that are commonly used in other applications. High Explosive Manager: An application to manage explosives and their properties. Chart Viewer: An application to view charts of data (e.g. meteorology charts). Other applications may use this framework to create charts specific to their data needs.
Kohonen, Pekka; Parkkinen, Juuso A.; Willighagen, Egon L.; Ceder, Rebecca; Wennerberg, Krister; Kaski, Samuel; Grafström, Roland C.
2017-01-01
Predicting unanticipated harmful effects of chemicals and drug molecules is a difficult and costly task. Here we utilize a ‘big data compacting and data fusion’ concept to capture diverse adverse outcomes on cellular and organismal levels. The approach generates, from a transcriptomics data set, a ‘predictive toxicogenomics space’ (PTGS) tool composed of 1,331 genes distributed over 14 overlapping cytotoxicity-related gene space components. Involving ∼2.5 × 10^8 data points and 1,300 compounds to construct and validate the PTGS, the tool serves to: explain dose-dependent cytotoxicity effects, provide a virtual cytotoxicity probability estimate intrinsic to omics data, predict chemically-induced pathological states in liver resulting from repeated dosing of rats, and furthermore, predict human drug-induced liver injury (DILI) from hepatocyte experiments. Analysing 68 DILI-annotated drugs, the PTGS tool outperforms and complements existing tests, leading to a hitherto unseen level of DILI prediction accuracy. PMID:28671182
SketchBio: a scientist's 3D interface for molecular modeling and animation.
Waldon, Shawn M; Thompson, Peter M; Hahn, Patrick J; Taylor, Russell M
2014-10-30
Because of the difficulties involved in learning and using 3D modeling and rendering software, many scientists hire programmers or animators to create models and animations. This both slows the discovery process and provides opportunities for miscommunication. Working with multiple collaborators, a tool was developed (based on a set of design goals) to enable them to directly construct models and animations. SketchBio is presented, a tool that incorporates state-of-the-art bimanual interaction and drop shadows to enable rapid construction of molecular structures and animations. It includes three novel features: crystal-by-example, pose-mode physics, and spring-based layout that accelerate operations common in the formation of molecular models. Design decisions and their consequences are presented, including cases where iterative design was required to produce effective approaches. The design decisions, novel features, and inclusion of state-of-the-art techniques enabled SketchBio to meet all of its design goals. These features and decisions can be incorporated into existing and new tools to improve their effectiveness.
The need for monetary information within corporate water accounting.
Burritt, Roger L; Christ, Katherine L
2017-10-01
A conceptual discussion is provided about the need to add monetary data to water accounting initiatives and how best to achieve this if companies are to become aware of the water crisis and to take actions to improve water management. Analysis of current water accounting initiatives reveals the monetary business case for companies to improve water management is rarely considered, there being a focus on physical information about water use. Three possibilities emerge for mainstreaming the integration of monetization into water accounting: add-on to existing water accounting frameworks and tools, develop new tools which include physical and monetary information from the start, and develop environmental management accounting (EMA) into a water-specific application and set of tools. The paper appraises these three alternatives and concludes that development of EMA would be the best way forward. Suggestions for further research include the need to examine the use of a transdisciplinary method to address the complexities of water accounting. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
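As a toy illustration of the kind of allocation policy such tools can evaluate (not one of the paper's algorithms), the sketch below assigns incoming session demands to clusters with a simple first-fit rule; cluster capacities and per-session demands are arbitrary numbers chosen for the example.

    # Toy first-fit allocation of session processing demands to clusters.
    clusters = [{"id": i, "capacity": 100.0, "used": 0.0} for i in range(4)]

    def allocate(demand):
        """Return the id of the first cluster with enough spare capacity, else None."""
        for c in clusters:
            if c["capacity"] - c["used"] >= demand:
                c["used"] += demand
                return c["id"]
        return None          # session rejected: no cluster can host it

    session_demands = [30, 55, 80, 25, 70, 40]      # made-up per-session demands
    placements = [allocate(d) for d in session_demands]
    print(placements)        # -> [0, 0, 1, 2, 2, 3]

More sophisticated policies (for example, best-fit or load balancing across clusters) can be swapped into the same harness, which is the kind of comparison the tools described above are intended to support.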
A validated set of tool pictures with matched objects and non-objects for laterality research.
Verma, Ark; Brysbaert, Marc
2015-01-01
Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.
Modular modelling with Physiome standards
Nickerson, David P.; Nielsen, Poul M. F.; Hunter, Peter J.
2016-01-01
Key points: The complexity of computational models is increasing, supported by research in modelling tools and frameworks. But relatively little thought has gone into design principles for complex models. We propose a set of design principles for complex model construction with the Physiome standard modelling protocol CellML. By following the principles, models are generated that are extensible and are themselves suitable for reuse in larger models of increasing complexity. We illustrate these principles with examples including an architectural prototype linking, for the first time, electrophysiology, thermodynamically compliant metabolism, signal transduction, gene regulation and synthetic biology. The design principles complement other Physiome research projects, facilitating the application of virtual experiment protocols and model analysis techniques to assist the modelling community in creating libraries of composable, characterised and simulatable quantitative descriptions of physiology. Abstract: The ability to produce and customise complex computational models has great potential to have a positive impact on human health. As the field develops towards whole-cell models and linking such models in multi-scale frameworks to encompass tissue, organ, or organism levels, reuse of previous modelling efforts will become increasingly necessary. Any modelling group wishing to reuse existing computational models as modules for their own work faces many challenges in the context of construction, storage, retrieval, documentation and analysis of such modules. Physiome standards, frameworks and tools seek to address several of these challenges, especially for models expressed in the modular protocol CellML. Aside from providing a general ability to produce modules, there has been relatively little research work on architectural principles of CellML models that will enable reuse at larger scales. To complement and support the existing tools and frameworks, we develop a set of principles to address this consideration. The principles are illustrated with examples that couple electrophysiology, signalling, metabolism, gene regulation and synthetic biology, together forming an architectural prototype for whole-cell modelling (including human intervention) in CellML. Such models illustrate how testable units of quantitative biophysical simulation can be constructed. Finally, future relationships between modular models so constructed and Physiome frameworks and tools are discussed, with particular reference to how such frameworks and tools can in turn be extended to complement and gain more benefit from the results of applying the principles. PMID:27353233
Tools for Local and Distributed Climate Data Access
NASA Astrophysics Data System (ADS)
Schweitzer, R.; O'Brien, K.; Burger, E. F.; Smith, K. M.; Manke, A. B.; Radhakrishnan, A.; Balaji, V.
2017-12-01
Last year we reported on our efforts to adapt existing tools to facilitate model development. During the lifecycle of a Climate Model Intercomparison Project (CMIP), data must be quality controlled before it can be published and studied. Like previous efforts, the next CMIP6 will produce an unprecedented volume of data. For an institution, modelling group or modeller, the volume of data is unmanageable without tools that organize and automate as many processes as possible. Even if a modelling group has tools for data and metadata management, it often falls on individuals to do the initial quality assessment for a model run with bespoke tools. Using individually crafted tools can lead to interruptions when project personnel change and may result in inconsistencies and duplication of effort across groups. This talk will expand on our experiences using available tools (Ferret/PyFerret, the Live Access Server, the GFDL Curator, the GFDL Model Development Database Interface and the THREDDS Data Server) to seamlessly automate the data assembly process to give users "one-click" access to a rich suite of Web-based analysis and comparison tools. On the surface, it appears that this collection of tools is well suited to the task, but our experience of the last year taught us that the data volume and distributed storage add a number of challenges in adapting the tools for this task. Quality control and initial evaluation add their own set of challenges. We will discuss how we addressed the needs of QC researchers by expanding standard tools to include specialized plots and by leveraging the configurability of the tools to add specific user-defined analysis operations so they are available to everyone using the system. We also report on our efforts to overcome some of the technical barriers to wide adoption of the tools by providing pre-built containers that are easily deployed in virtual machine and cloud environments. Finally, we will offer some suggestions for added features, configuration options and improved robustness that can make future implementations of similar systems operate faster and more reliably. Solving these challenges for data sets distributed narrowly across networks and storage systems points the way to solving similar problems associated with sharing data distributed across institutions and continents.
Kovacs Burns, Katharina; Bellows, Mandy; Eigenseher, Carol; Gallivan, Jennifer
2014-04-15
Extensive literature exists on public involvement or engagement, but it is uncertain what practical, tested and easy-to-use tools or guides exist specifically for initiating and implementing patient and family engagement. No comprehensive review and synthesis of general international published or grey literature on this specific topic was found. A systematic scoping review of published and grey literature is, therefore, appropriate for searching through the vast general engagement literature to identify 'patient/family engagement' tools and guides applicable in health organization decision-making, such as within Alberta Health Services in Alberta, Canada. This latter organization requested this search and review to inform the contents of a patient engagement resource kit for patients, providers and leaders. Search terms related to 'patient engagement', tools, guides, education and infrastructure or resources were applied to published literature databases and grey literature search engines. Grey literature also included the United States, Australia and Europe, where most known public engagement practices exist, and Canada as the location for this study. Inclusion and exclusion criteria were set and included English documents referencing 'patient engagement' with specific criteria and published between 1995 and 2011. For document analysis and synthesis, document analysis worksheets were used by three reviewers for the selected 224 published and 193 grey literature documents. Inter-rater reliability was ensured for the final reviews and syntheses of 76 published and 193 grey documents. Seven key themes emerged from the literature synthesis analysis and were identified for patient, provider and/or leader groups. Articles/items within each theme were clustered under main topic areas of 'tools', 'education' and 'infrastructure'. The synthesis and findings in the literature include 15 different terms and definitions for 'patient engagement', 17 different engagement models, numerous barriers and benefits, and 34 toolkits for various patient engagement and evaluation initiatives. Patient engagement is very complex. This scoping review for patient/family engagement tools and guides is a good start for a resource inventory and can guide the content development of a patient engagement resource kit to be used by patients/families, healthcare providers and administrators.
2013-01-01
Background Understanding the relationship between organizational context and research utilization is key to reducing the research-practice gap in health care. This is particularly true in the residential long term care (LTC) setting where relatively little work has examined the influence of context on research implementation. Reliable, valid measures and tools are a prerequisite for studying organizational context and research utilization. Few such tools exist in German. We thus translated three such tools (the Alberta Context Tool and two measures of research use) into German for use in German residential LTC. We point out challenges and strategies for their solution unique to German residential LTC, and demonstrate how resolving specific challenges in the translation of the health care aide instrument version streamlined the translation process of versions for registered nurses, allied health providers, practice specialists, and managers. Methods Our translation methods were based on best practices and included two independent forward translations, reconciliation of the forward translations, expert panel discussions, two independent back translations, reconciliation of the back translations, back translation review, and cognitive debriefing. Results We categorized the challenges in this translation process into seven categories: (1) differing professional education of Canadian and German care providers, (2) risk that German translations would become grammatically complex, (3) wordings at risk of being misunderstood, (4) phrases/idioms non-existent in German, (5) lack of corresponding German words, (6) limited comprehensibility of corresponding German words, and (7) target persons’ unfamiliarity with activities detailed in survey items. Examples of each challenge are described with strategies that we used to manage the challenge. Conclusion Translating an existing instrument is complex and time-consuming, but a rigorous approach is necessary to obtain instrument equivalence. Essential components were (1) involvement of and co-operation with the instrument developers and (2) expert panel discussions, including both target group and content experts. Equivalent translated instruments help researchers from different cultures to find a common language and undertake comparative research. As acceptable psychometric properties are a prerequisite for that, we are currently carrying out a study with that focus. PMID:24238613
2013-01-01
Background Multicellular organisms consist of cells of many different types that are established during development. Each type of cell is characterized by the unique combination of expressed gene products as a result of spatiotemporal gene regulation. Currently, a fundamental challenge in regulatory biology is to elucidate the gene expression controls that generate the complex body plans during development. Recent advances in high-throughput biotechnologies have generated spatiotemporal expression patterns for thousands of genes in the model organism fruit fly Drosophila melanogaster. Existing qualitative methods enhanced by a quantitative analysis based on computational tools we present in this paper would provide promising ways for addressing key scientific questions. Results We develop a set of computational methods and open source tools for identifying co-expressed embryonic domains and the associated genes simultaneously. To map the expression patterns of many genes into the same coordinate space and account for the embryonic shape variations, we develop a mesh generation method to deform a meshed generic ellipse to each individual embryo. We then develop a co-clustering formulation to cluster the genes and the mesh elements, thereby identifying co-expressed embryonic domains and the associated genes simultaneously. Experimental results indicate that the gene and mesh co-clusters can be correlated to key developmental events during the stages of embryogenesis we study. The open source software tool has been made available at http://compbio.cs.odu.edu/fly/. Conclusions Our mesh generation and machine learning methods and tools improve upon the flexibility, ease-of-use and accuracy of existing methods. PMID:24373308
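The co-clustering idea can be illustrated with an off-the-shelf stand-in. The sketch below is not the authors' formulation; it simply applies scikit-learn's spectral co-clustering to a synthetic gene-by-mesh-element expression matrix to obtain simultaneous row (gene) and column (mesh element) cluster labels.

    # Illustrative stand-in for the co-clustering step, run on synthetic data.
    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(0)
    expression = rng.random((200, 300))        # 200 genes x 300 mesh elements

    model = SpectralCoclustering(n_clusters=5, random_state=0)
    model.fit(expression)

    gene_labels = model.row_labels_            # co-cluster id per gene
    element_labels = model.column_labels_      # co-cluster id per mesh element
    print(np.bincount(gene_labels))
    print(np.bincount(element_labels))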
Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T
2015-04-30
New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
Yim, Won Cheol; Cushman, John C.
2017-07-22
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
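The query-distribution idea can be sketched in a few lines. The code below is an illustration of the general approach on a single multi-core machine, not DCBLAST itself (which schedules chunks across HPC nodes); the input file name, database name, and chunk count are assumptions for the example.

    # Toy illustration of query splitting plus parallel BLAST+ execution.
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def split_fasta(path, n_chunks):
        """Round-robin the records of a FASTA file into n_chunks smaller files."""
        records = open(path).read().split(">")[1:]
        chunk_paths = []
        for i in range(n_chunks):
            chunk_path = f"query_chunk_{i}.fasta"
            with open(chunk_path, "w") as fh:
                fh.write("".join(">" + rec for rec in records[i::n_chunks]))
            chunk_paths.append(chunk_path)
        return chunk_paths

    def run_blast(chunk_path):
        out_path = chunk_path.replace(".fasta", ".tsv")
        subprocess.run(["blastn", "-query", chunk_path, "-db", "local_nt",
                        "-outfmt", "6", "-out", out_path], check=True)
        return out_path

    if __name__ == "__main__":
        chunks = split_fasta("queries.fasta", n_chunks=8)
        with ProcessPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(run_blast, chunks))
        print("per-chunk result files:", results)   # concatenate for the full output

On an actual cluster, each chunk would instead be submitted as a separate scheduler job, which is the step DCBLAST automates.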
BioLemmatizer: a lemmatization tool for morphological processing of biomedical text
2012-01-01
Background The wide variety of morphological variants of domain-specific technical terms contributes to the complexity of performing natural language processing of the scientific literature related to molecular biology. For morphological analysis of these texts, lemmatization has been actively applied in the recent biomedical research. Results In this work, we developed a domain-specific lemmatization tool, BioLemmatizer, for the morphological analysis of biomedical literature. The tool focuses on the inflectional morphology of English and is based on the general English lemmatization tool MorphAdorner. The BioLemmatizer is further tailored to the biological domain through incorporation of several published lexical resources. It retrieves lemmas based on the use of a word lexicon, and defines a set of rules that transform a word to a lemma if it is not encountered in the lexicon. An innovative aspect of the BioLemmatizer is the use of a hierarchical strategy for searching the lexicon, which enables the discovery of the correct lemma even if the input Part-of-Speech information is inaccurate. The BioLemmatizer achieves an accuracy of 97.5% in lemmatizing an evaluation set prepared from the CRAFT corpus, a collection of full-text biomedical articles, and an accuracy of 97.6% on the LLL05 corpus. The contribution of the BioLemmatizer to accuracy improvement of a practical information extraction task is further demonstrated when it is used as a component in a biomedical text mining system. Conclusions The BioLemmatizer outperforms other tools when compared with eight existing lemmatizers. The BioLemmatizer is released as an open source software and can be downloaded from http://biolemmatizer.sourceforge.net. PMID:22464129
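The lookup-then-rules strategy described above can be caricatured in a few lines. The sketch below is not the BioLemmatizer implementation: the tiny lexicon, the fallback across part-of-speech tags (mimicking the hierarchical lexicon search), and the suffix rules are all illustrative.

    # Simplified lexicon-first lemmatizer with a rule-based fallback (illustrative).
    LEXICON = {
        ("proteins", "NNS"): "protein",
        ("binds", "VBZ"): "bind",
        ("analyses", "NNS"): "analysis",
    }
    SUFFIX_RULES = [("ies", "y"), ("s", "")]      # applied in order

    def lemmatize(word, pos):
        if (word, pos) in LEXICON:                # exact surface-form + POS hit
            return LEXICON[(word, pos)]
        for (surface, _), lemma in LEXICON.items():
            if surface == word:                   # fallback when the POS tag is inaccurate
                return lemma
        for suffix, replacement in SUFFIX_RULES:  # out-of-lexicon: apply suffix rules
            if word.endswith(suffix):
                return word[:len(word) - len(suffix)] + replacement
        return word

    print(lemmatize("analyses", "NNS"))   # -> analysis (lexicon)
    print(lemmatize("proteins", "NN"))    # -> protein (wrong POS, fallback still succeeds)
    print(lemmatize("receptors", "NNS"))  # -> receptor (suffix rule)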
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, Won Cheol; Cushman, John C.
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
The NSF ITR Project: Framework for the National Virtual Observatory
NASA Astrophysics Data System (ADS)
Szalay, A. S.; Williams, R. D.; NVO Collaboration
2002-05-01
Technological advances in telescope and instrument design during the last ten years, coupled with the exponential increase in computer and communications capability, have caused a dramatic and irreversible change in the character of astronomical research. Large-scale surveys of the sky from space and ground are being initiated at wavelengths from radio to x-ray, thereby generating vast amounts of high quality irreplaceable data. The potential for scientific discovery afforded by these new surveys is enormous. Entirely new and unexpected scientific results of major significance will emerge from the combined use of the resulting datasets, science that would not be possible from such sets used singly. However, their large size and complexity require tools and structures to discover the complex phenomena encoded within them. We plan to build the NVO framework both through coordinating diverse efforts already in existence and providing a focus for the development of capabilities that do not yet exist. The NVO we envisage will act as an enabling and coordinating entity to foster the development of further tools, protocols, and collaborations necessary to realize the full scientific potential of large astronomical datasets in the coming decade. The NVO must be able to change and respond to the rapidly evolving world of information technology. In spite of its underlying complex software, the NVO should be no harder for the average astronomer to use than today's brick-and-mortar observatories and telescopes. Development of these capabilities will require close interaction and collaboration with the information technology community and other disciplines facing similar challenges. We need to ensure that the tools we need exist or are built, while avoiding duplication of effort and drawing on the relevant experience of others.
The KIT Motion-Language Dataset.
Plappert, Matthias; Mandery, Christian; Asfour, Tamim
2016-12-01
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
The Virtual Physiological Human ToolKit.
Cooper, Jonathan; Cervenansky, Frederic; De Fabritiis, Gianni; Fenner, John; Friboulet, Denis; Giorgino, Toni; Manos, Steven; Martelli, Yves; Villà-Freixa, Jordi; Zasada, Stefan; Lloyd, Sharon; McCormack, Keith; Coveney, Peter V
2010-08-28
The Virtual Physiological Human (VPH) is a major European e-Science initiative intended to support the development of patient-specific computer models and their application in personalized and predictive healthcare. The VPH Network of Excellence (VPH-NoE) project is tasked with facilitating interaction between the various VPH projects and addressing issues of common concern. A key deliverable is the 'VPH ToolKit'--a collection of tools, methodologies and services to support and enable VPH research, integrating and extending existing work across Europe towards greater interoperability and sustainability. Owing to the diverse nature of the field, a single monolithic 'toolkit' is incapable of addressing the needs of the VPH. Rather, the VPH ToolKit should be considered more as a 'toolbox' of relevant technologies, interacting around a common set of standards. The latter apply to the information used by tools, including any data and the VPH models themselves, and also to the naming and categorizing of entities and concepts involved. Furthermore, the technologies and methodologies available need to be widely disseminated, and relevant tools and services easily found by researchers. The VPH-NoE has thus created an online resource for the VPH community to meet this need. It consists of a database of tools, methods and services for VPH research, with a Web front-end. This has facilities for searching the database, for adding or updating entries, and for providing user feedback on entries. Anyone is welcome to contribute.
NASA Astrophysics Data System (ADS)
See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz
2016-04-01
The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
Comparison of high pressure transient PVT measurements and model predictions. Part I.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felver, Todd G.; Paradiso, Nicholas Joseph; Evans, Gregory Herbert
2010-07-01
A series of experiments consisting of vessel-to-vessel transfers of pressurized gas using Transient PVT methodology has been conducted to provide a data set for optimizing heat transfer correlations in high pressure flow systems. In rapid expansions such as these, the heat transfer conditions are neither adiabatic nor isothermal. Compressible flow tools exist, such as NETFLOW, that can accurately calculate the pressure and other dynamical mechanical properties of such a system as a function of time. However, to properly evaluate the mass that has transferred as a function of time, these computational tools rely on heat transfer correlations that must be confirmed experimentally. In this work new data sets using helium gas are used to evaluate the accuracy of these correlations for receiver vessel sizes ranging from 0.090 L to 13 L and initial supply pressures ranging from 2 MPa to 40 MPa. The comparisons show that the correlations developed in the 1980s from sparse data sets perform well for the supply vessels but are not accurate for the receivers, particularly at early time during the transfers. This report focuses on the experiments used to obtain high quality data sets that can be used to validate computational models. Part II of this report discusses how these data were used to gain insight into the physics of gas transfer and to improve vessel heat transfer correlations. Network flow modeling and CFD modeling are also discussed.
Ajja, Rahma; Beets, Michael W; Chandler, Jessica; Kaczynski, Andrew T; Ward, Dianne S
2015-08-01
There is a growing interest in evaluating the physical activity (PA) and healthy eating (HE) policy and practice environment characteristics in settings frequented by youth (≤18years). This review evaluates the measurement properties of audit tools designed to assess PA and HE policy and practice environment characteristics in settings that care for youth (e.g., childcare, school, afterschool, summer camp). Three electronic databases, reference lists, educational department and national health organizations' web pages were searched between January 1980 and February 2014 to identify tools assessing PA and/or HE policy and practice environments in settings that care for youth (≤18years). Sixty-five audit tools were identified of which 53 individual tools met the inclusion criteria. Thirty-three tools assessed both the PA and HE domains, 6 assessed the PA domain and 14 assessed the HE domain solely. The majority of the tools were self-assessment tools (n=40), and were developed to assess the PA and/or HE environment in school settings (n=33), childcare (n=12), and after school programs (n=4). Four tools assessed the community at-large and had sections for assessing preschool, school and/or afterschool settings within the tool. The majority of audit tools lacked validity and/or reliability data (n=42). Inter-rater reliability and construct validity were the most frequently reported reliability (n=7) and validity types (n=5). Limited attention has been given to establishing the reliability and validity of audit tools for settings that care for youth. Future efforts should be directed towards establishing a strong measurement foundation for these important environmental audit tools. Published by Elsevier Inc.
Rabbani, Fauziah; Lalji, Sabrina Nh; Abbas, Farhat; Jafri, Sm Wasim; Razzak, Junaid A; Nabi, Naheed; Jahan, Firdous; Ajmal, Agha; Petzold, Max; Brommels, Mats; Tomson, Goran
2011-03-31
As a response to a changing operating environment, healthcare administrators are implementing modern management tools in their organizations. The balanced scorecard (BSC) is considered a viable tool in high-income countries to improve hospital performance. The BSC has not been applied to hospital settings in low-income countries nor has the context for implementation been examined. This study explored contextual perspectives in relation to BSC implementation in a Pakistani hospital. Four clinical units of this hospital were involved in the BSC implementation based on their willingness to participate. Implementation included sensitization of units towards the BSC, developing specialty specific BSCs and reporting of performance based on the BSC during administrative meetings. Pettigrew and Whipp's context (why), process (how) and content (what) framework of strategic change was used to guide data collection and analysis. Data collection methods included quantitative tools (a validated culture assessment questionnaire) and qualitative approaches including key informant interviews and participant observation. Method triangulation provided common and contrasting results between the four units. A participatory culture, supportive leadership, financial and non-financial incentives, the presentation of clear direction by integrating support for the BSC in policies, resources, and routine activities emerged as desirable attributes for BSC implementation. The two units that lagged behind were more involved in direct inpatient care and carried a considerable clinical workload. Role clarification and consensus about the purpose and benefits of the BSC were noted as key strategies for overcoming implementation challenges in two clinical units that were relatively ahead in BSC implementation. It was noted that, rather than seeking to replace existing information systems, initiatives such as the BSC could be readily adopted if they are built on existing infrastructures and data networks. Variable levels of the BSC implementation were observed in this study. Those intending to apply the BSC in other hospital settings need to ensure a participatory culture, clear institutional mandate, appropriate leadership support, proper reward and recognition system, and sensitization to BSC benefits.
Keitel, Kristina; D'Acremont, Valérie
2018-04-20
The lack of effective, integrated diagnostic tools poses a major challenge to the primary care management of febrile childhood illnesses. These limitations are especially evident in low-resource settings and are often inappropriately compensated by antimicrobial over-prescription. Interactive electronic decision trees (IEDTs) have the potential to close these gaps: guiding antibiotic use and better identifying serious disease. This narrative review summarizes existing IEDTs, to provide an overview of their degree of validation, as well as to identify gaps in current knowledge and prospects for future innovation. A structured literature review in PubMed and Embase was complemented by a Google search and contact with developers. Ten integrated IEDTs were identified: three (eIMCI, REC, and Bangladesh digital IMCI) based on Integrated Management of Childhood Illnesses (IMCI); four (SL eCCM, MEDSINC, e-iCCM, and D-Tree eCCM) on Integrated Community Case Management (iCCM); two (ALMANACH, MSFeCARE) with a modified IMCI content; and one (ePOCT) that integrates novel content with biomarker testing. The types of publications and evaluation studies varied greatly: the content and evidence base were published for two (ALMANACH and ePOCT), and both were validated in efficacy studies. Other types of evaluations, such as compliance and acceptability, were available for D-Tree eCCM, eIMCI, and ALMANACH. Several evaluations are still ongoing. Future prospects include conducting effectiveness and impact studies using data gathered through larger studies to adapt the medical content to local epidemiology, improving the software and sensors, and assessing factors that influence compliance and scale-up. IEDTs are valuable tools that have the potential to improve management of febrile children in primary care and increase the rational use of diagnostics and antimicrobials. Next steps in the evidence pathway should be larger effectiveness and impact studies (including cost analysis) and continuous integration of clinically useful diagnostic and treatment innovations. Copyright © 2018. Published by Elsevier Ltd.
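To make the notion of an IEDT concrete, the sketch below shows one minimal way such a tree could be represented and traversed in software; the questions and recommendations are purely illustrative placeholders and are not taken from IMCI, iCCM, or any of the tools named above.

    # Toy representation and traversal of an interactive electronic decision tree.
    TREE = {
        "question": "Fever above the referral threshold?",
        "yes": {
            "question": "Any danger sign present?",
            "yes": "Refer to a higher level of care",
            "no": "Perform point-of-care test and follow the result-specific branch",
        },
        "no": "Manage symptomatically; antimicrobials not indicated at this step",
    }

    def traverse(node, ask):
        """Walk the tree; ask is a callable returning True/False for each question."""
        while isinstance(node, dict):
            node = node["yes"] if ask(node["question"]) else node["no"]
        return node            # a leaf holds the recommended action

    # Example run with scripted answers (fever: yes, danger sign: no).
    answers = iter([True, False])
    print(traverse(TREE, lambda question: next(answers)))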
A RESTful API for accessing microbial community data for MG-RAST
Wilke, Andreas; Bischof, Jared; Harrison, Travis; ...
2015-01-08
Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programming interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service.
Bridge, Heather; Smolskis, Mary; Bianchine, Peter; Dixon, Dennis O; Kelly, Grace; Herpin, Betsey; Tavel, Jorge
2009-08-01
A clinical research protocol document must reflect both sound scientific rationale as well as local, national and, when applicable, international regulatory and human subject protections requirements. These requirements originate from a variety of sources, undergo frequent revision and are subject to interpretation. Tools to assist clinical investigators in the production of clinical protocols could facilitate navigating these requirements and ultimately increase the efficiency of clinical research. The National Institute of Allergy and Infectious Diseases (NIAID) developed templates for investigators to serve as the foundation for protocol development. These protocol templates are designed as tools to support investigators in developing clinical protocols. NIAID established a series of working groups to determine how to improve its capacity to conduct clinical research more efficiently and effectively. The Protocol Template Working Group was convened to determine what protocol templates currently existed within NIAID and whether standard NIAID protocol templates should be produced. After review and assessment of existing protocol documents and requirements, the group reached consensus about required and optional content, determined the format and identified methods for distribution as well as education of investigators in the use of these templates. The templates were approved by the NIAID Executive Committee in 2006 and posted as part of the NIAID Clinical Research Toolkit [1] website for broad access. These documents require scheduled revisions to stay current with regulatory and policy changes. The structure of any clinical protocol template, whether comprehensive or specific to a particular study phase, setting or design, affects how it is used by investigators. Each structure presents its own set of advantages and disadvantages. While useful, protocol templates are not stand-alone tools for creating an optimal protocol document, but must be complemented by institutional resources and support. Education and guidance of investigators in the appropriate use of templates is necessary to ensure a complete yet concise protocol document. Due to changing regulatory requirements, clinical protocol templates cannot become static, but require frequent revisions.
A RESTful API for Accessing Microbial Community Data for MG-RAST
Wilke, Andreas; Bischof, Jared; Harrison, Travis; Brettin, Tom; D'Souza, Mark; Gerlach, Wolfgang; Matthews, Hunter; Paczian, Tobias; Wilkening, Jared; Glass, Elizabeth M.; Desai, Narayan; Meyer, Folker
2015-01-01
Metagenomic sequencing has produced significant amounts of data in recent years. For example, as of summer 2013, MG-RAST has been used to annotate over 110,000 data sets totaling over 43 Terabases. With metagenomic sequencing finding even wider adoption in the scientific community, the existing web-based analysis tools and infrastructure in MG-RAST provide limited capability for data retrieval and analysis, such as comparative analysis between multiple data sets. Moreover, although the system provides many analysis tools, it is not comprehensive. By opening MG-RAST up via a web services API (application programming interface) we have greatly expanded access to MG-RAST data, as well as provided a mechanism for the use of third-party analysis tools with MG-RAST data. This RESTful API makes all data and data objects created by the MG-RAST pipeline accessible as JSON objects. As part of the DOE Systems Biology Knowledgebase project (KBase, http://kbase.us) we have implemented a web services API for MG-RAST. This API complements the existing MG-RAST web interface and constitutes the basis of KBase's microbial community capabilities. In addition, the API exposes a comprehensive collection of data to programmers. This API, which uses a RESTful (Representational State Transfer) implementation, is compatible with most programming environments and should be easy to use for end users and third parties. It provides comprehensive access to sequence data, quality control results, annotations, and many other data types. Where feasible, we have used standards to expose data and metadata. Code examples are provided in a number of languages both to show the versatility of the API and to provide a starting point for users. We present an API that exposes the data in MG-RAST for consumption by our users, greatly enhancing the utility of the MG-RAST service. PMID:25569221
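To make the programmatic access pattern described above concrete, the following is a minimal sketch of a client retrieving a JSON object from the MG-RAST RESTful API. The base URL, resource path, verbosity parameter and accession ID follow common MG-RAST API conventions but should be treated as assumptions here rather than as an excerpt from the paper's own code examples.

```python
# Minimal sketch: query the MG-RAST RESTful API for a metagenome record.
# The base URL, resource name, query parameter, and accession below are
# assumptions for illustration; consult the MG-RAST API documentation for
# the exact endpoints.
import json
import urllib.request

BASE = "https://api.mg-rast.org"          # assumed API root
ACCESSION = "mgm4440026.3"                # placeholder metagenome ID

def get_json(path):
    """Fetch a resource from the API and decode the returned JSON object."""
    with urllib.request.urlopen(f"{BASE}/{path}") as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Metadata for one metagenome, returned by the pipeline as JSON.
    record = get_json(f"metagenome/{ACCESSION}?verbosity=metadata")
    print(record.get("name"), record.get("status"))
```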
QUADrATiC: scalable gene expression connectivity mapping for repurposing FDA-approved therapeutics.
O'Reilly, Paul G; Wen, Qing; Bankhead, Peter; Dunne, Philip D; McArt, Darragh G; McPherson, Suzanne; Hamilton, Peter W; Mills, Ken I; Zhang, Shu-Dong
2016-05-04
Gene expression connectivity mapping has proven to be a powerful and flexible tool for research. Its application has been shown in a broad range of research topics, most commonly as a means of identifying potential small molecule compounds, which may be further investigated as candidates for repurposing to treat diseases. The public release of voluminous data from the Library of Integrated Cellular Signatures (LINCS) programme further enhanced the utility and potential of gene expression connectivity mapping in biomedicine. We describe QUADrATiC ( http://go.qub.ac.uk/QUADrATiC ), a user-friendly tool for the exploration of gene expression connectivity on the subset of the LINCS data set corresponding to FDA-approved small molecule compounds. It enables the identification of compounds with potential for therapeutic repurposing. The software is designed to cope with the increased volume of data over existing tools by taking advantage of multicore computing architectures to provide a scalable solution, which may be installed and operated on a range of computers, from laptops to servers. This scalability is provided by the use of the modern concurrent programming paradigm provided by the Akka framework. The QUADrATiC graphical user interface (GUI) has been developed using advanced JavaScript frameworks, providing novel visualization capabilities for further analysis of connections. There is also a web services interface, allowing integration with other programs or scripts. QUADrATiC has been shown to provide an improvement over existing connectivity map software, in terms of scope (based on the LINCS data set), applicability (using FDA-approved compounds), usability and speed. It offers potential to biological researchers to analyze transcriptional data and generate potential therapeutics for focussed study in the lab. QUADrATiC represents a step change in the process of investigating gene expression connectivity and provides more biologically relevant results than previous alternative solutions.
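The core idea behind connectivity mapping, matching a signed query gene signature against reference expression profiles, can be illustrated with a short sketch. This is a simplified, generic connection score for intuition only; it is not QUADrATiC's actual scoring algorithm, and the gene names are placeholders.

```python
# Simplified illustration of gene-expression connectivity scoring: a signed
# query signature is compared against a reference ranked gene list (most
# up-regulated first). This is NOT QUADrATiC's exact algorithm, only a
# sketch of the underlying idea.
def connection_score(query_signature, reference_ranking):
    """query_signature: dict gene -> +1 (up-regulated) or -1 (down-regulated).
    reference_ranking: genes ordered from most up- to most down-regulated.
    Returns a score in [-1, 1]; positive means the reference profile mimics
    the query signature, negative means it reverses it."""
    n = len(reference_ranking)
    # Map each gene to a symmetric rank value: +1 at the top of the list,
    # -1 at the bottom, near 0 in the middle.
    rank_value = {g: 1.0 - 2.0 * i / (n - 1) for i, g in enumerate(reference_ranking)}
    hits = [s * rank_value[g] for g, s in query_signature.items() if g in rank_value]
    if not hits:
        return 0.0
    return sum(hits) / len(hits)

# Toy example: the reference profile up-regulates GENE_A and down-regulates GENE_D.
reference = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
query = {"GENE_A": +1, "GENE_D": -1}
print(connection_score(query, reference))   # 1.0: the profiles agree
```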
A Public Database of Memory and Naive B-Cell Receptor Sequences.
DeWitt, William S; Lindau, Paul; Snyder, Thomas M; Sherwood, Anna M; Vignali, Marissa; Carlson, Christopher S; Greenberg, Philip D; Duerkopp, Natalie; Emerson, Ryan O; Robins, Harlan S
2016-01-01
The vast diversity of B-cell receptors (BCR) and secreted antibodies enables the recognition of, and response to, a wide range of epitopes, but this diversity has also limited our understanding of humoral immunity. We present a public database of more than 37 million unique BCR sequences from three healthy adult donors that is many fold deeper than any existing resource, together with a set of online tools designed to facilitate the visualization and analysis of the annotated data. We estimate the clonal diversity of the naive and memory B-cell repertoires of healthy individuals, and provide a set of examples that illustrate the utility of the database, including several views of the basic properties of immunoglobulin heavy chain sequences, such as rearrangement length, subunit usage, and somatic hypermutation positions and dynamics.
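As a rough illustration of the kind of clonal-diversity summary mentioned above, the sketch below computes Shannon entropy and a normalized clonality index from clone-size counts. This is a generic calculation for intuition, not the estimator used by the authors, and the clone sizes are toy values.

```python
# Illustrative sketch (not the paper's exact estimator): summarize repertoire
# clonal diversity from clone abundance counts using Shannon entropy and a
# normalized clonality index.
import math

def shannon_entropy(counts):
    """Shannon entropy (in nats) of a clone-size distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in probs)

def clonality(counts):
    """1 - normalized entropy: 0 = perfectly even repertoire, 1 = monoclonal."""
    n = len([c for c in counts if c > 0])
    if n <= 1:
        return 1.0
    return 1.0 - shannon_entropy(counts) / math.log(n)

# Toy clone-size table (number of sequences observed per clone).
clone_sizes = [120, 40, 10, 5, 1, 1, 1]
print(round(shannon_entropy(clone_sizes), 3), round(clonality(clone_sizes), 3))
```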
Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA
NASA Astrophysics Data System (ADS)
Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude
2017-05-01
When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections, extracted from the CPA100 MCTS code, have been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using CPA100's original code default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
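The sketch below illustrates, outside of any Monte Carlo code, how scored energy deposits around a point source can be binned into spherical shells to form a dose-point kernel expressed as the fraction of emitted energy per shell. It is a conceptual illustration only, not Geant4-DNA or CPA100 code, and the simulated deposit data are synthetic placeholders.

```python
# Illustration only (not Geant4-DNA itself): bin simulated energy deposits by
# radial distance from a point source into spherical shells to form a
# dose-point kernel, expressed as the fraction of emitted energy deposited
# per shell.
import numpy as np

def dose_point_kernel(radii_nm, edep_eV, source_energy_eV, n_bins=50, r_max_nm=None):
    """radii_nm: distances of individual energy deposits from the source.
    edep_eV: energy deposited at each of those points.
    Returns (shell centres, fraction of source energy deposited in each shell)."""
    r_max = r_max_nm if r_max_nm is not None else radii_nm.max()
    edges = np.linspace(0.0, r_max, n_bins + 1)
    shell_energy, _ = np.histogram(radii_nm, bins=edges, weights=edep_eV)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, shell_energy / source_energy_eV

# Toy data standing in for a track-structure simulation of a low-energy electron.
rng = np.random.default_rng(0)
r = rng.gamma(shape=2.0, scale=10.0, size=10_000)      # deposit distances (nm)
e = rng.uniform(10.0, 50.0, size=10_000)               # deposit energies (eV)
centres, dpk = dose_point_kernel(r, e, source_energy_eV=e.sum())
print(dpk.sum())   # close to 1.0: all emitted energy accounted for within r_max
```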
Toppar: an interactive browser for viewing association study results.
Juliusdottir, Thorhildur; Banasik, Karina; Robertson, Neil R; Mott, Richard; McCarthy, Mark I
2018-06-01
Data integration and visualization help geneticists make sense of large amounts of data. To help facilitate interpretation of genetic association data we developed Toppar, a customizable visualization tool that stores results from association studies and enables browsing over multiple results, by combining features from existing tools and linking to appropriate external databases. Detailed information on Toppar's features and functionality is available on our website http://mccarthy.well.ox.ac.uk/toppar/docs along with instructions on how to download, install and run Toppar. Our online version of Toppar is accessible from the website and can be test-driven using Firefox, Safari or Chrome on subsets of publicly available genome-wide association study data on anthropometric waist and body mass index traits (Locke et al., 2015; Shungin et al., 2015) from the Genetic Investigation of ANthropometric Traits consortium. Contact: totajuliusd@gmail.com.
Optimal SSN Tasking to Enhance Real-time Space Situational Awareness
NASA Astrophysics Data System (ADS)
Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.
2016-09-01
Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and to develop sensor tasking schedules based on operator-configured, state information-gain metrics that determine observation priorities. This improves the efficiency with which sensors target objects as awareness changes and new information is needed, rather than solely at predefined frequencies. A net-centric, service-oriented architecture (SOA) allows JCATS to be integrated into existing SSA systems. Testing has shown operationally relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.
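A greedy, information-gain-style tasking loop conveys the general idea of prioritizing observations by expected uncertainty reduction. The sketch below is a deliberately simplified toy model, not the JCATS algorithm or its finite-set-statistics machinery; the catalog values and the reduction factor are assumptions.

```python
# Hedged sketch of information-gain-driven sensor tasking (a greedy toy model,
# not the JCATS algorithm): at each tasking cycle, point the sensor at the
# catalog objects whose state uncertainty would be reduced the most.
def greedy_tasking(catalog, slots, reduction_factor=0.5):
    """catalog: dict object_id -> current uncertainty (e.g. covariance trace).
    slots: number of observations available this cycle.
    Returns the list of tasked object IDs and the updated catalog."""
    tasked = []
    updated = dict(catalog)
    for _ in range(slots):
        # Expected gain of observing an object = its current uncertainty
        # times the fraction removed by one observation.
        obj = max(updated, key=lambda o: updated[o] * reduction_factor)
        tasked.append(obj)
        updated[obj] *= (1.0 - reduction_factor)
    return tasked, updated

catalog = {"RSO-001": 9.0, "RSO-002": 4.0, "RSO-003": 1.0}
print(greedy_tasking(catalog, slots=3))
```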
NASA Astrophysics Data System (ADS)
Allgood, Glenn O.; Kuruganti, Phani Teja; Nutaro, James; Saffold, Jay
2009-05-01
Combat resiliency is the ability of a commander to prosecute, control, and consolidate his or her sphere of influence in adverse and changing conditions. To support this, an infrastructure must exist that allows the commander to view the world in varying degrees of granularity with sufficient levels of detail to permit confidence estimates to be levied against decisions and courses of action. An infrastructure such as this will include the ability to effectively communicate context and relevance within and across the battle space. To achieve this will require careful thought, planning, and understanding of a network and its capacity limitations in post-event command and control. Relevance and impact on any existing infrastructure must be fully understood prior to deployment to exploit the system's full capacity and capabilities. In this view, the combat communication network is considered an integral part of our national communication network and infrastructure. This paper will describe an analytical tool set developed at ORNL and RNI, incorporating complexity theory, advanced communications modeling, simulation, and visualization technologies, that could be used as a pre-planning tool or post-event reasoning application to support response and containment.
Integrating DICOM structured reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis (CAD) tools that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structured Reports (SR), which hold key radiological findings and measurements that are not part of the DICOM image, need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance of, and the method involved in, adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is an MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file in the existing MI2 Data Grid will be shown.
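Extracting the coded findings and numeric measurements from a DICOM-SR object is the kind of parsing step implied by the workflow above. The sketch below uses pydicom to walk an SR content tree; the file name is a placeholder, and real CAD-generated SR templates vary, so treat this as an illustrative assumption rather than the authors' grid services.

```python
# Hedged sketch: extract findings and measurements from a DICOM-SR object with
# pydicom, the kind of parsing needed before indexing an SR into a data grid.
# The file path is a placeholder; real CAD SRs vary in template structure.
import pydicom

def walk_sr(items, depth=0):
    """Recursively print coded concept names with their text or numeric values."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        if item.ValueType == "TEXT":
            print("  " * depth + f"{name}: {item.TextValue}")
        elif item.ValueType == "NUM":
            mv = item.MeasuredValueSequence[0]
            unit = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print("  " * depth + f"{name}: {mv.NumericValue} {unit}")
        else:
            print("  " * depth + f"{name} ({item.ValueType})")
        if "ContentSequence" in item:          # nested containers
            walk_sr(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("cad_report_sr.dcm")      # placeholder filename
walk_sr(ds.ContentSequence)
```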
Code Parallelization with CAPO: A User Manual
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Frumkin, Michael; Yan, Jerry; Biegel, Bryan (Technical Monitor)
2001-01-01
A software tool has been developed to assist the parallelization of scientific codes. This tool, CAPO, extends an existing parallelization toolkit, CAPTools, developed at the University of Greenwich, to generate OpenMP parallel codes for shared memory architectures. It is an interactive toolkit that transforms a serial Fortran application code into an equivalent parallel version of the software in a small fraction of the time normally required for a manual parallelization. We first discuss the way in which loop types are categorized and how efficient OpenMP directives can be defined and inserted into the existing code using in-depth interprocedural analysis. The use of the toolkit on a number of application codes, ranging from benchmarks to real-world applications, is presented. This demonstrates the great potential of using the toolkit to quickly parallelize serial programs, as well as the good performance achievable on a large number of processors. The second part of the document gives references to the parameters and the graphical user interface implemented in the toolkit. Finally, a set of tutorials is included for hands-on experience with this toolkit.
MTpy - Python Tools for Magnetotelluric Data Processing and Analysis
NASA Astrophysics Data System (ADS)
Krieger, Lars; Peacock, Jared; Thiel, Stephan; Inverarity, Kent; Kirkby, Alison; Robertson, Kate; Soeffky, Paul; Didana, Yohannes
2014-05-01
We present the Python package MTpy, which provides functions for the processing, analysis, and handling of magnetotelluric (MT) data sets. MT is a relatively immature and not widely applied geophysical method in comparison to other geophysical techniques such as seismology. As a result, data processing within the academic MT community is not thoroughly standardised and is often based on a loose collection of software adapted to the respective local specifications. We have developed MTpy to overcome problems that arise from missing standards and to simplify the general handling of MT data. MTpy is written in Python, and the open-source code is freely available from a GitHub repository. The setup follows the modular approach of successful geoscience software packages such as GMT or ObsPy. It contains sub-packages and modules for the various tasks within the standard workflow of MT data processing and interpretation. In order to allow the inclusion of already existing and well-established software, MTpy provides not only pure Python classes and functions, but also wrapping command-line scripts to run standalone tools, e.g. modelling and inversion codes. Our aim is to provide a flexible framework which is open for future dynamic extensions. MTpy has the potential to promote the standardisation of processing procedures and at the same time be a versatile supplement for existing algorithms. Here, we introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing, interpretation, and visualisation utilising MTpy on example data sets collected over different regions of Australia and the USA.
PSAMM: A Portable System for the Analysis of Metabolic Models
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2016-01-01
The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies. PMID:26828591
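The constraint-based analysis that PSAMM automates reduces, in its simplest form, to a linear program over a stoichiometric matrix. The sketch below solves a three-reaction toy network with SciPy to show the underlying flux balance calculation; it does not use PSAMM's own commands or YAML model format.

```python
# Illustration of the constraint-based (flux balance) analysis that tools like
# PSAMM automate, reduced to a three-reaction toy network and solved directly
# with SciPy; this is not PSAMM's own API.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions R1..R3)
# R1: -> A (uptake), R2: A -> B, R3: B -> (objective, e.g. biomass export)
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]], dtype=float)
bounds = [(0, 10), (0, 1000), (0, 1000)]   # flux bounds per reaction
c = [0, 0, -1]                             # maximize v3 == minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)            # expected: [10, 10, 10]
```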
SOX: Short Distance Neutrino Oscillations with Borexino
NASA Astrophysics Data System (ADS)
Bravo-Berguño, D.; Agostini, M.; Althenmüller, K.; Bellini, G.; Benziger, J.; Berton, N.; Bick, D.; Bonfini, G.; Caccianiga, B.; Cadonati, L.; Calaprice, F.; Caminata, A.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cribier, M.; D'Angelo, D.; Davini, S.; Derbin, A.; di Noto, L.; Durero, M.; Empl, A.; Etenko, A.; Farinon, S.; Fischer, V.; Fomenko, K.; Franco, D.; Gabriele, F.; Gaffiot, J.; Galbiati, C.; Gazzana, S.; Ghiano, C.; Giammarchi, M.; Göger-Neff, M.; Goretti, A.; Grandi, L.; Gromov, M.; Hagner, C.; Houdy, Th.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jonquères, N.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Lasserre, T.; Laubenstein, M.; Lehnert, B.; Lewke, T.; Link, J.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Maneschg, W.; Marcocci, S.; Maricic, J.; Meindl, Q.; Mention, G.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Mosteiro, P.; Muratova, V.; Musenich, R.; Oberauer, L.; Obolensky, M.; Ortica, F.; Otis, K.; Pallavicini, M.; Papp, L.; Perasso, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Rossi, N.; Saldanha, R.; Salvo, C.; Schönert, S.; Scola, L.; Simgen, H.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Veyssière, C.; Vivier, M.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Winter, J.; Wojcik, M.; Wright, A.; Wurm, M.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.; SOX Collaboration
2016-04-01
The Borexino detector has convincingly shown its outstanding performance in the sub-MeV regime through its unprecedented accomplishments in solar and geo-neutrino detection, which make it the ideal tool to unambiguously test the long-standing issue of the existence of a sterile neutrino, as suggested by several anomalies: the outputs of the LSND and MiniBooNE experiments, the results of the source calibration of the two gallium solar neutrino experiments, and the recently hinted reactor anomaly. The SOX project will exploit two sources, based on chromium and cerium, which, deployed under the experiment, will emit two intense beams of electron neutrinos (Cr) and electron antineutrinos (Ce). Interacting in the active volume of the liquid scintillator, each beam would create a spatial wave pattern in the case of oscillation of the neutrinos (or antineutrinos) into the sterile state, which would be the smoking gun proving the existence of the new sterile member of the neutrino family. Otherwise, its absence will allow setting a very stringent limit on its existence.
Mann, Devin M; Lin, Jenny J
2012-01-23
Studies have shown that lifestyle behavior changes are most effective to prevent onset of diabetes in high-risk patients. Primary care providers are charged with encouraging behavior change among their patients at risk for diabetes, yet the practice environment and training in primary care often do not support effective provider counseling. The goal of this study is to develop an electronic health record-embedded tool to facilitate shared patient-provider goal setting to promote behavioral change and prevent diabetes. The ADAPT (Avoiding Diabetes Thru Action Plan Targeting) trial leverages an innovative system that integrates evidence-based interventions for behavioral change with already-existing technology to enhance primary care providers' effectiveness to counsel about lifestyle behavior changes. Using principles of behavior change theory, the multidisciplinary design team utilized in-depth interviews and in vivo usability testing to produce a prototype diabetes prevention counseling system embedded in the electronic health record. The core element of the tool is a streamlined, shared goal-setting module within the electronic health record system. The team then conducted a series of innovative, "near-live" usability testing simulations to refine the tool and enhance workflow integration. The system also incorporates a pre-encounter survey to elicit patients' behavior-change goals to help tailor patient-provider goal setting during the clinical encounter and to encourage shared decision making. Lastly, the patients interact with a website that collects their longitudinal behavior data and allows them to visualize their progress over time and compare their progress with other study members. The finalized ADAPT system is now being piloted in a small randomized control trial of providers using the system with prediabetes patients over a six-month period. The ADAPT system combines the influential powers of shared goal setting and feedback, tailoring, modeling, contracting, reminders, and social comparisons to integrate evidence-based behavior-change principles into the electronic health record to maximize provider counseling efficacy during routine primary care clinical encounters. If successful, the ADAPT system may represent an adaptable and scalable technology-enabled behavior-change tool for all primary care providers. ClinicalTrials.gov Identifier NCT01473654.
Saturno, P J; Martinez-Nicolas, I; Robles-Garcia, I S; López-Soriano, F; Angel-García, D
2015-01-01
Pain is among the most important symptoms in terms of prevalence and cause of distress for cancer patients and their families. However, there is a lack of clearly defined measures of quality pain management to identify problems and monitor changes in improvement initiatives. We built a comprehensive set of evidence-based indicators following a four-step model: (1) review and systematization of existing guidelines to list evidence-based recommendations; (2) review and systematization of existing indicators matching the recommendations; (3) development of new indicators to complete a set of measures for the identified recommendations; and (4) pilot test (in hospital and primary care settings) for feasibility, reliability (kappa), and usefulness for the identification of quality problems using the lot quality acceptance sampling (LQAS) method and estimates of compliance. Twenty-two indicators were eventually pilot tested. Seventeen were feasible in hospitals and 12 in all settings. Feasibility barriers included difficulties in identifying target patients, deficient clinical records and low prevalence of cases for some indicators. Reliability was mostly very good or excellent (k > 0.8). Four indicators, all of them related to medication and prevention of side effects, had acceptable compliance at 75%/40% LQAS level. Other important medication-related indicators (i.e., adjustment to pain intensity, prescription for breakthrough pain) and indicators concerning patient-centred care (i.e., attention to psychological distress and educational needs) had very low compliance, highlighting specific quality gaps. A set of good practice indicators has been built and pilot tested as a feasible, reliable and useful quality monitoring tool, and underscoring particular and important areas for improvement. © 2014 European Pain Federation - EFIC®
Dedy, Nicolas J; Szasz, Peter; Louridas, Marisa; Bonrath, Esther M; Husslein, Heinrich; Grantcharov, Teodor P
2015-06-01
Nontechnical skills are critical for patient safety in the operating room (OR). As a result, regulatory bodies for accreditation and certification have mandated the integration of these competencies into postgraduate education. A generally accepted approach to the in-training assessment of nontechnical skills, however, is lacking. The goal of the present study was to develop an evidence-based and reliable tool for the in-training assessment of residents' nontechnical performance in the OR. The Objective Structured Assessment of Nontechnical Skills tool was designed as a 5-point global rating scale with descriptive anchors for each item, based on existing evidence-based frameworks of nontechnical skills, as well as resident training requirements. The tool was piloted on scripted videos and refined in an iterative process. The final version was used to rate residents' performance in recorded OR crisis simulations and during live observations in the OR. A total of 37 simulations and 10 live procedures were rated. Interrater agreement was good for total mean scores, both in simulation and in the real OR, with intraclass correlation coefficients >0.90 in all settings for average and single measures. Internal consistency of the scale was high (Cronbach's alpha = 0.80). The Objective Structured Assessment of Nontechnical Skills global rating scale was developed as an evidence-based tool for the in-training assessment of residents' nontechnical performance in the OR. Unique descriptive anchors allow for a criterion-referenced assessment of performance. Good reliability was demonstrated in different settings, supporting applications in research and education. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Iltis, G.; Caswell, T. A.; Dill, E.; Wilkins, S.; Lee, W. K.
2014-12-01
X-ray tomographic imaging of porous media has proven to be a valuable tool for investigating and characterizing the physical structure and state of both natural and synthetic porous materials, including glass bead packs, ceramics, soil and rock. Given that most synchrotron facilities have user programs which grant academic researchers access to facilities and x-ray imaging equipment free of charge, a key limitation for small research groups interested in conducting x-ray imaging experiments is the financial cost associated with post-experiment data analysis. While the cost of high-performance computing hardware continues to decrease, expenses associated with licensing commercial software packages for quantitative image analysis continue to increase, with current prices as high as $24,000 USD for a single-user license. As construction of the nation's newest synchrotron accelerator nears completion, a significant effort is being made here at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory (BNL), to provide an open-source, experiment-to-publication toolbox that reduces the financial and technical 'activation energy' required for performing sophisticated quantitative analysis of multidimensional porous media data sets collected using cutting-edge x-ray imaging techniques. Implementation focuses on leveraging existing open-source projects and developing additional tools for quantitative analysis. We will present an overview of the software suite in development at BNL, including major design decisions, a demonstration of several test cases illustrating currently available quantitative tools for the analysis and characterization of multidimensional porous media image data sets, and plans for their future development.
Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project
NASA Technical Reports Server (NTRS)
Wallick, Michael N.; Mittman, David S.; Shams, Khawaja S.; Bachmann, Andrew G.; Ludowise, Melissa
2013-01-01
This software simplifies the process of setting up an Eclipse IDE programming environment for the members of the cross-NASA center project, Ensemble. It achieves this by assembling all the necessary add-ons and custom tools/preferences. This software is unique in that it allows developers in the Ensemble Project (approximately 20 to 40 at any time) across multiple NASA centers to set up a development environment almost instantly and work on Ensemble software. The software automatically includes the source code repositories and other vital information and settings. The Eclipse IDE is an open-source development framework. The NASA (Ensemble-specific) version of the software includes Ensemble-specific plug-ins as well as settings for the Ensemble project. This software saves developers the time and hassle of setting up a programming environment, making sure that everything is set up in the correct manner for Ensemble development. Existing software (i.e., standard Eclipse) requires an intensive setup process that is both time-consuming and error prone. This software is built once by a single user and tested, allowing other developers to simply download and use the software.
Shen, Lishuang; Attimonelli, Marcella; Bai, Renkui; Lott, Marie T; Wallace, Douglas C; Falk, Marni J; Gai, Xiaowu
2018-06-01
Accurate mitochondrial DNA (mtDNA) variant annotation is essential for the clinical diagnosis of diverse human diseases. Substantial challenges to this process include the inconsistency in mtDNA nomenclatures, the existence of multiple reference genomes, and a lack of reference population frequency data. Clinicians need a simple bioinformatics tool that is user-friendly, and bioinformaticians need a powerful informatics resource for programmatic usage. Here, we report the development and functionality of the MSeqDR mtDNA Variant Tool set (mvTool), a one-stop mtDNA variant annotation and analysis Web service. mvTool is built upon the MSeqDR infrastructure (https://mseqdr.org), with contributions of expert-curated data from MITOMAP (https://www.mitomap.org) and HmtDB (https://www.hmtdb.uniba.it/hmdb). mvTool supports all mtDNA nomenclatures, converts variants to standard rCRS- and HGVS-based nomenclatures, and annotates novel mtDNA variants. Besides generic annotations from dbNSFP and the Variant Effect Predictor (VEP), mvTool provides allele frequencies in more than 47,000 germline mitogenomes, and disease and pathogenicity classifications from MSeqDR, MITOMAP, HmtDB and ClinVar (Landrum et al., 2013). mvTool also provides annotations for mtDNA somatic variants. The "mvTool API" is implemented for programmatic access using inputs in VCF, HGVS, or classical mtDNA variant nomenclatures. The results are reported as hyperlinked HTML tables, JSON, Excel, and VCF formats. MSeqDR mvTool is freely accessible at https://mseqdr.org/mvtool.php. © 2018 Wiley Periodicals, Inc.
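Programmatic submission of a variant and retrieval of JSON annotations, as described for the mvTool API, might look like the sketch below. The endpoint URL and parameter names are hypothetical placeholders rather than the documented MSeqDR interface; they only illustrate the submit-and-parse pattern.

```python
# Hedged sketch of programmatic access to an annotation web service such as the
# mvTool API. The endpoint URL and parameter names below are hypothetical
# placeholders, not the documented MSeqDR interface; they only illustrate the
# submit-variant / receive-JSON pattern described above.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://mseqdr.org/mvtool_api.php"        # hypothetical endpoint
payload = urllib.parse.urlencode({
    "variant": "m.3243A>G",                           # classical mtDNA nomenclature
    "output": "json",                                 # hypothetical parameter
}).encode("utf-8")

with urllib.request.urlopen(urllib.request.Request(ENDPOINT, data=payload)) as resp:
    annotation = json.loads(resp.read().decode("utf-8"))
print(annotation)
```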
Topological chaos, braiding and bifurcation of almost-cyclic sets.
Grover, Piyush; Ross, Shane D; Stremler, Mark A; Kumar, Pankaj
2012-12-01
In certain two-dimensional time-dependent flows, the braiding of periodic orbits provides a way to analyze chaos in the system through application of the Thurston-Nielsen classification theorem (TNCT). We expand upon earlier work that introduced the application of the TNCT to braiding of almost-cyclic sets, which are individual components of almost-invariant sets [Stremler et al., "Topological chaos and periodic braiding of almost-cyclic sets," Phys. Rev. Lett. 106, 114101 (2011)]. In this context, almost-cyclic sets are periodic regions in the flow with high local residence time that act as stirrers or "ghost rods" around which the surrounding fluid appears to be stretched and folded. In the present work, we discuss the bifurcation of the almost-cyclic sets as a system parameter is varied, which results in a sequence of topologically distinct braids. We show that, for Stokes' flow in a lid-driven cavity, these various braids give good lower bounds on the topological entropy over the respective parameter regimes in which they exist. We make the case that a topological analysis based on spatiotemporal braiding of almost-cyclic sets can be used for analyzing chaos in fluid flows. Hence, we further develop a connection between set-oriented statistical methods and topological methods, which promises to be an important analysis tool in the study of complex systems.
Manning, Joseph C; Walker, Gemma M; Carter, Tim; Aubeeluck, Aimee; Witchell, Miranda; Coad, Jane
2018-04-12
Currently, no standardised, evidence-based assessment tool for assessing immediate self-harm and suicide in acute paediatric inpatient settings exists. The aim of this study is to develop and test the psychometric properties of an assessment tool that identifies immediate risk of self-harm and suicide in children and young people (10-19 years) in acute paediatric hospital settings. Development phase: This phase involved a scoping review of the literature to identify and extract items from previously published suicide and self-harm risk assessment scales. Using a modified electronic Delphi approach, these items will then be rated according to their relevance for assessment of immediate suicide or self-harm risk by expert professionals. Inclusion of items will be determined by 65%-70% consensus between raters. Subsequently, a panel of expert members will convene to determine the face validity, appropriate phrasing, item order and response format for the finalised items.Psychometric testing phase: The finalised items will be tested for validity and reliability through a multicentre, psychometric evaluation. Psychometric testing will be undertaken to determine the following: internal consistency, inter-rater reliability, convergent, divergent validity and concurrent validity. Ethical approval was provided by the National Health Service East Midlands-Derby Research Ethics Committee (17/EM/0347) and full governance clearance received by the Health Research Authority and local participating sites. Findings from this study will be disseminated to professionals and the public via peer-reviewed journal publications, popular social media and conference presentations. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Web mining in soft computing framework: relevance, state of the art and future directions.
Pal, S K; Talwar, V; Mitra, P
2002-01-01
The paper summarizes the different characteristics of Web data, the basic components of Web mining and its different types, and the current state of the art. The reason for considering Web mining as a field separate from data mining is explained. The limitations of some of the existing Web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) is highlighted. A survey of the existing literature on "soft Web mining" is provided along with the commercially available systems. The prospective areas of Web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing "soft Web mining" systems is explained. An extensive bibliography is also provided.
Taube-Schiff, M; El Morr, C; Counsell, A; Mehak, Adrienne; Gollan, J
2018-05-01
WHAT IS KNOWN ON THE SUBJECT?: The psychometrics of the CUB measure have been tested within an inpatient psychiatric setting. Results show that the CUB has two factors that reflect patients' approach and avoidance of dimensions of the treatment milieu, and that an increase of approach and decrease of avoidance are correlated with discharge. No empirical research has examined the validity of the CUB in a day hospital programme. WHAT THIS ARTICLE ADDS TO EXISTING KNOWLEDGE?: This study was the first to address the validity of this questionnaire within a psychiatric day hospital setting. This now allows other mental health service providers to use this questionnaire following administration of patient engagement interventions (such as behavioural activation), which are routinely used within this type of setting. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: Our results can enable healthcare providers to employ an effective and psychometrically validated tool in a day hospital setting to measure treatment outcomes and provide reflections of patients' approach and avoidance behaviours. Introduction We evaluated the Checklist of Unit Behaviours (CUB) questionnaire in a novel mental health setting: a day hospital within a large acute care general hospital. No empirical evidence exists as yet on the validity of this measure in this type of treatment setting. The CUB measures two factors, avoidance or approach, of the patients' engagement with the treatment milieu within the previous 24 hr. Aim A confirmatory factor analysis (CFA) was conducted to validate the CUB's original two-factor structure in an outpatient day programme. Methods Psychiatric outpatients (n = 163) completed the CUB daily while participating in a day hospital programme in Toronto, Canada. Results A CFA was used to confirm the CUB factors but resulted in a poorly fitting model for our sample, χ²(103) = 278.59, p < .001, CFI = 0.80, RMSEA = 0.10, SRMR = 0.10. Questions 5, 8 and 10 had higher loadings on a third factor revealed through exploratory factor analysis. We believe this factor, "Group Engagement," reflects the construct of group-related issues. Discussion The CUB was a practical and useful tool in our psychiatric day hospital setting at a large acute care general hospital. Implications for practice Our analysis identified group engagement as a critical variable in day programmes, as patients have autonomy regarding staying in or leaving the programme. © 2017 John Wiley & Sons Ltd.
Mental Health Services in NCAA Division I Athletics: A Survey of Head ATCs.
Sudano, Laura E; Miles, Christopher M
There is a growing awareness of the importance of mental health care in National Collegiate Athletic Association (NCAA) student-athletes; however, there is a lack of literature on mental health resources in collegiate settings. Identifying current practices can set the stage to improve the delivery of care. There is great variability in resources and current practices and no "standard of care" exists. Observational, quantitative. Level 5. One hundred twenty-seven (36% response rate) head athletic trainers at Division I NCAA member colleges completed a web-based survey. Questions assessed several aspects of mental health clinicians, perception of care coordination, and screening. Seventy-two percent of respondents noted that counseling took place in a counseling center, and 20.5% of respondents indicated that they had a mental health provider who worked in the athletic training room. Mental health clinician credentials included marriage and family therapist, psychologist, clinical social worker, and psychiatrist. The majority of athletic trainers (ATCs) noted that they are satisfied with the feedback from the mental health provider about the student-athletes' mental health (57.3%) and believe that they would be able to provide better care to student-athletes if mental health services occurred onsite in the training room (46.4%). Fewer than half (43%) indicated that they use screening instruments to assess for mental health disorders. There is wide variability on how mental health services are provided to NCAA Division 1 student-athletes. Some mental health care providers are located offsite, while some provide care in the training room setting. Also, there are inconsistencies in the use of standardized screening tools for mental health evaluation. There is no standard collaborative or integrated care delivery model for student-athletes. Opportunities exist for standardization through integrated care models and increased use of validated screening tools to deliver comprehensive care to student-athletes.
Cognitive assessment tools in Asia: a systematic review.
Rosli, Roshaslina; Tan, Maw Pin; Gray, William Keith; Subramanian, Pathmawathi; Chin, Ai-Vyrn
2016-02-01
The prevalence of dementia is increasing faster in Asia than in any other continent. However, the applicability of existing cognitive assessment tools is limited by differences in educational and cultural factors in this setting. We conducted a systematic review of published studies on cognitive assessment tools in Asia. We aimed to rationalize the results of available studies which evaluated the validity of cognitive tools for the detection of cognitive impairment and to identify the issues surrounding the available cognitive impairment screening tools in Asia. Five electronic databases (CINAHL, MEDLINE, Embase, Cochrane Library, and Science Direct) were searched using the keywords dementia OR Alzheimer OR cognitive impairment AND screen OR measure OR test OR tool OR instrument OR assessment, and 2,381 articles were obtained. Thirty-eight articles, evaluating 28 tools in seven Asian languages, were included. Twenty-nine (76%) of the studies had been conducted in East Asia, with only four studies conducted in South Asia and no studies from northern, western, or central Asia or Indochina. Local-language translations of the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) were assessed in 15 and six studies, respectively. Only three tools (the Korean Dementia Screening Questionnaire, the Picture-based Memory Intelligence Scale, and the revised Hasegawa Dementia Screen) were derived de novo from Asian populations. These tools were assessed in five studies. Highly variable cut-offs were reported for the MMSE (17-29/30) and MoCA (21-26/30), with 13/19 (68%) of studies reporting educational bias. Few cognitive assessment tools have been validated in Asia, with no published validation studies for many Asian nations and languages. In addition, many available tools display educational bias. Future research should include concerted efforts to develop culturally appropriate tools with minimal educational bias.
TaxI: a software tool for DNA barcoding using distance methods
Steinke, Dirk; Vences, Miguel; Salzburger, Walter; Meyer, Axel
2005-01-01
DNA barcoding is a promising approach to the diagnosis of biological diversity in which DNA sequences serve as the primary key for information retrieval. Most existing software for evolutionary analysis of DNA sequences was designed for phylogenetic analyses and, hence, those algorithms do not offer appropriate solutions for the rapid but precise analyses needed for DNA barcoding, and are also unable to process the often large comparative datasets. We developed a flexible software tool for DNA taxonomy, named TaxI. This program calculates sequence divergences between a query sequence (taxon to be barcoded) and each sequence of a dataset of reference sequences defined by the user. Because the analysis is based on separate pairwise alignments, this software is also able to work with sequences characterized by multiple insertions and deletions that are difficult to align in large sequence sets (i.e. thousands of sequences) by multiple alignment algorithms because of computational restrictions. Here, we demonstrate the utility of this approach with two datasets of fish larvae and juveniles from Lake Constance and juvenile land snails under different models of sequence evolution. Sets of ribosomal 16S rRNA sequences, characterized by multiple indels, performed as well as or better than cox1 sequence sets in assigning sequences to species, demonstrating the suitability of rRNA genes for DNA barcoding. PMID:16214755
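The distance-based assignment idea behind TaxI can be illustrated with a few lines of code: compute a pairwise divergence between the query and every reference and report the closest taxon. This sketch assumes pre-aligned, equal-length sequences and uses uncorrected p-distances; TaxI itself performs separate pairwise alignments and supports different models of sequence evolution.

```python
# Simplified illustration of distance-based barcoding in the spirit of TaxI:
# compute pairwise uncorrected p-distances between a query and each reference
# and report the closest reference taxon. The sequences here are toy values
# assumed to be pre-aligned and of equal length.
def p_distance(a, b):
    """Fraction of aligned positions that differ (gaps and Ns ignored)."""
    pairs = [(x, y) for x, y in zip(a, b) if x not in "-N" and y not in "-N"]
    diffs = sum(1 for x, y in pairs if x != y)
    return diffs / len(pairs)

def assign(query, references):
    """references: dict taxon_name -> aligned sequence. Returns (best taxon, distance)."""
    return min(((name, p_distance(query, seq)) for name, seq in references.items()),
               key=lambda t: t[1])

refs = {"Species_A": "ACGTACGTAC", "Species_B": "ACGTTCGTTC"}
print(assign("ACGTACGTAT", refs))   # ('Species_A', 0.1)
```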
Shahmoradi, Leila; Safadari, Reza; Jimma, Worku
2017-09-01
Healthcare is a knowledge-driven process, and thus knowledge management and the tools to manage knowledge in the healthcare sector are gaining attention. The aim of this systematic review is to investigate knowledge management implementation and the knowledge management tools used in healthcare for informed decision making. Three databases, two journal websites and Google Scholar were used as sources for the review. The key terms used to search relevant articles include: "Healthcare and Knowledge Management"; "Knowledge Management Tools in Healthcare" and "Community of Practices in healthcare". It was found that utilization of knowledge management in healthcare is encouraging. A number of opportunities exist for knowledge management implementation, though there are some barriers as well. Some of the opportunities that can transform healthcare are advances in health information and communication technology, clinical decision support systems, electronic health record systems, communities of practice and advanced care planning. Providing the right knowledge at the right time, i.e., at the point of decision making, by implementing knowledge management in healthcare is paramount. To do so, it is very important to use appropriate tools for knowledge management and user-friendly systems, because they can significantly improve the quality and safety of care provided for patients in both hospital and home settings.
RGmatch: matching genomic regions to proximal genes in omics data integration.
Furió-Tarí, Pedro; Conesa, Ana; Tarazona, Sonia
2016-11-22
The integrative analysis of multiple genomics data sets often requires that genome coordinate-based signals be associated with proximal genes. The relative location of a genomic region with respect to the gene (gene area) is important for functional data interpretation; hence, algorithms that match regions to genes should be able to deliver insight into this information. In this work we review the tools that are publicly available for making region-to-gene associations. We also present a novel method, RGmatch, a flexible and easy-to-use Python tool that computes associations either at the gene, transcript, or exon level, applying a set of rules to annotate each region-gene association with the region location within the gene. RGmatch can be applied to any organism as long as genome annotation is available. Furthermore, we qualitatively and quantitatively compare RGmatch to other tools. RGmatch simplifies the association of a genomic region with its closest gene. At the same time, it is a powerful tool because the rules used to annotate these associations are very easy to modify according to the researcher's specific interests. Some important differences between RGmatch and other similar tools already in existence are RGmatch's flexibility, its wide range of user options, its compatibility with any annotatable organism, and its comprehensive and user-friendly output.
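A minimal sketch of region-to-gene association is shown below: each region is linked to the gene with the nearest transcription start site and labelled with a coarse gene-area tag. The rules, window size, and gene coordinates are illustrative assumptions, not RGmatch's actual rule set or output format.

```python
# Hedged sketch of region-to-gene association in the spirit of RGmatch (not its
# actual rules): each region is linked to the gene with the closest TSS and
# labelled with a coarse gene-area tag. Gene coordinates below are toy values.
def closest_gene(region, genes, promoter_window=2000):
    """region: (chrom, start, end); genes: list of dicts with chrom, tss, start, end, name.
    Returns (gene_name, area_label, distance_to_tss)."""
    centre = (region[1] + region[2]) // 2
    candidates = [g for g in genes if g["chrom"] == region[0]]
    gene = min(candidates, key=lambda g: abs(g["tss"] - centre))
    dist = abs(gene["tss"] - centre)
    if gene["start"] <= centre <= gene["end"]:
        area = "gene_body"
    elif dist <= promoter_window:
        area = "promoter"
    else:
        area = "upstream/intergenic"
    return gene["name"], area, dist

genes = [{"chrom": "chr1", "name": "GENE_X", "tss": 10_000, "start": 10_000, "end": 15_000}]
print(closest_gene(("chr1", 8_500, 9_200), genes))   # ('GENE_X', 'promoter', 1150)
```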
NASA Astrophysics Data System (ADS)
Prasad, U.; Rahabi, A.
2001-05-01
The following utilities, developed for dumping HDF-EOS format data, are of special use for Earth science data from NASA's Earth Observing System (EOS). This poster demonstrates their use and application. The first four tools take HDF-EOS data files as input.
HDF-EOS Metadata Dumper (metadmp): extracts metadata from EOS data granules. It operates by simply copying blocks of metadata from the file to the standard output and does not process the metadata in any way. Since all metadata in EOS granules are encoded in the Object Description Language (ODL), the output of metadmp will be in the form of complete ODL statements. EOS data granules may contain up to three different sets of metadata (Core, Archive, and Structural Metadata).
HDF-EOS Contents Dumper (heosls): displays the contents of HDF-EOS files. This utility provides detailed information on the POINT, SWATH, and GRID data sets in the files; for example, it will list the geolocation fields, data fields, and objects.
HDF-EOS ASCII Dumper (asciidmp): extracts fields from EOS data granules into plain ASCII text. The output from asciidmp should be easily human readable, and with minor editing it can be ingested by any application with ASCII import capabilities.
HDF-EOS Binary Dumper (bindmp): dumps HDF-EOS objects in binary format. This is useful for feeding the output into existing programs that do not understand HDF, for example custom software and COTS products.
HDF-EOS User Friendly Metadata (UFM): useful for viewing ECS metadata. UFM takes an EOSDIS ODL metadata file and produces an HTML report of the metadata for display using a web browser.
HDF-EOS METCHECK (METCHECK): can be invoked from either a Unix or DOS environment with a set of command-line options that direct the tool's inputs and output. METCHECK validates the inventory metadata (.met file) using the descriptor file (.desc) as the reference. The tool takes a (.desc) file and a (.met) ODL file as inputs and generates a simple output file containing the results of the checking process.
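Since the dump utilities write to standard output, they can be driven from scripts. The sketch below assumes that metadmp and heosls each accept a granule filename as their only argument, which is an assumption for illustration; the granule name is a placeholder, and the tools' real command-line options should be taken from their usage messages.

```python
# Hedged sketch: driving the dump utilities described above from Python. The
# assumption that each tool accepts the granule filename as its only argument
# and writes to standard output is for illustration only; consult the tools'
# own usage messages for the real command-line options.
import subprocess

GRANULE = "MOD021KM.A2000055.0000.hdf"     # placeholder HDF-EOS granule name

# Extract the ODL-encoded metadata blocks (core, archive, structural).
metadata = subprocess.run(["metadmp", GRANULE], capture_output=True, text=True).stdout

# List the POINT/SWATH/GRID objects contained in the granule.
listing = subprocess.run(["heosls", GRANULE], capture_output=True, text=True).stdout

print(metadata[:200])
print(listing[:200])
```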
SPARTA: Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis.
Johnson, Benjamin K; Scholz, Matthew B; Teal, Tracy K; Abramovitch, Robert B
2016-02-04
Many tools exist in the analysis of bacterial RNA sequencing (RNA-seq) transcriptional profiling experiments to identify differentially expressed genes between experimental conditions. Generally, the workflow includes quality control of reads, mapping to a reference, counting transcript abundance, and statistical tests for differentially expressed genes. In spite of the numerous tools developed for each component of an RNA-seq analysis workflow, easy-to-use bacterially oriented workflow applications to combine multiple tools and automate the process are lacking. With many tools to choose from for each step, the task of identifying a specific tool, adapting the input/output options to the specific use-case, and integrating the tools into a coherent analysis pipeline is not a trivial endeavor, particularly for microbiologists with limited bioinformatics experience. To make bacterial RNA-seq data analysis more accessible, we developed a Simple Program for Automated reference-based bacterial RNA-seq Transcriptome Analysis (SPARTA). SPARTA is a reference-based bacterial RNA-seq analysis workflow application for single-end Illumina reads. SPARTA is turnkey software that simplifies the process of analyzing RNA-seq data sets, making bacterial RNA-seq analysis a routine process that can be undertaken on a personal computer or in the classroom. The easy-to-install, complete workflow processes whole transcriptome shotgun sequencing data files by trimming reads and removing adapters, mapping reads to a reference, counting gene features, calculating differential gene expression, and, importantly, checking for potential batch effects within the data set. SPARTA outputs quality analysis reports, gene feature counts and differential gene expression tables and scatterplots. SPARTA provides an easy-to-use bacterial RNA-seq transcriptional profiling workflow to identify differentially expressed genes between experimental conditions. This software will enable microbiologists with limited bioinformatics experience to analyze their data and integrate next generation sequencing (NGS) technologies into the classroom. The SPARTA software and tutorial are available at sparta.readthedocs.org.
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact with and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation and biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as a Java software program and is available as open source from http://www.narrator-tool.org.
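Since the abstract mentions mapping Co-dependence models to Gillespie's direct method, the following self-contained sketch shows that stochastic simulation algorithm on a toy reversible reaction A <-> B; the species, rate constants and initial counts are invented for illustration and have nothing to do with Narrator's internals.

```python
import random

def gillespie_direct(state, propensity_fns, stoichiometry, t_end, seed=0):
    """Gillespie's direct method: draw the waiting time to the next reaction
    from an exponential distribution, pick which reaction fires with
    probability proportional to its propensity, then update the state."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, list(state))]
    while t < t_end:
        propensities = [fn(state) for fn in propensity_fns]
        total = sum(propensities)
        if total == 0:
            break
        t += rng.expovariate(total)          # waiting time to the next event
        threshold = rng.uniform(0, total)    # choose which reaction fires
        acc, j = 0.0, 0
        for j, a in enumerate(propensities):
            acc += a
            if threshold < acc:
                break
        state = [x + d for x, d in zip(state, stoichiometry[j])]
        trajectory.append((t, list(state)))
    return trajectory

# Toy reversible system A <-> B with mass-action kinetics (illustrative values).
k_forward, k_backward = 1.0, 0.5
propensities = [lambda s: k_forward * s[0], lambda s: k_backward * s[1]]
stoichiometry = [(-1, +1), (+1, -1)]

if __name__ == "__main__":
    for t, (a, b) in gillespie_direct([100, 0], propensities, stoichiometry, t_end=0.1):
        print(f"t={t:.4f}  A={a}  B={b}")
```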
Betancourt, Theresa S.; Zuilkowski, Stephanie S.; Ravichandran, Arathi; Einhorn, Honora; Arora, Nikita; Bhattacharya Chakravarty, Aruna; Brennan, Robert T.
2015-01-01
Background The child protection community is increasingly focused on developing tools to assess threats to child protection and the basic security needs and rights of children and families living in adverse circumstances. Although tremendous advances have been made to improve measurement of individual child health status or household functioning for use in low-resource settings, little attention has been paid to a more diverse array of settings in which many children in adversity spend time and how context contributes to threats to child protection. The SAFE model posits that insecurity in any of the following fundamental domains threatens security in the others: Safety/freedom from harm; Access to basic physiological needs and healthcare; Family and connection to others; Education and economic security. Site-level tools are needed in order to monitor the conditions that can dramatically undermine or support healthy child growth, development and emotional and behavioral health. From refugee camps and orphanages to schools and housing complexes, site-level threats exist that are not well captured by commonly used measures of child health and well-being or assessments of single households (e.g., SDQ, HOME). Methods The present study presents a methodology and the development of a scale for assessing site-level child protection threats in various settings of adversity. A modified Delphi panel process was enhanced with two stages of expert review in core content areas as well as review by experts in instrument development, and field pilot testing. Results Field testing in two diverse sites in India—a construction site and a railway station—revealed that the resulting SAFE instrument was sensitive to the differences between the sites from the standpoint of core child protection issues. PMID:26540159
Betancourt, Theresa S; Zuilkowski, Stephanie S; Ravichandran, Arathi; Einhorn, Honora; Arora, Nikita; Bhattacharya Chakravarty, Aruna; Brennan, Robert T
2015-01-01
The child protection community is increasingly focused on developing tools to assess threats to child protection and the basic security needs and rights of children and families living in adverse circumstances. Although tremendous advances have been made to improve measurement of individual child health status or household functioning for use in low-resource settings, little attention has been paid to a more diverse array of settings in which many children in adversity spend time and how context contributes to threats to child protection. The SAFE model posits that insecurity in any of the following fundamental domains threatens security in the others: Safety/freedom from harm; Access to basic physiological needs and healthcare; Family and connection to others; Education and economic security. Site-level tools are needed in order to monitor the conditions that can dramatically undermine or support healthy child growth, development and emotional and behavioral health. From refugee camps and orphanages to schools and housing complexes, site-level threats exist that are not well captured by commonly used measures of child health and well-being or assessments of single households (e.g., SDQ, HOME). The present study presents a methodology and the development of a scale for assessing site-level child protection threats in various settings of adversity. A modified Delphi panel process was enhanced with two stages of expert review in core content areas as well as review by experts in instrument development, and field pilot testing. Field testing in two diverse sites in India-a construction site and a railway station-revealed that the resulting SAFE instrument was sensitive to the differences between the sites from the standpoint of core child protection issues.
Muellner, Ulrich J; Vial, Flavie; Wohlfender, Franziska; Hadorn, Daniela; Reist, Martin; Muellner, Petra
2015-01-01
The reporting of outputs from health surveillance systems should be done in a near real-time and interactive manner in order to provide decision makers with powerful means to identify, assess, and manage health hazards as early and efficiently as possible. While this is currently rarely the case in veterinary public health surveillance, reporting tools do exist for the visual exploration and interactive interrogation of health data. In this work, we used tools freely available from the Google Maps and Charts library to develop a web application reporting health-related data derived from slaughterhouse surveillance and from a newly established web-based equine surveillance system in Switzerland. Both sets of tools allowed entry-level usage with little or no programming skills while being flexible enough to cater for more complex scenarios for users with greater programming skills. In particular, interfaces linking statistical software and Google tools provide additional analytical functionality (such as algorithms for the detection of unusually high case occurrences) for inclusion in the reporting process. We show that such powerful approaches could improve timely dissemination and communication of technical information to decision makers and other stakeholders and could foster the early-warning capacity of animal health surveillance systems.
Implementing clinical protocols in oncology: quality gaps and the learning curve phenomenon.
Kedikoglou, Simos; Syrigos, Konstantinos; Skalkidis, Yannis; Ploiarchopoulou, Fani; Dessypris, Nick; Petridou, Eleni
2005-08-01
The quality improvement effort in clinical practice has focused mostly on 'performance quality', i.e. on the development of comprehensive, evidence-based guidelines. This study aimed to assess the 'conformance quality', i.e. the extent to which guidelines once developed are correctly and consistently applied. It also aimed to assess the existence of quality gaps in the treatment of certain patient segments as defined by age or gender and to investigate methods to improve overall conformance quality. A retrospective audit of clinical practice in a well-defined oncology setting was undertaken and the results compared to those obtained from prospectively applying an internally developed clinical protocol in the same setting and using specific tools to increase conformance quality. All indicators showed improvement after the implementation of the protocol that in many cases reached statistical significance, while in the entire cohort advanced age was associated (although not significantly) with sub-optimal delivery of care. A 'learning curve' phenomenon in the implementation of quality initiatives was detected, with all indicators improving substantially in the second part of the prospective study. Clinicians should pay separate attention to the implementation of chosen protocols and employ specific tools to increase conformance quality in patient care.
Tomizawa, Ryoko; Yamano, Mayumi; Osako, Mitue; Hirabayashi, Naotugu; Oshima, Nobuo; Sigeta, Masahiro; Reeves, Scott
2017-12-01
Few scales currently exist to assess the quality of interprofessional teamwork through team members' perceptions of working together in mental health settings. The purpose of this study was to revise and validate an interprofessional scale to assess the quality of teamwork in inpatient psychiatric units and to use it multi-nationally. A literature review was undertaken to identify evaluative teamwork tools and develop an additional 12 items to ensure a broad global focus. Focus group discussions considered adaptation to different care systems using subjective judgements from 11 participants in a pre-test of items. Data quality, construct validity, reproducibility, and internal consistency were investigated in the survey using an international comparative design. Exploratory factor analysis yielded five factors with 21 items: 'patient/community centred care', 'collaborative communication', 'interprofessional conflict', 'role clarification', and 'environment'. High overall internal consistency, reproducibility, adequate face validity, and reasonable construct validity were shown in the USA and Japan. The revised Collaborative Practice Assessment Tool (CPAT) is a valid measure to assess the quality of interprofessional teamwork in psychiatry and identifies the best strategies to improve team performance. Furthermore, the revised scale will generate more rigorous evidence for collaborative practice in psychiatry internationally.
VennDIS: a JavaFX-based Venn and Euler diagram software to generate publication quality figures.
Ignatchenko, Vladimir; Ignatchenko, Alexandr; Sinha, Ankit; Boutros, Paul C; Kislinger, Thomas
2015-04-01
Venn diagrams are graphical representations of the relationships among multiple sets of objects and are often used to illustrate similarities and differences among genomic and proteomic datasets. All currently existing tools for producing Venn diagrams evince one of two traits: they require expertise in specific statistical software packages (such as R), or they lack the flexibility required to produce publication-quality figures. We describe a simple tool that addresses both shortcomings, Venn Diagram Interactive Software (VennDIS), a JavaFX-based solution for producing highly customizable, publication-quality Venn and Euler diagrams of up to five sets. The strengths of VennDIS are its simple graphical user interface and its large array of customization options, including the ability to modify attributes such as font, style and position of the labels, background color, size of the circle/ellipse, and outline color. It is platform independent and provides real-time visualization of figure modifications. The created figures can be saved as XML files for future modification or exported as high-resolution images for direct use in publications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
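For readers who only need a scripted figure rather than an interactive GUI, a rough alternative sketch using the third-party matplotlib-venn package is shown below; the package, its venn3 call, and the example sets are assumptions of this illustration and are unrelated to how VennDIS itself is implemented.

```python
# Sketch of a scripted three-set Venn diagram. Assumes the third-party
# matplotlib-venn package is installed (pip install matplotlib-venn); this is
# an alternative illustration, not VennDIS.
import matplotlib.pyplot as plt
from matplotlib_venn import venn3

proteome_a = {"P01", "P02", "P03", "P04"}
proteome_b = {"P03", "P04", "P05"}
proteome_c = {"P04", "P05", "P06", "P07"}

venn3([proteome_a, proteome_b, proteome_c],
      set_labels=("Dataset A", "Dataset B", "Dataset C"))
plt.title("Overlap of three illustrative protein sets")
plt.savefig("venn_example.png", dpi=300)
```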
GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations
Paila, Umadevi; Chapman, Brad A.; Kirchner, Rory; Quinlan, Aaron R.
2013-01-01
Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics. PMID:23874191
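To convey the flavour of the composed queries such a unified variant-plus-annotation database makes possible, the sketch below filters an in-memory SQLite table by genotype and a clinical annotation; the table layout, column names and rows are invented for this example and are not GEMINI's actual schema or interface.

```python
import sqlite3

# Conceptual sketch of querying variants joined with annotations in a single
# database. The schema and rows are invented for illustration only; they are
# not GEMINI's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE variants (chrom TEXT, pos INTEGER, ref TEXT, alt TEXT,
                       gene TEXT, clinical_sig TEXT, gt_sample1 TEXT);
INSERT INTO variants VALUES
 ('chr1',  101, 'G', 'A', 'GENE1', 'pathogenic', 'het'),
 ('chr2',  202, 'C', 'T', 'GENE2', 'benign',     'hom_ref'),
 ('chr17', 303, 'T', 'C', 'GENE3', 'pathogenic', 'het');
""")

query = """
SELECT chrom, pos, ref, alt, gene
FROM variants
WHERE clinical_sig = 'pathogenic' AND gt_sample1 != 'hom_ref'
"""
for row in conn.execute(query):
    print(row)  # variants annotated pathogenic and carried by sample1
```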
An expert system based software sizing tool, phase 2
NASA Technical Reports Server (NTRS)
Friedlander, David
1990-01-01
A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
ProtaBank: A repository for protein design and engineering data.
Wang, Connie Y; Chang, Paul M; Ary, Marie L; Allen, Benjamin D; Chica, Roberto A; Mayo, Stephen L; Olafson, Barry D
2018-03-25
We present ProtaBank, a repository for storing, querying, analyzing, and sharing protein design and engineering data in an actively maintained and updated database. ProtaBank provides a format to describe and compare all types of protein mutational data, spanning a wide range of properties and techniques. It features a user-friendly web interface and programming layer that streamlines data deposition and allows for batch input and queries. The database schema design incorporates a standard format for reporting protein sequences and experimental data that facilitates comparison of results across different data sets. A suite of analysis and visualization tools are provided to facilitate discovery, to guide future designs, and to benchmark and train new predictive tools and algorithms. ProtaBank will provide a valuable resource to the protein engineering community by storing and safeguarding newly generated data, allowing for fast searching and identification of relevant data from the existing literature, and exploring correlations between disparate data sets. ProtaBank invites researchers to contribute data to the database to make it accessible for search and analysis. ProtaBank is available at https://protabank.org. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
EuroFlow standardization of flow cytometer instrument settings and immunophenotyping protocols
Kalina, T; Flores-Montero, J; van der Velden, V H J; Martin-Ayuso, M; Böttcher, S; Ritgen, M; Almeida, J; Lhermitte, L; Asnafi, V; Mendonça, A; de Tute, R; Cullen, M; Sedek, L; Vidriales, M B; Pérez, J J; te Marvelde, J G; Mejstrikova, E; Hrusak, O; Szczepański, T; van Dongen, J J M; Orfao, A
2012-01-01
The EU-supported EuroFlow Consortium aimed at innovation and standardization of immunophenotyping for diagnosis and classification of hematological malignancies by introducing 8-color flow cytometry with fully standardized laboratory procedures and antibody panels in order to achieve maximally comparable results among different laboratories. This required the selection of optimal combinations of compatible fluorochromes and the design and evaluation of adequate standard operating procedures (SOPs) for instrument setup, fluorescence compensation and sample preparation. Additionally, we developed software tools for the evaluation of individual antibody reagents and antibody panels. Each section describes what has been evaluated experimentally versus adopted based on existing data and experience. Multicentric evaluation demonstrated high levels of reproducibility based on strict implementation of the EuroFlow SOPs and antibody panels. Overall, the 6 years of extensive collaborative experiments and the analysis of hundreds of cell samples of patients and healthy controls in the EuroFlow centers have provided for the first time laboratory protocols and software tools for fully standardized 8-color flow cytometric immunophenotyping of normal and malignant leukocytes in bone marrow and blood; this has yielded highly comparable data sets, which can be integrated in a single database. PMID:22948490
The Gene Set Builder: collation, curation, and distribution of sets of genes
Yusuf, Dimas; Lim, Jonathan S; Wasserman, Wyeth W
2005-01-01
Background In bioinformatics and genomics, there are many applications designed to investigate the common properties for a set of genes. Often, these multi-gene analysis tools attempt to reveal sequential, functional, and expressional ties. However, while tremendous effort has been invested in developing tools that can analyze a set of genes, minimal effort has been invested in developing tools that can help researchers compile, store, and annotate gene sets in the first place. As a result, the process of making or accessing a set often involves tedious and time consuming steps such as finding identifiers for each individual gene. These steps are often repeated extensively to shift from one identifier type to another; or to recreate a published set. In this paper, we present a simple online tool which – with the help of the gene catalogs Ensembl and GeneLynx – can help researchers build and annotate sets of genes quickly and easily. Description The Gene Set Builder is a database-driven, web-based tool designed to help researchers compile, store, export, and share sets of genes. This application supports the 17 eukaryotic genomes found in version 32 of the Ensembl database, which includes species from yeast to human. User-created information such as sets and customized annotations are stored to facilitate easy access. Gene sets stored in the system can be "exported" in a variety of output formats – as lists of identifiers, in tables, or as sequences. In addition, gene sets can be "shared" with specific users to facilitate collaborations or fully released to provide access to published results. The application also features a Perl API (Application Programming Interface) for direct connectivity to custom analysis tools. A downloadable Quick Reference guide and an online tutorial are available to help new users learn its functionalities. Conclusion The Gene Set Builder is an Ensembl-facilitated online tool designed to help researchers compile and manage sets of genes in a user-friendly environment. The application can be accessed via . PMID:16371163
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franklin, Lyndsey; Pirrung, Megan A.; Blaha, Leslie M.
Cyber network analysts follow complex processes in their investigations of potential threats to their network. Much research is dedicated to providing automated tool support in the effort to make their tasks more efficient, accurate, and timely. This tool support comes in a variety of implementations from machine learning algorithms that monitor streams of data to visual analytic environments for exploring rich and noisy data sets. Cyber analysts, however, often speak of a need for tools which help them merge the data they already have and help them establish appropriate baselines against which to compare potential anomalies. Furthermore, existing threat models that cyber analysts regularly use to structure their investigation are not often leveraged in support tools. We report on our work with cyber analysts to understand their analytic process and how one such model, the MITRE ATT&CK Matrix [32], is used to structure their analytic thinking. We present our efforts to map specific data needed by analysts into the threat model to inform our eventual visualization designs. We examine data mapping for gaps where the threat model is under-supported by either data or tools. We discuss these gaps as potential design spaces for future research efforts. We also discuss the design of a prototype tool that combines machine-learning and visualization components to support cyber analysts working with this threat model.
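A minimal sketch of the kind of data-to-threat-model gap mapping described above is given below; the technique names follow ATT&CK's public naming scheme, but the required-data mapping and the available-data inventory are invented for illustration.

```python
# Sketch of mapping available data sources onto ATT&CK techniques to surface
# coverage gaps. Technique IDs follow ATT&CK's public naming convention; the
# required-data mapping and inventory below are illustrative only.
required_data = {
    "T1078 Valid Accounts": {"authentication logs"},
    "T1046 Network Service Discovery": {"netflow", "ids alerts"},
    "T1059 Command and Scripting Interpreter": {"process creation logs"},
}

available_data = {"authentication logs", "netflow"}

def coverage_gaps(required, available):
    """Return techniques whose required data sources are not fully available."""
    gaps = {}
    for technique, needed in required.items():
        missing = needed - available
        if missing:
            gaps[technique] = sorted(missing)
    return gaps

if __name__ == "__main__":
    for technique, missing in coverage_gaps(required_data, available_data).items():
        print(f"{technique}: missing {', '.join(missing)}")
```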
Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds
Wellock, Cameron D.; Reeke, George N.
2012-01-01
The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing. PMID:22701474
Heinrich, Henriette; Misselwitz, Benjamin
2018-04-01
Functional anorectal disorders such as faecal incontinence (FI), functional anorectal pain, and functional defecation disorders (FDD) are highly prevalent and represent a high socioeconomic burden. Several tests of anorectal function exist in this setting; however, high-resolution anorectal manometry (HR-ARM) is a new tool that depicts pressure all along the anal canal and can assess rectoanal coordination. HR-ARM is used in the diagnosis of FI and especially FDD although data in health is still sparse, and pressure phenomena seen during simulated defecation, such as dyssynergia, are highly prevalent in health.
NASA Astrophysics Data System (ADS)
Neuville, R.; Pouliot, J.; Poux, F.; Hallot, P.; De Rudder, L.; Billen, R.
2017-10-01
This paper deals with the establishment of a comprehensive methodological framework that defines 3D visualisation rules and its application in a decision support tool. Whilst the use of 3D models grows in many application fields, their visualisation remains challenging from the point of view of the mapping and rendering aspects to be applied to suitably support the decision-making process. Indeed, a great number of 3D visualisation techniques exist but, as far as we know, a decision support tool that facilitates the production of an efficient 3D visualisation is still missing. This is why a comprehensive methodological framework is proposed in order to build decision tables for specific data, tasks and contexts. Based on the second-order logic formalism, we define a set of functions and propositions among and between two collections of entities: on one hand, static retinal variables (hue, size, shape…) and 3D environment parameters (directional lighting, shadow, haze…), and on the other hand, their effect(s) regarding specific visual tasks. This enables the definition of 3D visualisation rules according to four categories: consequence, compatibility, potential incompatibility and incompatibility. In this paper, the application of the methodological framework is demonstrated for an urban visualisation at high density considering a specific set of entities. On the basis of our analysis and the results of many studies conducted in 3D semiotics, which refers to the study of symbols and how they relay information, the truth values of the propositions are determined. 3D visualisation rules are then extracted for the considered context and set of entities and are presented in a decision table with a colour coding. Finally, the decision table is implemented in a plugin developed with three.js, a cross-browser JavaScript library. The plugin consists of a sidebar and warning windows that help the designer in the use of a set of static retinal variables and 3D environment parameters.
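As a toy illustration of how such a decision table can be consulted programmatically, the sketch below maps pairs of a static retinal variable and a 3D environment parameter to one of the four rule categories named above; the individual entries are invented for illustration and do not reproduce the paper's actual table.

```python
# Toy sketch of a visualisation-rule decision table: each (static retinal
# variable, 3D environment parameter) pair maps to one of the four rule
# categories. The specific entries are invented for illustration.
RULES = {
    ("hue", "directional lighting"): "potential incompatibility",
    ("size", "haze"): "incompatibility",
    ("shape", "shadow"): "compatibility",
    ("hue", "shadow"): "consequence",
}

def check_combination(retinal_variable, environment_parameter):
    return RULES.get((retinal_variable, environment_parameter), "not evaluated")

if __name__ == "__main__":
    print(check_combination("size", "haze"))   # -> incompatibility
    print(check_combination("shape", "haze"))  # -> not evaluated
```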
Informed consent comprehension in African research settings.
Afolabi, Muhammed O; Okebe, Joseph U; McGrath, Nuala; Larson, Heidi J; Bojang, Kalifa; Chandramohan, Daniel
2014-06-01
Previous reviews on participants' comprehension of informed consent information have focused on developed countries. Experience has shown that ethical standards developed on Western values may not be appropriate for African settings where research concepts are unfamiliar. We undertook this review to describe how informed consent comprehension is defined and measured in African research settings. We conducted a comprehensive search involving five electronic databases: Medline, Embase, Global Health, EthxWeb and Bioethics Literature Database (BELIT). We also examined African Index Medicus and Google Scholar for relevant publications on informed consent comprehension in clinical studies conducted in sub-Saharan Africa. 29 studies satisfied the inclusion criteria; meta-analysis was possible in 21 studies. We further conducted a direct comparison of participants' comprehension on domains of informed consent in all eligible studies. Comprehension of key concepts of informed consent varies considerably from country to country and depends on the nature and complexity of the study. Meta-analysis showed that 47% of a total of 1633 participants across four studies demonstrated comprehension about randomisation (95% CI 13.9-80.9%). Similarly, 48% of 3946 participants in six studies had understanding about placebo (95% CI 19.0-77.5%), while only 30% of 753 participants in five studies understood the concept of therapeutic misconception (95% CI 4.6-66.7%). Measurement tools for informed consent comprehension were developed with little or no validation. Assessment of comprehension was carried out at variable times after disclosure of study information. No uniform definition of informed consent comprehension exists to form the basis for development of an appropriate tool to measure comprehension in African participants. Comprehension of key concepts of informed consent is poor among study participants across Africa. There is a vital need to develop a uniform definition for informed consent comprehension in low literacy research settings in Africa. This will be an essential step towards developing appropriate tools that can adequately measure informed consent comprehension. This may consequently suggest adequate measures to improve the informed consent procedure. © 2014 John Wiley & Sons Ltd.
Sma3s: a three-step modular annotator for large sequence datasets.
Muñoz-Mérida, Antonio; Viguera, Enrique; Claros, M Gonzalo; Trelles, Oswaldo; Pérez-Pulido, Antonio J
2014-08-01
Automatic sequence annotation is an essential component of modern 'omics' studies, which aim to extract information from large collections of sequence data. Most existing tools use sequence homology to establish evolutionary relationships and assign putative functions to sequences. However, it can be difficult to define a similarity threshold that achieves sufficient coverage without sacrificing annotation quality. Defining the correct configuration is critical and can be challenging for non-specialist users. Thus, the development of robust automatic annotation techniques that generate high-quality annotations without needing expert knowledge would be very valuable for the research community. We present Sma3s, a tool for automatically annotating very large collections of biological sequences from any kind of gene library or genome. Sma3s is composed of three modules that progressively annotate query sequences using either: (i) very similar homologues, (ii) orthologous sequences or (iii) terms enriched in groups of homologous sequences. We trained the system using several random sets of known sequences, demonstrating average sensitivity and specificity values of ~85%. In conclusion, Sma3s is a versatile tool for high-throughput annotation of a wide variety of sequence datasets that outperforms the accuracy of other well-established annotation algorithms, and it can enrich existing database annotations and uncover previously hidden features. Importantly, Sma3s has already been used in the functional annotation of two published transcriptomes. © The Author 2014. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.
Harris, Claire; Green, Sally; Elshaug, Adam G
2017-09-08
This is the tenth in a series of papers reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. After more than a decade of research, there is little published evidence of active and successful disinvestment. The paucity of frameworks, methods and tools is reported to be a factor in the lack of success. However there are clear and consistent messages in the literature that can be used to inform development of a framework for operationalising disinvestment. This paper, along with the conceptual review of disinvestment in Paper 9 of this series, aims to integrate the findings of the SHARE Program with the existing disinvestment literature to address the lack of information regarding systematic organisation-wide approaches to disinvestment at the local health service level. A framework for disinvestment in a local healthcare setting is proposed. Definitions for essential terms and key concepts underpinning the framework have been made explicit to address the lack of consistent terminology. Given the negative connotations of the word 'disinvestment' and the problems inherent in considering disinvestment in isolation, the basis for the proposed framework is 'resource allocation' to address the spectrum of decision-making from investment to disinvestment. The focus is positive: optimising healthcare, improving health outcomes, using resources effectively. The framework is based on three components: a program for decision-making, projects to implement decisions and evaluate outcomes, and research to understand and improve the program and project activities. The program consists of principles for decision-making and settings that provide opportunities to introduce systematic prompts and triggers to initiate disinvestment. The projects follow the steps in the disinvestment process. Potential methods and tools are presented, however the framework does not stipulate project design or conduct; allowing application of any theories, methods or tools at each step. Barriers are discussed and examples illustrating constituent elements are provided. The framework can be employed at network, institutional, departmental, ward or committee level. It is proposed as an organisation-wide application, embedded within existing systems and processes, which can be responsive to needs and priorities at the level of implementation. It can be used in policy, management or clinical contexts.
Bat detective-Deep learning tools for bat acoustic signal detection.
Mac Aodha, Oisin; Gibb, Rory; Barlow, Kate E; Browning, Ella; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R; Newson, Stuart E; Pandourski, Ivan; Parsons, Stuart; Russ, Jon; Szodoray-Paradi, Abigel; Szodoray-Paradi, Farkas; Tilova, Elena; Girolami, Mark; Brostow, Gabriel; Jones, Kate E
2018-03-01
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio which is particularly problematic in noisy recordings. We developed a convolutional neural network based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio.
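To give a rough sense of what a convolutional detector over spectrogram windows involves, the sketch below scores fixed-size spectrogram patches as call/no-call with a small PyTorch network; the layer sizes, the 64x64 patch shape and the random input are assumptions of this illustration, not the published Bat Detective architecture.

```python
# Minimal sketch of a CNN that scores spectrogram patches for the presence of a
# search-phase call. This is NOT the published Bat Detective network; the layer
# sizes and 64x64 patch shape are assumptions for illustration.
import torch
import torch.nn as nn

class CallDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit; sigmoid gives the call probability
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = CallDetector()
    patches = torch.randn(8, 1, 64, 64)          # batch of spectrogram patches
    call_probs = torch.sigmoid(model(patches))   # detection scores in [0, 1]
    print(call_probs.squeeze(1))
```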
Bat detective—Deep learning tools for bat acoustic signal detection
Barlow, Kate E.; Firman, Michael; Freeman, Robin; Harder, Briana; Kinsey, Libby; Mead, Gary R.; Newson, Stuart E.; Pandourski, Ivan; Russ, Jon; Szodoray-Paradi, Abigel; Tilova, Elena; Girolami, Mark; Jones, Kate E.
2018-01-01
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends there is a critical need for accurate, reliable, and open source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio which is particularly problematic in noisy recordings. We developed a convolutional neural network based open-source pipeline for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms were trained on full-spectrum ultrasonic audio collected along road-transects across Europe and labelled by citizen scientists from www.batdetective.org. When compared to other existing algorithms and commercial systems, we show significantly higher detection performance of search-phase echolocation calls with our test sets. As an example application, we ran our detection pipeline on bat monitoring data collected over five years from Jersey (UK), and compared results to a widely-used commercial system. Our detection pipeline can be used for the automatic detection and monitoring of bat populations, and further facilitates their use as indicator species on a large scale. Our proposed pipeline makes only a small number of bat specific design decisions, and with appropriate training data it could be applied to detecting other species in audio. A crucial novelty of our work is showing that with careful, non-trivial, design and implementation considerations, state-of-the-art deep learning methods can be used for accurate and efficient monitoring in audio. PMID:29518076
Chae, Minho; Danko, Charles G; Kraus, W Lee
2015-07-16
Global run-on coupled with deep sequencing (GRO-seq) provides extensive information on the location and function of coding and non-coding transcripts, including primary microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and enhancer RNAs (eRNAs), as well as yet undiscovered classes of transcripts. However, few computational tools tailored toward this new type of sequencing data are available, limiting the applicability of GRO-seq data for identifying novel transcription units. Here, we present groHMM, a computational tool in R, which defines the boundaries of transcription units de novo using a two-state hidden Markov model (HMM). A systematic comparison of the performance between groHMM and two existing peak-calling methods tuned to identify broad regions (SICER and HOMER) favorably supports our approach on existing GRO-seq data from MCF-7 breast cancer cells. To demonstrate the broader utility of our approach, we have used groHMM to annotate a diverse array of transcription units (i.e., primary transcripts) from four GRO-seq data sets derived from cells representing a variety of different human tissue types, including non-transformed cells (cardiomyocytes and lung fibroblasts) and transformed cells (LNCaP and MCF-7 cancer cells), as well as non-mammalian cells (from flies and worms). As an example of the utility of groHMM and its application to questions about the transcriptome, we show how groHMM can be used to analyze cell type-specific enhancers as defined by newly annotated enhancer transcripts. Our results show that groHMM can reveal new insights into cell type-specific transcription by identifying novel transcription units, and serve as a complete and useful tool for evaluating functional genomic elements in cells.
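To illustrate the core idea of segmenting a genome into transcribed and untranscribed states with a two-state HMM, the generic sketch below runs Viterbi decoding over binned read counts with Poisson emissions; the transition probabilities and emission means are invented, and groHMM's actual model and parameter fitting differ.

```python
import math

# Generic two-state HMM (background vs. transcribed) decoded with the Viterbi
# algorithm over binned read counts. The Poisson emission means and transition
# probabilities are illustrative; groHMM's actual model and parameters differ.
STATES = ("background", "transcribed")
LOG_TRANS = {  # log transition probabilities ("sticky" states)
    ("background", "background"): math.log(0.99),
    ("background", "transcribed"): math.log(0.01),
    ("transcribed", "transcribed"): math.log(0.95),
    ("transcribed", "background"): math.log(0.05),
}
EMISSION_MEAN = {"background": 0.5, "transcribed": 5.0}  # Poisson rates per bin

def log_poisson(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def viterbi(counts):
    """Return the most likely state path for a sequence of bin counts."""
    v = {s: math.log(0.5) + log_poisson(counts[0], EMISSION_MEAN[s]) for s in STATES}
    backpointers = []
    for c in counts[1:]:
        new_v, ptr = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: v[p] + LOG_TRANS[(p, s)])
            new_v[s] = v[best_prev] + LOG_TRANS[(best_prev, s)] + log_poisson(c, EMISSION_MEAN[s])
            ptr[s] = best_prev
        v, backpointers = new_v, backpointers + [ptr]
    state = max(STATES, key=v.get)
    path = [state]
    for ptr in reversed(backpointers):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))

if __name__ == "__main__":
    bins = [0, 1, 0, 4, 6, 5, 7, 0, 0, 1]
    print(viterbi(bins))  # a contiguous "transcribed" block should cover the high bins
```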
Software packager user's guide
NASA Technical Reports Server (NTRS)
Callahan, John R.
1995-01-01
Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.
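A toy sketch of the general idea, generating a makefile from a small package description, is shown below; the dictionary format and the generated rules are invented for this illustration and are far simpler than the packager's actual specification language.

```python
# Toy illustration of turning a simple package description into a makefile.
# The dictionary format and generated rules are invented for this sketch and
# are much simpler than the packager's actual specification language.
spec = {
    "program": "analyzer",
    "components": {
        "main.o": ["main.c"],
        "io.o": ["io.c", "io.h"],
    },
    "link_flags": "-lm",
}

def generate_makefile(spec):
    objects = " ".join(spec["components"])
    lines = [
        f"{spec['program']}: {objects}",
        f"\tcc -o {spec['program']} {objects} {spec['link_flags']}",
        "",
    ]
    for obj, sources in spec["components"].items():
        lines.append(f"{obj}: {' '.join(sources)}")
        lines.append(f"\tcc -c {sources[0]}")
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_makefile(spec))  # write this to a file named Makefile to use it
```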
Dixon, Philippe C; Loh, Jonathan J; Michaud-Paquette, Yannick; Pearsall, David J
2017-03-01
It is common for biomechanics data sets to contain numerous dependent variables recorded over time, for many subjects, groups, and/or conditions. These data often require standard sorting, processing, and analysis operations to be performed in order to answer research questions. Visualization of these data is also crucial. This manuscript presents biomechZoo, an open-source toolbox that provides tools and graphical user interfaces to help users achieve these goals. The aims of this manuscript are to (1) introduce the main features of the toolbox, including a virtual three-dimensional environment to animate motion data (Director), a data plotting suite (Ensembler), and functions for the computation of three-dimensional lower-limb joint angles, moments, and power and (2) compare these computations to those of an existing validated system. To these ends, the steps required to process and analyze a sample data set via the toolbox are outlined. The data set comprises three-dimensional marker, ground reaction force (GRF), joint kinematic, and joint kinetic data of subjects performing straight walking and 90° turning manoeuvres. Joint kinematics and kinetics processed within the toolbox were found to be similar to outputs from a commercial system. The biomechZoo toolbox represents the work of several years and multiple contributors to provide a flexible platform to examine time-series data sets typical in the movement sciences. The toolbox has previously been used to process and analyse walking, running, and ice hockey data sets, and can integrate existing routines, such as the KineMat toolbox, for additional analyses. The toolbox can help researchers and clinicians new to programming or biomechanics to process and analyze their data through a customizable workflow, while advanced users are encouraged to contribute additional functionality to the project. Students may benefit from using biomechZoo as a learning and research tool. It is hoped that the toolbox can play a role in advancing research in the movement sciences. The biomechZoo m-files, sample data, and help repositories are available online (http://www.biomechzoo.com) under the Apache 2.0 License. The toolbox is supported for Matlab (r2014b or newer, The Mathworks Inc., Natick, USA) for Windows (Microsoft Corp., Redmond, USA) and Mac OS (Apple Inc., Cupertino, USA). Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
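As a much-simplified taste of the kind of computation involved, the sketch below estimates a planar knee-flexion-like angle from three marker positions; biomechZoo computes full three-dimensional joint angles from segment coordinate systems, so this is only an illustrative approximation with invented marker coordinates.

```python
import numpy as np

# Much-simplified illustration: estimate a knee flexion-like angle as the angle
# between thigh and shank vectors defined by three markers. biomechZoo derives
# full 3D joint angles from segment coordinate systems; this is only a sketch.
def marker_angle_deg(hip, knee, ankle):
    thigh = np.asarray(hip) - np.asarray(knee)
    shank = np.asarray(ankle) - np.asarray(knee)
    cosine = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    included = np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))
    return 180.0 - included   # 0 deg = fully extended, larger = more flexion

if __name__ == "__main__":
    hip, knee, ankle = (0.0, 0.0, 1.0), (0.0, 0.05, 0.5), (0.0, 0.0, 0.0)  # invented positions (m)
    print(f"knee flexion ~ {marker_angle_deg(hip, knee, ankle):.1f} deg")
```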
TIDE TOOL: Open-Source Sea-Level Monitoring Software for Tsunami Warning Systems
NASA Astrophysics Data System (ADS)
Weinstein, S. A.; Kong, L. S.; Becker, N. C.; Wang, D.
2012-12-01
A tsunami warning center (TWC) typically decides to issue a tsunami warning bulletin when initial estimates of earthquake source parameters suggest it may be capable of generating a tsunami. A TWC, however, relies on sea-level data to provide prima facie evidence for the existence or non-existence of destructive tsunami waves and to constrain tsunami wave height forecast models. In the aftermath of the 2004 Sumatra disaster, the International Tsunami Information Center asked the Pacific Tsunami Warning Center (PTWC) to develop a platform-independent, easy-to-use software package to give nascent TWCs the ability to process WMO Global Telecommunications System (GTS) sea-level messages and to analyze the resulting sea-level curves (marigrams). In response, PTWC developed TIDE TOOL, which has since steadily grown in sophistication to become PTWC's operational sea-level processing system. TIDE TOOL has two main parts: a decoder that reads GTS sea-level message logs, and a graphical user interface (GUI) written in the open-source, platform-independent graphical toolkit scripting language Tcl/Tk. This GUI consists of dynamic map-based clients that allow the user to select and analyze a single station or groups of stations by displaying their marigrams in strip-chart or screen-tiled forms. TIDE TOOL also includes detail maps of each station to show each station's geographical context and reverse tsunami travel time contours to each station. TIDE TOOL can also be coupled to the GEOWARE™ TTT program to plot tsunami travel times and to indicate the expected tsunami arrival time on the marigrams. Because sea-level messages are structured in a rich variety of formats, TIDE TOOL includes a metadata file, COMP_META, that contains all of the information needed by TIDE TOOL to decode sea-level data as well as basic information such as the geographical coordinates of each station. TIDE TOOL can therefore continuously decode these sea-level messages in real time and display the time-series data in the GUI as well. This GUI also includes mouse-clickable functions such as zooming or expanding the time-series display, measuring tsunami signal characteristics (arrival time, wave period and amplitude, etc.), and removing the tide signal from the time-series data. De-tiding of the time series is necessary to obtain accurate measurements of tsunami wave parameters and to maintain accurate historical tsunami databases. With TIDE TOOL, de-tiding is accomplished with a set of tide harmonic coefficients routinely computed and updated at PTWC for many of the stations in PTWC's inventory (~570). PTWC also uses the decoded time-series files (the previous 3-5 days' worth) to compute on-the-fly tide coefficients. The latter is useful in cases where a station is new and a long-term stable set of tide coefficients is not available or cannot be easily obtained due to various non-astronomical effects. The international tsunami warning system is coordinated globally by the UNESCO IOC, and a number of countries in the Pacific, Indian Ocean, and Caribbean regions depend on TIDE TOOL to monitor tsunamis in real time.
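To give a flavour of the de-tiding step described above, the sketch below performs a generic least-squares harmonic fit at a few assumed tidal periods and subtracts it from a synthetic water-level series; this illustrates the principle only and is not TIDE TOOL's implementation or PTWC's coefficient set.

```python
import numpy as np

# Generic least-squares harmonic de-tiding sketch: fit sin/cos terms at a few
# assumed tidal periods (M2, S2, K1, O1, in hours) and subtract the fitted tide
# from the observed series. Not TIDE TOOL's implementation.
TIDAL_PERIODS_H = [12.42, 12.00, 23.93, 25.82]

def detide(t_hours, water_level):
    """Return (tide_fit, residual) for observations sampled at t_hours."""
    columns = [np.ones_like(t_hours)]
    for period in TIDAL_PERIODS_H:
        omega = 2 * np.pi / period
        columns.append(np.sin(omega * t_hours))
        columns.append(np.cos(omega * t_hours))
    design = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(design, water_level, rcond=None)
    tide = design @ coeffs
    return tide, water_level - tide

if __name__ == "__main__":
    t = np.arange(0, 72, 1 / 60)                      # 3 days at 1-minute sampling
    synthetic = 0.8 * np.sin(2 * np.pi * t / 12.42) + 0.05 * np.random.randn(t.size)
    tide, residual = detide(t, synthetic)
    print("residual std (m):", residual.std())        # tsunami-like anomalies would stand out in the residual
```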
Geoillustrator - fast sketching of geological illustrations and animations
NASA Astrophysics Data System (ADS)
Patel, Daniel; Langeland, Tor; Solteszova, Veronika
2014-05-01
We present our research results in the Geoillustrator project. The project has been going for four years and is ending in March. It was aimed at developing a rapid sketching tool for generating geological illustrations and animations for understanding the processes that have led to a current subsurface configuration. The sketching tool facilitates effective dissemination of ideas, e.g. through generation of interactive geo-scientific illustrations for interdisciplinary communication and communication to decision makers, media and lay persons. This can improve work processes in early phases of oil and gas exploration where critical decisions have to be taken based on limited information. It is a challenge for involved specialists in early exploration phases to externalize their ideas, and effectively achieve consensus in multidisciplinary working groups. In these work processes, a tool for rapid sketching of geology would be very useful for expressing geological hypotheses and creating and comparing different evolution scenarios. Often, decisions are influenced by factors that are not relevant; e.g., the geologists who produce the most polished illustrations of their hypothesis have a higher probability of getting their theories through to decision makers because their ideas are more clearly communicated. This results in a competitive advantage for geologists who are skilled in creating illustrations. Having a tool that would lift the ability of all geologists to express their ideas to an equal level would result in more alternatives and a better foundation for decision making. Digital sketching will also allow capturing otherwise lost material which can constitute a large amount of mental work and ideas. The results of sketching are currently scrapped as paper or erased from the blackboard or exist only as rough personal sketches. By using a digital sketching tool, the sketches can be exported to a form usable in modelling tools used in later phases of exploration. Currently, no digital tool exists that supports the above-mentioned requirements. However, in the Geoillustrator project, relevant visualization and sketching methods have been researched, and prototypes have been developed which demonstrate a set of the mentioned functionalities. The project's published results, which we will present, can be found on our website http://www.cmr.no/cmr_computing/index.cfm?id=313109
Mobile phone tools for field-based health care workers in low-income countries.
Derenzi, Brian; Borriello, Gaetano; Jackson, Jonathan; Kumar, Vikram S; Parikh, Tapan S; Virk, Pushwaz; Lesh, Neal
2011-01-01
In low-income regions, mobile phone-based tools can improve the scope and efficiency of field health workers. They can also address challenges in monitoring and supervising a large number of geographically distributed health workers. Several tools have been built and deployed in the field, but little comparison has been done to help understand their effectiveness. This is largely because no framework exists in which to analyze the different ways in which the tools help strengthen existing health systems. In this article we highlight 6 key functions that health systems currently perform where mobile tools can provide the most benefit. Using these 6 health system functions, we compare existing applications for community health workers, an important class of field health workers who use these technologies, and discuss common challenges and lessons learned about deploying mobile tools. © 2011 Mount Sinai School of Medicine.
NASA Astrophysics Data System (ADS)
Denchik, N.; Pezard, P. A.; Ragnar, A.; Jean-Luc, D.; Jan, H.
2014-12-01
Drilling an entire section of the oceanic crust and through the Moho has been a goal of the scientific community for more than half of a century. On the basis of ODP and IODP experience and data, this will require instruments and strategies working at temperatures far above 200°C (reached, for example, at the bottom of DSDP/ODP Hole 504B), and possibly beyond 300°C. Concerning logging and monitoring instruments, progress was made over the past ten years in the context of the HiTI ("High Temperature Instruments") project funded by the European Community for deep drilling in hot Icelandic geothermal holes where supercritical conditions and a highly corrosive environment are expected at depth (with temperatures above 374 °C and pressures exceeding 22 MPa). For example, a slickline tool (memory tool) tolerating up to 400°C and wireline tools up to 300°C were developed and tested in Icelandic high-temperature geothermal fields. The temperature limitation of logging tools was defined to comply with the present limitation in wireline cables (320°C). As part of this new set of downhole tools, temperature, pressure, fluid flow and casing collar location might be measured up to 400°C from a single multisensor tool. A natural gamma radiation spectrum tool, a borehole wall ultrasonic imaging tool, and fiber optic cables (using distributed temperature sensing methods) were also developed for wireline deployment up to 300°C and tested in the field. A wireline, dual laterolog electrical resistivity tool was also developed but could not be field tested as part of HiTI. This new set of tools constitutes a basis for the deep exploration of the oceanic crust in the future. In addition, new strategies including the real-time integration of drilling parameters with modeling of the thermo-mechanical status of the borehole could be developed, using time-lapse logging of temperature (for heat flow determination) and borehole wall images (for hole stability and in-situ stress determination) as boundary conditions for the models. In all, and with limited integration of existing tools, the deployment of high-temperature downhole tools could contribute greatly to the success of the long-awaited Mohole project.
IsoSCM: improved and alternative 3′ UTR annotation using multiple change-point inference
Shenker, Sol; Miura, Pedro; Sanfilippo, Piero
2015-01-01
Major applications of RNA-seq data include studies of how the transcriptome is modulated at the levels of gene expression and RNA processing, and how these events are related to cellular identity, environmental condition, and/or disease status. While many excellent tools have been developed to analyze RNA-seq data, these generally have limited efficacy for annotating 3′ UTRs. Existing assembly strategies often fragment long 3′ UTRs, and importantly, none of the algorithms in popular use can apportion data into tandem 3′ UTR isoforms, which are frequently generated by alternative cleavage and polyadenylation (APA). Consequently, it is often not possible to identify patterns of differential APA using existing assembly tools. To address these limitations, we present a new method for transcript assembly, Isoform Structural Change Model (IsoSCM) that incorporates change-point analysis to improve the 3′ UTR annotation process. Through evaluation on simulated and genuine data sets, we demonstrate that IsoSCM annotates 3′ termini with higher sensitivity and specificity than can be achieved with existing methods. We highlight the utility of IsoSCM by demonstrating its ability to recover known patterns of tissue-regulated APA. IsoSCM will facilitate future efforts for 3′ UTR annotation and genome-wide studies of the breadth, regulation, and roles of APA leveraging RNA-seq data. The IsoSCM software and source code are available from our website https://github.com/shenkers/isoscm. PMID:25406361
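A stripped-down illustration of change-point inference on a 3′ UTR coverage profile is sketched below: it searches for the single split that minimizes within-segment squared error. IsoSCM's actual model infers multiple change points with appropriate count statistics, so this is only a conceptual sketch on simulated coverage.

```python
import numpy as np

# Stripped-down single change-point search on a coverage profile: choose the
# split that minimizes the total within-segment squared error. IsoSCM handles
# multiple change points with count-based statistics; this is only a sketch.
def best_change_point(coverage):
    coverage = np.asarray(coverage, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(coverage)):
        left, right = coverage[:k], coverage[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

if __name__ == "__main__":
    # Simulated 3' UTR coverage: proximal isoform covered ~30x, distal ~8x.
    profile = [30] * 50 + [8] * 50
    print("change point at position:", best_change_point(profile))  # -> 50
```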
Severson, Carl A; Pendharkar, Sachin R; Ronksley, Paul E; Tsai, Willis H
2015-01-01
To assess the ability of electronic health data and existing screening tools to identify clinically significant obstructive sleep apnea (OSA), as defined by symptomatic or severe OSA. The present retrospective cohort study of 1041 patients referred for sleep diagnostic testing was undertaken at a tertiary sleep centre in Calgary, Alberta. A diagnosis of clinically significant OSA or an alternative sleep diagnosis was assigned to each patient through blinded independent chart review by two sleep physicians. Predictive variables were identified from online questionnaire data, and diagnostic algorithms were developed. The performance of electronically derived algorithms for identifying patients with clinically significant OSA was determined. Diagnostic performance of these algorithms was compared with versions of the STOP-Bang questionnaire and adjusted neck circumference score (ANC) derived from electronic data. Electronic questionnaire data were highly sensitive (>95%) at identifying clinically significant OSA, but not specific. Sleep diagnostic testing-determined respiratory disturbance index was very specific (specificity ≥95%) for clinically relevant disease, but not sensitive (<35%). Derived algorithms had similar accuracy to the STOP-Bang or ANC, but required fewer questions and calculations. These data suggest that a two-step process using a small number of clinical variables (maximizing sensitivity) and objective diagnostic testing (maximizing specificity) is required to identify clinically significant OSA. When used in an online setting, simple algorithms can identify clinically relevant OSA with similar performance to existing decision rules such as the STOP-Bang or ANC.
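For context, a sketch of scoring the widely used STOP-Bang items from questionnaire-style fields is shown below; the thresholds follow the commonly cited published criteria (a score of 3 or more suggesting elevated risk), while the field names and example patient are assumptions of this illustration rather than the study's actual electronic data or derived algorithms.

```python
# Sketch of STOP-Bang scoring from questionnaire-style fields. Thresholds
# follow the commonly cited criteria (score >= 3 suggests elevated OSA risk);
# the field names below are assumptions of this example, not the study's schema.
def stop_bang_score(p):
    items = [
        p["snores_loudly"],                  # S: Snoring
        p["daytime_tiredness"],              # T: Tiredness
        p["observed_apnea"],                 # O: Observed breathing pauses
        p["hypertension"],                   # P: high blood Pressure
        p["bmi"] > 35,                       # B: BMI > 35 kg/m^2
        p["age"] > 50,                       # A: Age > 50 years
        p["neck_circumference_cm"] > 40,     # N: Neck circumference > 40 cm
        p["sex"] == "male",                  # G: Gender
    ]
    return sum(bool(item) for item in items)

if __name__ == "__main__":
    patient = {"snores_loudly": True, "daytime_tiredness": True,
               "observed_apnea": False, "hypertension": True,
               "bmi": 31.0, "age": 56, "neck_circumference_cm": 42,
               "sex": "male"}
    score = stop_bang_score(patient)
    print(score, "-> higher risk" if score >= 3 else "-> lower risk")
```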
On the Spectrum of the Plenoptic Function.
Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike
2014-02-01
The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.
Bringing nursing science to the classroom: a collaborative project.
Reams, Susan; Bashford, Carol
2009-01-01
This project resulted from a collaborative effort between a public school system and nursing faculty. The fifth-grade students in this study were covering the skeletal, muscular, digestive, circulatory, respiratory, and nervous systems as part of their school system's existing science and health curriculum. The intent of the study was to evaluate the impact on student learning outcomes of nursing-focused, science-based, hands-on experiential activities provided by nursing faculty in the public school setting. An assessment tool was created for pretesting and posttesting to evaluate learning outcomes resulting from the intervention. Over a two-day period, six classes of 25 to 30 students each were divided into three equal small groups and rotated among three interactive stations. Students explored the normal function of the digestive system, heart, lungs, and skin. Improvements in learning were documented using the pretest and posttest assessment tools.
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw data sets using biologically sensible queries, nor do they allow real-time customization of graphics through user interaction. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under the BSD license both as an online version and as a stand-alone version at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.
ABACAS: algorithm-based automatic contiguation of assembled sequences
Assefa, Samuel; Keane, Thomas M.; Otto, Thomas D.; Newbold, Chris; Berriman, Matthew
2009-01-01
Summary: Due to the availability of new sequencing technologies, we are now increasingly interested in sequencing closely related strains of existing finished genomes. Recently a number of de novo and mapping-based assemblers have been developed to produce high-quality draft genomes from new sequencing technology reads. New tools are necessary to take contigs from a draft assembly through to a fully contiguated genome sequence. ABACAS is intended as a tool to rapidly contiguate (align, order, orientate), visualize and design primers to close gaps on shotgun-assembled contigs based on a reference sequence. The input to ABACAS is a set of contigs, which are aligned to the reference genome, ordered and orientated, and visualized in the ACT comparative browser; optimal primer sequences are generated automatically. Availability and Implementation: ABACAS is implemented in Perl and is freely available for download from http://abacas.sourceforge.net Contact: sa4@sanger.ac.uk PMID:19497936
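To make the contiguation step concrete, the following Python sketch orders and orients contigs from precomputed reference alignments. The hit tuple format is a hypothetical simplification, and ABACAS itself is a Perl tool built around reference alignment, so this is illustrative only.

    # Sketch of the core contiguation idea: order and orient contigs using their
    # alignment coordinates on a reference. Assumes precomputed hits of the form
    # (contig_id, ref_start, ref_end, strand); purely illustrative.

    def contiguate(hits):
        """Return contigs sorted by reference start, with orientation flags."""
        ordered = sorted(hits, key=lambda h: h[1])          # order by ref_start
        layout = []
        for contig_id, ref_start, ref_end, strand in ordered:
            layout.append({
                "contig": contig_id,
                "start": ref_start,
                "end": ref_end,
                "reverse_complement": strand == "-",        # orient against reference
            })
        return layout

    hits = [("ctg3", 12000, 15000, "-"), ("ctg1", 100, 5000, "+"), ("ctg2", 5200, 11800, "+")]
    for row in contiguate(hits):
        print(row)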
Controlling QoS in a collaborative multimedia environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfano, M.; Sigle, R.
1996-12-31
A collaborative multimedia environment allows users to work remotely on common projects by sharing applications (e.g., CAD tools, text editors, white boards) while communicating audiovisually. Several dedicated applications (e.g., MBone tools) exist for transmitting video, audio and data between users. Because they were developed for the Internet, which provides no Quality of Service (QoS) guarantees, these applications support user specification of QoS requirements only partially or not at all. In addition, they all come with different user interfaces. In this paper we first discuss the problems that we experienced at both the host and network levels when executing a multimedia application and varying its resource requirements. We then present the architectural details of a collaborative multimedia environment (CME) that we have been developing to help a user set up and control a collaborative multimedia session.
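As a purely hypothetical illustration of what a per-stream QoS specification in such an environment might contain, the Python sketch below defines a small data structure with bandwidth, delay, jitter, and loss fields. The field names and units are assumptions, not the CME's actual interface.

    # Hypothetical per-stream QoS specification such as a collaborative multimedia
    # environment might collect from the user. Field names and units are assumed.

    from dataclasses import dataclass

    @dataclass
    class QoSRequirement:
        media: str                 # "audio", "video", or "data"
        min_bandwidth_kbps: int
        max_delay_ms: int
        max_jitter_ms: int
        max_loss_percent: float

    session = [
        QoSRequirement("audio", min_bandwidth_kbps=64, max_delay_ms=150,
                       max_jitter_ms=30, max_loss_percent=1.0),
        QoSRequirement("video", min_bandwidth_kbps=1500, max_delay_ms=250,
                       max_jitter_ms=50, max_loss_percent=2.0),
    ]
    print(sum(r.min_bandwidth_kbps for r in session), "kbps minimum aggregate bandwidth")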
Sandia Advanced MEMS Design Tools v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor R.; Allen, James J.; Lantz, Jeffrey W.
This is a major revision to the Sandia Advanced MEMS Design Tools. It replaces all previous versions. New features in this version: revised to support AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) describe the SUMMiT V fabrication process; b) provide enabling educational information (including pictures, videos, and technical information); c) facilitate the process of designing MEMS with the SUMMiT process (prototype file, Design Rule Checker, Standard Parts Library); d) facilitate the process of having MEMS fabricated at Sandia National Laboratories; and e) facilitate the process of having post-fabrication services performed. While some files on the CD are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
NASA Technical Reports Server (NTRS)
Soileau, Kerry M.; Baicy, John W.
2008-01-01
Rig Diagnostic Tools is a suite of applications designed to allow an operator to monitor the status and health of complex networked systems using a unique interface between Java applications and UNIX scripts. The suite consists of Java applications, C scripts, VxWorks applications, UNIX utilities, C programs, and configuration files. The UNIX scripts retrieve data from the system and write them to a certain set of files. The Java side monitors these files and presents the data in user-friendly formats for operators to use in making troubleshooting decisions. This design allows for rapid prototyping and expansion of higher-level displays without affecting the basic data-gathering applications. The suite is designed to be extensible, with the ability to add new system components in building block fashion without affecting existing system applications. This allows for monitoring of complex systems for which unplanned shutdown time comes at a prohibitive cost.
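The data flow described above (scripts write status data to a known set of files, a display layer watches them and re-renders) can be illustrated with a short Python sketch of a file-polling monitor. The real suite uses UNIX scripts and Java; the file name and JSON format here are assumptions.

    # Sketch of the producer/consumer file pattern: a collector writes status data
    # to a known file, and the display side polls the file and refreshes when it
    # changes. Illustrative only; not the suite's Java/UNIX implementation.

    import json, os, time

    STATUS_FILE = "rig_status.json"   # hypothetical data file written by a collector script

    def poll_status(path, interval_s=2.0, cycles=3):
        last_mtime = 0.0
        for _ in range(cycles):
            if os.path.exists(path):
                mtime = os.path.getmtime(path)
                if mtime != last_mtime:              # only re-read when the file changes
                    last_mtime = mtime
                    with open(path) as fh:
                        status = json.load(fh)
                    print("updated:", status)        # a GUI would re-render here
            time.sleep(interval_s)

    # poll_status(STATUS_FILE)   # uncomment to run against a collector writing JSON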
Healey, Lucy; Humphreys, Cathy; Howe, Keran
2013-01-01
Women with disabilities experience violence at greater rates than other women, yet their access to domestic violence services is more limited. This limitation is mirrored in domestic violence sector standards, which often fail to include the specific issues for women with disabilities. This article has a dual focus: to outline a set of internationally transferrable standards for inclusive practice with women with disabilities affected by domestic violence, and to report the results of a documentary analysis of domestic violence service standards, codes of practice, and practice guidelines. It draws on the Building the Evidence (BtE) research and advocacy project in Victoria, Australia, in which a matrix tool was developed to identify minimum standards to support the inclusion of women with disabilities in existing domestic violence sector standards. This tool is designed to interrogate domestic violence sector standards for their attention to women with disabilities.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of data generation and analysis. In particular, the rate of increase in DNA sequencing is significantly outstripping the rate of increase in disk storage capacity, and data volumes may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
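For context on why specialized DNA compressors are worthwhile, the Python sketch below shows the naive 2-bits-per-base packing that such tools must outperform. It is a baseline illustration only and does not reproduce SeqCompress's statistical model or arithmetic coder.

    # Baseline illustration only: pack A/C/G/T into 2 bits per base. SeqCompress
    # itself uses a statistical model plus arithmetic coding, not shown here.

    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq: str) -> bytes:
        bits = 0
        for ch in seq:
            bits = (bits << 2) | CODE[ch]
        # prepend the base count so unpacking knows how many bases to emit
        return len(seq).to_bytes(4, "big") + bits.to_bytes((2 * len(seq) + 7) // 8, "big")

    def unpack(blob: bytes) -> str:
        n = int.from_bytes(blob[:4], "big")
        bits = int.from_bytes(blob[4:], "big")
        return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

    seq = "ACGTACGTTTGACA"
    assert unpack(pack(seq)) == seq
    print(len(seq), "bases packed into", len(pack(seq)), "bytes")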
NASA Astrophysics Data System (ADS)
Kraljić, K.; Strüngmann, L.; Fimmel, E.; Gumbel, M.
2018-01-01
The genetic code is degenerate, and it is assumed that this redundancy provides error detection and correction mechanisms in the translation process. However, the biological meaning of the code's structure is still a subject of current research. This paper presents the Genetic Code Analysis Toolkit (GCAT), which provides workflows and algorithms for analyzing the structure of nucleotide sequences. In particular, sets or sequences of codons can be transformed and tested for circularity, comma-freeness, dichotomic partitions and other properties. GCAT comes with an editor custom-built for working with the genetic code and a batch mode for multi-sequence processing. With the ability to read FASTA files or load sequences from GenBank, the tool can be used for the mathematical and statistical analysis of existing sequence data. GCAT is Java-based and provides a plug-in concept for extensibility. Availability: open source; homepage: http://www.gcat.bio/
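One of the properties mentioned above, comma-freeness, has a compact standard definition: no codon of the set may appear in a shifted reading frame across the junction of any two concatenated code words. The Python sketch below implements that textbook test; it is illustrative and is not GCAT's own (Java-based) implementation.

    # Standard comma-freeness test for a set of codons: concatenate any two code
    # words; a comma-free code contains none of the codons read in the two
    # shifted frames across the junction.

    def is_comma_free(codons):
        code = set(codons)
        for x in code:
            for y in code:
                w = x + y                                  # 6-letter concatenation
                if w[1:4] in code or w[2:5] in code:       # the two shifted reading frames
                    return False
        return True

    print(is_comma_free({"ACG", "TTC"}))   # True for this small example set
    print(is_comma_free({"AAA"}))          # False: AAA+AAA reads AAA in every frame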
Sicat, Brigitte Luong; Huynh, Christine; Willett, Rita; Polich, Susan; Mayer, Sallie
2014-01-01
Interprofessional education (IPE) can be hindered by the lack of infrastructure required to support it. We developed a clinical IPE experience for medical and pharmacy students built upon an existing infrastructure. We created tools to orient students to IPE and had students participate in pharmacist-led and physician-led IPE clinics. Results from the surveys indicated that after participating in the IPE experience, there were no significant changes in attitudes toward interprofessional teamwork or attitudes toward different members of the healthcare team. Students found less value in tools outlining roles and responsibilities of team members, on-line modules about the other profession, and IPE group discussion. They placed more value on the actual clinical experience. Themes derived from analysis of open-ended survey questions reflected the value that students placed on interprofessional interaction in the setting of direct patient care.
Sandia MEMS Visualization Tools v. 3.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.
This is a revision to the Sandia MEMS Visualization Tools. It replaces all previous versions. New features in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; and c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While some files on the CD are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.
Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, Richard A.; Radford, David C.
2013-12-30
Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals, as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.
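As a generic illustration of the kind of small, reusable signal-processing block such a library collects, the Python sketch below implements a streaming moving-average smoother for digitized pulse samples. It is not the project's graphical or FPGA code, and the toy pulse data are assumptions.

    # Illustrative only: a streaming moving-average smoother, the kind of simple,
    # reusable block a digitized-pulse processing pipeline might chain with
    # triggering and shaping stages.

    from collections import deque

    def moving_average(samples, window=4):
        """Yield the running mean of the last `window` samples (streaming-friendly)."""
        buf, total = deque(maxlen=window), 0
        for s in samples:
            if len(buf) == buf.maxlen:
                total -= buf[0]        # drop the sample about to be evicted
            buf.append(s)
            total += s
            yield total / len(buf)

    pulse = [0, 0, 1, 8, 40, 35, 20, 10, 5, 2, 1, 0]   # toy digitized pulse
    print([round(v, 1) for v in moving_average(pulse)])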
Development and application of CATIA-GDML geometry builder
NASA Astrophysics Data System (ADS)
Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Schetinin, V.
2014-06-01
Due to the conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes, direct conversion of detector geometry in either direction is not feasible. The paper presents an update on the functionality and application practice of the CATIA-GDML geometry builder, first introduced at CHEP2010. This set of CATIAv5 tools has been developed for building an MC-optimized, GEANT4/ROOT-compatible geometry based on an existing CAD model. The model can be exported via the Geometry Description Markup Language (GDML). The builder also allows import and visualization of GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning and quality checks have been implemented. Several use cases are discussed.
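GDML is an XML format, so the volume/placement structure that the builder maps into a CATIA part specification tree can be illustrated with a short parsing sketch. The XML fragment below is a simplified stand-in, not a complete GDML file, and the code is illustrative rather than a full GDML reader.

    # Minimal sketch of reading a volume hierarchy from a GDML-like XML document
    # with xml.etree. Simplified fragment; not a complete GDML file.

    import xml.etree.ElementTree as ET

    GDML_SNIPPET = """
    <gdml>
      <structure>
        <volume name="Chamber">
          <solidref ref="chamberBox"/>
        </volume>
        <volume name="World">
          <solidref ref="worldBox"/>
          <physvol>
            <volumeref ref="Chamber"/>
            <position name="chamberPos" x="0" y="0" z="120"/>
          </physvol>
        </volume>
      </structure>
    </gdml>
    """

    root = ET.fromstring(GDML_SNIPPET)
    for vol in root.find("structure").findall("volume"):
        placements = [pv.find("volumeref").get("ref") for pv in vol.findall("physvol")]
        print(vol.get("name"), "-> places:", placements or "none")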