User-Friendly Tools for Random Matrices: An Introduction
2012-12-03
Slide excerpts from Joel A. Tropp, "User-Friendly Tools for Random Matrices," NIPS tutorial, 3 December 2012. The recoverable content is a fragment of the randomized range-finder algorithm (form the matrix product Y = AΩ, then construct an orthonormal basis Q for the range of Y; see Halko, Martinsson and Tropp, SIAM Rev. 2011) together with pointers for further reading: Tropp 2011, Oliveira 2010, Mackey et al. 2012; "...concentration inequalities..." with L. Mackey et al., submitted 2012; and "User-Friendly Tools for Random Matrices: An Introduction," 2012.
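The slide fragment above preserves the core of the randomized range finder of Halko, Martinsson and Tropp. As a hedged sketch (not Tropp's own code), the NumPy snippet below draws a Gaussian test matrix Omega, forms Y = A·Omega, and orthonormalizes Y; the function name and the oversampling parameter are choices made here for illustration.

```python
import numpy as np

def randomized_range_finder(A, k, oversample=10, seed=None):
    """Halko-Martinsson-Tropp randomized range finder: return an orthonormal Q
    whose columns approximately span the range of A."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))   # Gaussian test matrix
    Y = A @ Omega                                      # sample the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis for range(Y)
    return Q

# Quick check on a rank-20 matrix: the projection error should be tiny.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
Q = randomized_range_finder(A, k=20, seed=1)
print(np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A))
```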
Hsu, Chiung-Wen Julia; Wang, Ching-Chan; Tai, Yi-Ting
2011-01-01
This study argues for the necessity of applying offline contexts to social networking site research and the importance of distinguishing the relationship types of users' counterparts when studying Facebook users' behaviors. To examine the relationship among users' behaviors, their counterparts' relationship types, and the users' perceived acquaintanceships after using Facebook, this study first investigated the tools users most frequently employ when interacting with different types of friends. Users tended to use tools that demand less time and effort and raise fewer privacy concerns with newly acquired friends. This study further examined users' behaviors in terms of closeness and intimacy and their perceived acquaintanceships toward four different types of friends. The study found that users gained more perceived acquaintanceship from less close friends, with whom they interacted more frequently but less intimately. As for closer friends, users tended to use more intimate activities to interact with them. However, these activities did not necessarily occur more frequently than the activities employed with less close friends. It was found that perceived acquaintanceships with closer friends were significantly lower than those with less close friends. This implies that Facebook is a mechanism for new friends, rather than close friends, to become more acquainted.
Biblio-MetReS: A bibliometric network reconstruction application and server
2011-01-01
Background Reconstruction of gene and/or protein networks from automated analysis of the literature is one of the current targets of text mining in biomedical research. Some user-friendly tools already perform this analysis on precompiled databases of abstracts of scientific papers. Other tools allow expert users to elaborate and analyze the full content of a corpus of scientific documents. However, to our knowledge, no user-friendly tool is available that simultaneously analyzes the latest set of scientific documents available online and reconstructs the set of genes referenced in those documents. Results This article presents such a tool, Biblio-MetReS, and compares its functioning and results to those of other widely used user-friendly applications (iHOP, STRING). Under similar conditions, Biblio-MetReS creates networks that are comparable to those of other user-friendly tools. Furthermore, analysis of full-text documents provides more complete reconstructions than those that result from using only the abstract of the document. Conclusions Literature-based automated network reconstruction is still far from providing complete reconstructions of molecular networks. However, its value as an auxiliary tool is high, and it will increase as standards for reporting biological entities and relationships become more widely accepted and enforced. Biblio-MetReS is an application that can be downloaded from http://metres.udl.cat/. It provides an easy-to-use environment for researchers to reconstruct their networks of interest from an always up-to-date set of scientific documents. PMID:21975133
The Wastewater Information System Tool (TWIST) is a downloadable, user-friendly management tool that will allow state and local health departments to effectively inventory and manage small wastewater treatment systems in their jurisdictions.
Xu, Duo; Jaber, Yousef; Pavlidis, Pavlos; Gokcumen, Omer
2017-09-26
Constructing alignments and phylogenies for a given locus from large genome sequencing studies with relevant outgroups allows novel evolutionary and anthropological insights. However, no user-friendly tool has been developed to integrate the thousands of recently available and anthropologically relevant genome sequences to construct complete sequence alignments and phylogenies. Here, we provide VCFtoTree, a user-friendly tool with a graphical user interface that directly accesses online databases to download, parse and analyze genome variation data for regions of interest. Our pipeline combines popular sequence datasets and tree-building algorithms with custom data parsing to generate accurate alignments and phylogenies using all the individuals from the 1000 Genomes Project, the Neanderthal and Denisovan genomes, as well as reference genomes of Chimpanzee and Rhesus Macaque. It can also be applied to other phased human genomes, as well as genomes from other species. The output of our pipeline includes an alignment in FASTA format and a tree file in Newick format. VCFtoTree fulfills the increasing demand for constructing alignments and phylogenies for a given locus from thousands of available genomes. Our software provides a user-friendly interface for a wider audience without prerequisite knowledge in programming. VCFtoTree can be accessed from https://github.com/duoduoo/VCFtoTree_3.0.0 .
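As a hedged illustration of the kind of conversion such a pipeline performs (not the actual VCFtoTree code), the sketch below turns a handful of invented phased variant records into per-haplotype sequences and prints them in FASTA form, one alignment column per variant site.

```python
# Invented phased variant records: (position, REF, ALT, {sample: "0|1", ...}).
variants = [
    (101, "A", "G", {"NA00001": "0|1", "NA00002": "1|1"}),
    (157, "C", "T", {"NA00001": "0|0", "NA00002": "0|1"}),
]

def haplotype_sequences(variants, samples):
    """Build one sequence per haplotype, one alignment column per variant site."""
    seqs = {f"{s}_h{h}": [] for s in samples for h in (0, 1)}
    for pos, ref, alt, calls in variants:
        for s in samples:
            gt = calls[s].split("|")                    # phased genotype, e.g. "0|1"
            for h in (0, 1):
                seqs[f"{s}_h{h}"].append(alt if gt[h] == "1" else ref)
    return {name: "".join(bases) for name, bases in seqs.items()}

for name, seq in haplotype_sequences(variants, ["NA00001", "NA00002"]).items():
    print(f">{name}\n{seq}")                            # FASTA-style output
```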
Sustainability-based decision making is a challenging process that requires balancing trade-offs among social, economic, and environmental components. System Dynamics (SD) models can be useful tools to inform sustainability-based decision making because they provide a holistic co...
APM_GUI: analyzing particle movement on the cell membrane and determining confinement.
Menchón, Silvia A; Martín, Mauricio G; Dotti, Carlos G
2012-02-20
Single-particle tracking is a powerful tool for tracking individual particles with high precision. It provides useful information that allows the study of diffusion properties as well as the dynamics of movement. Changes in particle movement behavior, such as transitions between Brownian motion and temporary confinement, can reveal interesting biophysical interactions. Although useful applications exist to determine the paths of individual particles, only a few software implementations are available to analyze these data, and these implementations are generally not user-friendly and do not have a graphical interface. Here, we present APM_GUI (Analyzing Particle Movement), which is a MatLab-implemented application with a Graphical User Interface. This user-friendly application detects confined movement, treating confinement as non-random when a particle remains in a region longer than a Brownian diffusant would remain. In addition, APM_GUI exports the results, which allows users to analyze this information using software that they are familiar with. APM_GUI provides an open-source tool that quantifies diffusion coefficients and determines whether trajectories have non-random confinements. It also offers a simple and user-friendly tool that can be used by individuals without programming skills.
USDA-ARS?s Scientific Manuscript database
The Soil and Water Assessment Tool (SWAT) is a basin-scale hydrologic model developed by the US Department of Agriculture-Agricultural Research Service. SWAT's broad applicability, user-friendly model interfaces, and automatic calibration software have led to a rapid increase in the number of new u...
Analyzing Virtual Physics Simulations with Tracker
ERIC Educational Resources Information Center
Claessens, Tom
2017-01-01
In the physics teaching community, Tracker is well known as user-friendly, open-source video analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit some theoretical…
Keller, Rob C.A.
2011-01-01
The Eisenberg plot or hydrophobic moment plot methodology is one of the most frequently used methods of bioinformatics. Bioinformatics is more and more recognized as a helpful tool in Life Sciences in general, and recent developments in approaches recognizing lipid binding regions in proteins are promising in this respect. In this study a bioinformatics approach specialized in identifying lipid binding helical regions in proteins was used to obtain an Eisenberg plot. The validity of the Heliquest generated hydrophobic moment plot was checked and exemplified. This study indicates that the Eisenberg plot methodology can be transferred to another hydrophobicity scale and renders a user-friendly approach which can be utilized in routine checks in protein–lipid interaction and in protein and peptide lipid binding characterization studies. A combined approach seems to be advantageous and results in a powerful tool in the search of helical lipid-binding regions in proteins and peptides. The strength and limitations of the Eisenberg plot approach itself are discussed as well. The presented approach not only leads to a better understanding of the nature of the protein–lipid interactions but also provides a user-friendly tool for the search of lipid-binding regions in proteins and peptides. PMID:22016610
DOT National Transportation Integrated Search
2013-09-01
User-friendly tools are needed for undergraduates to learn about component sizing, powertrain integration, and control strategies for student competitions involving hybrid vehicles. A TK Solver tool was developed at the University of Idaho for th...
The Factor Finder CD-ROM is a user-friendly, searchable tool used to locate exposure factors and sociodemographic data for user-defined populations. Factor Finder improves the ability of exposure assessors, risk assessors, etc., to efficiently locate exposure-related informatio...
Software systems for modeling articulated figures
NASA Technical Reports Server (NTRS)
Phillips, Cary B.
1989-01-01
Research in computer animation and simulation of human task performance requires sophisticated geometric modeling and user interface tools. The software for a research environment should present the programmer with a powerful but flexible substrate of facilities for displaying and manipulating geometric objects, yet ensure that future tools have a consistent and friendly user interface. Jack is a system which provides a flexible and extensible programmer and user interface for displaying and manipulating complex geometric figures, particularly human figures in a 3D working environment. It is a basic software framework for high-performance Silicon Graphics IRIS workstations for modeling and manipulating geometric objects in a general but powerful way. It provides a consistent and user-friendly interface across various applications in computer animation and simulation of human task performance. Currently, Jack provides input and control for applications including lighting specification and image rendering, anthropometric modeling, figure positioning, inverse kinematics, dynamic simulation, and keyframe animation.
Using Optimization to Improve Test Planning
2017-09-01
…make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for the test and evaluation schedulers. Subject terms: schedule optimization, test planning.
USER'S GUIDE FOR GLOED VERSION 1.0 - THE GLOBAL EMISSIONS DATABASE
The document is a user's guide for the EPA-developed, powerful software package, Global Emissions Database (GloED). GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on a country-specific basis. Data can be selected from dat...
NASA Astrophysics Data System (ADS)
Saleh, A.; Niraula, R.; Gallego, O.; Osei, E.; Kannan, N.
2017-12-01
The Nutrient Tracking Tool (NTT) is a user-friendly web-based computer program that estimates nutrient (nitrogen and phosphorus) and sediment losses from fields managed under a variety of cropping patterns and management practices. The NTT includes a user-friendly web-based interface and is linked to the Agricultural Policy Environmental eXtender (APEX) model. It also accesses USDA-NRCS's Web Soil Survey to obtain field, weather, and soil information. NTT provides producers, government officials, and other users with a fast and efficient method of estimating nutrient, sediment, and atmospheric gas (N2O, CO2, and NH4) losses, as well as crop production, under different conservation practice regimes at the farm level. The information obtained from NTT can help producers determine the most cost-effective conservation practice(s) to reduce nutrient and sediment losses while optimizing crop production. In addition, the recent version of NTT (NTTg3) has been developed for countries without access to national databases, such as soils and weather, and has been designed as an easy-to-use APEX interface. NTT is currently being evaluated for trading and other programs in the Chesapeake Bay region and numerous states in the US. During this presentation, the new capabilities of NTTg3 will be described and demonstrated.
CRISPR Primer Designer: Design primers for knockout and chromosome imaging CRISPR-Cas system.
Yan, Meng; Zhou, Shi-Rong; Xue, Hong-Wei
2015-07-01
The clustered regularly interspaced short palindromic repeats (CRISPR)-associated system enables biologists to edit genomes precisely and provides a powerful tool for perturbing endogenous gene regulation, modulation of epigenetic markers, and genome architecture. However, there are concerns about the specificity of the system, especially when it is used to knock out a gene. Previous design tools were mostly either built into websites or run as command-line programs, and none of them both ran locally and provided a user-friendly interface. In addition, with the development of CRISPR-derived systems, such as chromosome imaging, there were still no tools helping users generate specific end-user spacers. We herein present CRISPR Primer Designer for researchers to design primers for CRISPR applications. The program has a user-friendly interface, can analyze BLAST results using multiple parameters, scores each candidate spacer, and generates the primers when using a certain plasmid. In addition, CRISPR Primer Designer runs locally, can be used to search spacer clusters, and exports primers for the CRISPR-Cas system-based chromosome imaging system. © 2014 Institute of Botany, Chinese Academy of Sciences.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, the bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provided with a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. The area under the curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
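Of the three BSA techniques listed, the frequency ratio is the simplest to illustrate. The sketch below is a hedged, stand-alone NumPy illustration of that calculation, not the BSM ArcMAP tool itself; the toy slope-class and landslide rasters are invented for the example.

```python
import numpy as np

def frequency_ratio(factor_classes, hazard_mask):
    """Frequency ratio per class: (% of hazard cells in the class) / (% of all cells in it)."""
    factor_classes = np.asarray(factor_classes).ravel()
    hazard_mask = np.asarray(hazard_mask, dtype=bool).ravel()
    total_cells = factor_classes.size
    total_events = hazard_mask.sum()
    fr = {}
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        pct_events = hazard_mask[in_class].sum() / total_events
        pct_area = in_class.sum() / total_cells
        fr[int(cls)] = pct_events / pct_area            # FR > 1: positive association
    return fr

# Toy rasters: slope classes 1-3 and a binary landslide inventory favouring class 3.
rng = np.random.default_rng(1)
slope_class = rng.integers(1, 4, size=(100, 100))
landslides = (slope_class == 3) & (rng.random((100, 100)) < 0.5)
print(frequency_ratio(slope_class, landslides))
```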
ProGeRF: Proteome and Genome Repeat Finder Utilizing a Fast Parallel Hash Function
Moraes, Walas Jhony Lopes; Rodrigues, Thiago de Souza; Bartholomeu, Daniella Castanheira
2015-01-01
Repetitive element sequences are adjacent, repeating patterns, also called motifs, and can be of different lengths; repetitions can involve their exact or approximate copies. They have been widely used as molecular markers in population biology. Given the sizes of sequenced genomes, various bioinformatics tools have been developed for the extraction of repetitive elements from DNA sequences. However, currently available tools do not provide options for identifying repetitive elements in the genome or proteome, displaying a user-friendly web interface, and performing exhaustive searches. ProGeRF is a web site for extracting repetitive regions from genome and proteome sequences. It was designed to be an efficient, fast, accurate, and, above all, user-friendly web tool, allowing many ways to view and analyse the results. ProGeRF (Proteome and Genome Repeat Finder) is freely available as a stand-alone program, from which the users can download the source code, and as a web tool. It was developed using the hash table approach to extract perfect and imperfect repetitive regions from a (multi)FASTA file, while allowing a linear time complexity. PMID:25811026
NASA Astrophysics Data System (ADS)
Rushi, B. R.; Ellenburg, W. L.; Adams, E. C.; Flores, A.; Limaye, A. S.; Valdés-Pineda, R.; Roy, T.; Valdés, J. B.; Mithieu, F.; Omondi, S.
2017-12-01
SERVIR, a joint NASA-USAID initiative, works to build capacity in Earth observation technologies in developing countries for improved environmental decision making in the arenas of weather and climate, water and disasters, food security, and land use/land cover. SERVIR partners with leading regional organizations in Eastern and Southern Africa, Hindu Kush-Himalaya, the Mekong region, and West Africa to achieve its objectives. SERVIR develops hydrological applications to address specific needs articulated by key stakeholders, and daily rainfall estimates are a vital input for these applications. Satellite-derived rainfall is subject to systematic biases which need to be corrected before it can be used for any hydrologic application such as real-time or seasonal forecasting. SERVIR and the SWAAT team at the University of Arizona have co-developed an open-source and user-friendly tool implementing rainfall bias correction approaches for satellite precipitation products (SPPs). Bias correction tools were developed based on Linear Scaling and Quantile Mapping techniques. A set of SPPs, such as PERSIANN-CCS, TMPA-RT, and CMORPH, are bias corrected using Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) data, which incorporates ground-based precipitation observations. The bias correction tool also contains a component included to improve the monthly mean of CHIRPS using precipitation products of the Global Surface Summary of the Day (GSOD) database developed by the National Climatic Data Center (NCDC). The tool takes input from the command line, which makes it user-friendly and applicable on any operating platform without prior programming skills. This presentation will focus on this bias-correction tool for SPPs, including application scenarios.
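As a hedged illustration of the simpler of the two approaches named above (Linear Scaling), the following NumPy sketch rescales a satellite rainfall series so its mean matches a gauge-informed reference. It is not the SERVIR/SWAAT tool itself, the series are synthetic, and operational tools typically compute the factors per month or per season rather than over a single block.

```python
import numpy as np

def linear_scaling(satellite, reference):
    """Multiplicative linear-scaling bias correction for a rainfall series.

    The satellite series is rescaled so that its mean matches the reference
    (gauge-informed) series over the calibration period.
    """
    satellite = np.asarray(satellite, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return satellite * (reference.mean() / satellite.mean())

# Synthetic daily series in which the satellite product overestimates rain by ~30%.
rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.8, scale=5.0, size=365)
satellite = gauge * 1.3 + rng.normal(0.0, 0.5, size=365).clip(min=0.0)
corrected = linear_scaling(satellite, gauge)
print(satellite.mean().round(2), corrected.mean().round(2), gauge.mean().round(2))
```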
DAnTE: a statistical tool for quantitative analysis of –omics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep
2008-05-03
DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.
Yuan, Tiezheng; Huang, Xiaoyi; Dittmar, Rachel L; Du, Meijun; Kohli, Manish; Boardman, Lisa; Thibodeau, Stephen N; Wang, Liang
2014-03-05
RNA sequencing (RNA-seq) is emerging as a critical approach in biological research. However, its high-throughput advantage is significantly limited by the capacity of bioinformatics tools. The research community urgently needs user-friendly tools to efficiently analyze the complicated data generated by high-throughput sequencers. We developed a standalone tool with graphic user interface (GUI)-based analytic modules, known as eRNA. The capacity for parallel processing and sample management facilitates large data analyses by maximizing hardware usage and freeing users from tediously handling sequencing data. The module "miRNA identification" includes GUIs for raw data reading, adapter removal, sequence alignment, and read counting. The module "mRNA identification" includes GUIs for reference sequences, genome mapping, transcript assembling, and differential expression. The module "Target screening" provides expression profiling analyses and graphic visualization. The module "Self-testing" offers directory setups, sample management, and a check for third-party package dependency. Integration of other GUIs, including Bowtie, miRDeep2, and miRspring, extends the program's functionality. eRNA focuses on the common tools required for the mapping and quantification analysis of miRNA-seq and mRNA-seq data. The software package provides an additional choice for scientists who require a user-friendly computing environment and high-throughput capacity for large data analysis. eRNA is available for free download at https://sourceforge.net/projects/erna/?source=directory.
A user-friendly tool for medical-related patent retrieval.
Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick
2012-01-01
Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aimed at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules, such as chemical query, normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search yielded fairly contrasted results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts. This result should be balanced against the limited evaluation sample. We can also assume that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.
Devlin, Joseph C; Battaglia, Thomas; Blaser, Martin J; Ruggles, Kelly V
2018-06-25
Exploration of large data sets, such as shotgun metagenomic sequence or expression data, by biomedical experts and medical professionals remains a major bottleneck in the scientific discovery process. Although tools for this purpose exist for 16S ribosomal RNA sequencing analysis, there is a growing but still insufficient number of user-friendly interactive visualization workflows for easy data exploration and figure generation. The development of such platforms for this purpose is necessary to accelerate and streamline microbiome laboratory research. We developed the Workflow Hub for Automated Metagenomic Exploration (WHAM!) as a web-based interactive tool capable of user-directed data visualization and statistical analysis of annotated shotgun metagenomic and metatranscriptomic data sets. WHAM! includes exploratory and hypothesis-based gene and taxa search modules for visualizing differences in microbial taxa and gene family expression across experimental groups, and for creating publication quality figures without the need for command line interface or in-house bioinformatics. WHAM! is an interactive and customizable tool for downstream metagenomic and metatranscriptomic analysis providing a user-friendly interface allowing for easy data exploration by microbiome and ecological experts to facilitate discovery in multi-dimensional and large-scale data sets.
Community-Focused Exposure and Risk Screening Tool (C-FERST): Introduction and Demonstration
Public Need: Communities and decision makers are concerned about where they live, work, and play. C-FERST is a user-friendly tool that helps: Identify environmental issues in communities; Learn about these issues; Explore exposure and risk reduction options.
Integration of bus stop counts data with census data for improving bus service.
DOT National Transportation Integrated Search
2016-04-01
This research project produced an open source transit market data visualization and analysis tool suite, The Bus Transit Market Analyst (BTMA), which contains user-friendly GIS mapping and data analytics tools, and state-of-the-art transit demand...
Singh, Vinay Kumar; Ambwani, Sonu; Marla, Soma; Kumar, Anil
2009-10-23
We describe the development of a user-friendly tool that assists in the retrieval of information relating to Cry genes in transgenic crops. The tool also helps in the detection of transformed Cry genes from Bacillus thuringiensis present in transgenic plants by providing suitably designed primers for PCR identification of these genes. The tool, designed on a relational database model, enables easy retrieval of information from the database with simple user queries. It also enables users to access related information about Cry genes present in various databases by interacting with different sources (nucleotide sequences, protein sequences, sequence comparison tools, published literature, conserved domains, evolutionary and structural data). http://insilicogenomics.in/Cry-btIdentifier/welcome.html.
Standards-Based Procedural Phenotyping: The Arden Syntax on i2b2.
Mate, Sebastian; Castellanos, Ixchel; Ganslandt, Thomas; Prokosch, Hans-Ulrich; Kraus, Stefan
2017-01-01
Phenotyping, or the identification of patient cohorts, is a recurring challenge in medical informatics. While there are open source tools such as i2b2 that address this problem by providing user-friendly querying interfaces, these platforms lack the semantic expressiveness to model complex phenotyping algorithms. The Arden Syntax provides procedural programming language constructs designed specifically for medical decision support and knowledge transfer. In this work, we investigate how language constructs of the Arden Syntax can be used for generic phenotyping. We implemented a prototypical tool to integrate i2b2 with an open source Arden execution environment. To demonstrate the applicability of our approach, we used the tool together with an Arden-based phenotyping algorithm to derive statistics about ICU-acquired hypernatremia. Finally, we discuss how the combination of i2b2's user-friendly cohort pre-selection and Arden's procedural expressiveness could benefit phenotyping.
SETAC Short Course: Introduction to interspecies toxicity extrapolation using EPA’s Web-ICE tool
The Web-ICE tool is a user-friendly interface that contains modules to predict acute toxicity to over 500 species of aquatic (algae, invertebrates, fish) and terrestrial (birds and mammals) taxa. The tool contains a suite of over 3000 ICE models developed from a database of over ...
Framework to parameterize and validate APEX to support deployment of the nutrient tracking tool
USDA-ARS?s Scientific Manuscript database
The Agricultural Policy Environmental eXtender (APEX) model is the scientific basis for the Nutrient Tracking Tool (NTT). NTT is an enhanced version of the Nitrogen Trading Tool, a user-friendly web-based computer program originally developed by the USDA. NTT was developed to estimate reductions in...
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
USDA-ARS?s Scientific Manuscript database
Interactive modules for data exploration and visualization (imDEV) is a Microsoft Excel spreadsheet embedded application providing an integrated environment for the analysis of omics data sets with a user-friendly interface. Individual modules were designed to provide toolsets to enable interactive ...
System Dynamics (SD) models are useful for holistic integration of data to evaluate indirect and cumulative effects and inform decisions. Complex SD models can provide key insights into how decisions affect the three interconnected pillars of sustainability. However, the complexi...
Jobs and Economic Development Impact (JEDI) Model: Offshore Wind User Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lantz, E.; Goldberg, M.; Keyser, D.
2013-06-01
The Offshore Wind Jobs and Economic Development Impact (JEDI) model, developed by NREL and MRG & Associates, is a spreadsheet-based input-output tool. JEDI is meant to be a user-friendly and transparent tool to estimate potential economic impacts supported by the development and operation of offshore wind projects. This guide describes how to use the model and provides technical information such as methodology, limitations, and data sources.
Guner, Huseyin; Close, Patrick L; Cai, Wenxuan; Zhang, Han; Peng, Ying; Gregorich, Zachery R; Ge, Ying
2014-03-01
The rapid advancements in mass spectrometry (MS) instrumentation, particularly in Fourier transform (FT) MS, have made the acquisition of high-resolution and high-accuracy mass measurements routine. However, the software tools for the interpretation of high-resolution MS data are underdeveloped. Although several algorithms for the automatic processing of high-resolution MS data are available, there is still an urgent need for a user-friendly interface with functions that allow users to visualize and validate the computational output. Therefore, we have developed MASH Suite, a user-friendly and versatile software interface for processing high-resolution MS data. MASH Suite contains a wide range of features that allow users to easily navigate through data analysis, visualize complex high-resolution MS data, and manually validate automatically processed results. Furthermore, it provides easy, fast, and reliable interpretation of top-down, middle-down, and bottom-up MS data. MASH Suite is convenient, easily operated, and freely available. It can greatly facilitate the comprehensive interpretation and validation of high-resolution MS data with high accuracy and reliability.
Forecasting and communicating the potential outcomes of decision options requires support tools that aid in evaluating alternative scenarios in a user-friendly context and that highlight variables relevant to the decision options and valuable to stakeholders. Envision is a GIS-base...
Alkahest NuclearBLAST : a user-friendly BLAST management and analysis system
Diener, Stephen E; Houfek, Thomas D; Kalat, Sam E; Windham, DE; Burke, Mark; Opperman, Charles; Dean, Ralph A
2005-01-01
Background - Sequencing of EST and BAC end datasets is no longer limited to large research groups. Drops in per-base pricing have made high throughput sequencing accessible to individual investigators. However, there are few options available which provide a free and user-friendly solution to the BLAST result storage and data mining needs of biologists. Results - Here we describe NuclearBLAST, a batch BLAST analysis, storage and management system designed for the biologist. It is a wrapper for NCBI BLAST which provides a user-friendly web interface which includes a request wizard and the ability to view and mine the results. All BLAST results are stored in a MySQL database which allows for more advanced data-mining through supplied command-line utilities or direct database access. NuclearBLAST can be installed on a single machine or clustered amongst a number of machines to improve analysis throughput. NuclearBLAST provides a platform which eases data-mining of multiple BLAST results. With the supplied scripts, the program can export data into a spreadsheet-friendly format, automatically assign Gene Ontology terms to sequences and provide bi-directional best hits between two datasets. Users with SQL experience can use the database to ask even more complex questions and extract any subset of data they require. Conclusion - This tool provides a user-friendly interface for requesting, viewing and mining of BLAST results which makes the management and data-mining of large sets of BLAST analyses tractable to biologists. PMID:15958161
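For readers unfamiliar with what a batch BLAST wrapper automates, the hedged sketch below runs a single blastn search with tabular output and keeps the best hit per query. It assumes a local NCBI BLAST+ installation and a pre-built nucleotide database; the file and database names are illustrative and are not part of NuclearBLAST.

```python
# Assumes NCBI BLAST+ is installed locally and a nucleotide database "mydb"
# has been built with makeblastdb; paths and names are illustrative only.
import csv
import subprocess

subprocess.run(
    [
        "blastn",
        "-query", "ests.fasta",        # hypothetical FASTA of EST sequences
        "-db", "mydb",                 # hypothetical pre-built BLAST database
        "-outfmt", "6",                # tabular output, easy to load into SQL/spreadsheets
        "-evalue", "1e-10",
        "-out", "blast_results.tsv",
    ],
    check=True,
)

# Keep the best hit per query, in the spirit of a spreadsheet-friendly export.
best = {}
with open("blast_results.tsv") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        query, subject, bitscore = row[0], row[1], float(row[11])
        if query not in best or bitscore > best[query][1]:
            best[query] = (subject, bitscore)
print(best)
```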
Raacke, John; Bonds-Raacke, Jennifer
2008-04-01
The increased use of the Internet as a new tool in communication has changed the way people interact. This fact is even more evident in the recent development and use of friend-networking sites. However, no research has evaluated these sites and their impact on college students. Therefore, the present study was conducted to evaluate: (a) why people use these friend-networking sites, (b) what the characteristics are of the typical college user, and (c) what uses and gratifications are met by using these sites. Results indicated that the vast majority of college students are using these friend-networking sites for a significant portion of their day for reasons such as making new friends and locating old friends. Additionally, both men and women of traditional college age are equally engaging in this form of online communication with this result holding true for nearly all ethnic groups. Finally, results showed that many uses and gratifications are met by users (e.g., "keeping in touch with friends"). Results are discussed in light of the impact that friend-networking sites have on communication and social needs of college students.
Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox
Sato, João R.; Basilio, Rodrigo; Paiva, Fernando F.; Garrido, Griselda J.; Bramati, Ivanei E.; Bado, Patricia; Tovar-Moll, Fernanda; Zahn, Roland; Moll, Jorge
2013-01-01
The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available. PMID:24312569
Professional Learning in the Digital Age: The Educator's Guide to User-Generated Learning
ERIC Educational Resources Information Center
Swanson, Kristen
2013-01-01
Discover how to transform your professional development and become a truly connected educator with user-generated learning! This book shows educators how to enhance their professional learning using practical tools, strategies, and online resources. With beginner-friendly, real-world examples and simple steps to get started, the author shows how…
Department of Defense Travel Reengineering Pilot Report to Congress
1997-06-01
Electronic Commerce/Electronic Data Interchange (EC/EDI) capabilities to integrate functions, automate edit checks for internal controls, and create user-friendly management tools at all levels of the process.
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P. A.; Brown, C.
2015-12-01
A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
Tools for Energized Teaching: Revitalize Instruction with Ease
ERIC Educational Resources Information Center
Wilson, Kenneth
2006-01-01
"Challenge yourself to break out of your old routines. Think anew." Ken Wilson, educator, trainer and consultant has assembled a versatile, practical and generic book to use across disciplines and with all age levels. This collection of accessible, user-friendly tools incorporates and connects current education research--without the jargon. "This…
ALPHACAL: A new user-friendly tool for the calibration of alpha-particle sources.
Timón, A Fernández; Vargas, M Jurado; Gallardo, P Álvarez; Sánchez-Oro, J; Peralta, L
2018-05-01
In this work, we present and describe the program ALPHACAL, specifically developed for the calibration of alpha-particle sources; it is therefore more user-friendly and less time-consuming than multipurpose codes developed for a wide range of applications. The program is based on the recently developed code AlfaMC, which specifically simulates the transport of alpha particles. Both cylindrical and point sources mounted on the surface of polished backings can be simulated, as is the convention in experimental measurements of alpha-particle sources. In addition to the efficiency calculation and the determination of the backscattering coefficient, some additional tools are available to the user, such as visualization of the energy spectrum and the use of energy cut-off or low-energy tail corrections. ALPHACAL has been implemented in C++ using the Qt library, so it is available for Windows, macOS and Linux platforms. It is free and can be provided upon request to the authors. Copyright © 2018 Elsevier Ltd. All rights reserved.
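The efficiency calculation mentioned above can be illustrated with a much-reduced toy: a hedged Monte Carlo estimate of the geometric efficiency of an on-axis point source facing a bare circular detector, checked against the analytic solid-angle formula. This is not ALPHACAL or AlfaMC; it ignores backscattering, energy loss and extended sources, and the dimensions are invented.

```python
import numpy as np

def point_source_efficiency(distance, radius, n=1_000_000, seed=0):
    """Monte Carlo geometric efficiency for an on-axis point source and a circular detector.

    Sample isotropic directions over the hemisphere facing the detector (cos(theta)
    uniform on [0, 1)) and count tracks with tan(theta) <= radius/distance; the
    other hemisphere never hits, hence the factor 0.5.
    """
    rng = np.random.default_rng(seed)
    cos_t = rng.random(n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    hits = (distance * sin_t <= radius * cos_t).mean()
    return 0.5 * hits

d, a = 5.0, 10.0                                   # source-detector distance, detector radius (mm)
analytic = 0.5 * (1.0 - d / np.sqrt(d**2 + a**2))  # solid-angle fraction Omega / (4*pi)
print(point_source_efficiency(d, a), analytic)
```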
WASP: a Web-based Allele-Specific PCR assay designing tool for detecting SNPs and mutations
Wangkumhang, Pongsakorn; Chaichoompu, Kridsadakorn; Ngamphiw, Chumpol; Ruangrit, Uttapong; Chanprasert, Juntima; Assawamakin, Anunchai; Tongsima, Sissades
2007-01-01
Background Allele-specific (AS) Polymerase Chain Reaction is a convenient and inexpensive method for genotyping Single Nucleotide Polymorphisms (SNPs) and mutations. It is applied in many recent studies including population genetics, molecular genetics and pharmacogenomics. Using existing AS primer design tools leads to a cumbersome process for inexperienced users, since information about the SNP/mutation must be acquired from public databases prior to the design. Furthermore, most of these tools do not offer mismatch enhancement of the designed primers. The available web applications do not provide a user-friendly graphical input interface or intuitive visualization of their primer results. Results This work presents a web-based AS primer design application called WASP. This tool can efficiently design AS primers for human SNPs as well as mutations. To assist scientists with collecting the necessary information about target polymorphisms, this tool provides a local SNP database containing over 10 million SNPs of various populations from the public domain databases NCBI dbSNP, HapMap and JSNP. This database is tightly integrated with the tool so that users can perform the design for existing SNPs without leaving the site. To guarantee the specificity of AS primers, the proposed system incorporates a primer specificity enhancement technique widely used in experimental protocols. In particular, WASP makes use of different destabilizing effects by introducing one deliberate 'mismatch' at the penultimate (second to last of the 3'-end) base of AS primers to improve the resulting AS primers. Furthermore, WASP offers a graphical user interface through scalable vector graphics (SVG) drawing that allows users to select SNPs and graphically visualize designed primers and their conditions. Conclusion WASP offers a tool for designing AS primers for both SNPs and mutations. By integrating the database of known SNPs (using gene ID or rs number), this tool facilitates the awkward process of getting flanking sequences and other related information from public SNP databases. It takes into account the underlying destabilizing effect to ensure the effectiveness of designed primers. With its user-friendly SVG interface, WASP intuitively presents the resulting designed primers, which assists users in exporting or making further adjustments to the design. This software can be freely accessed at . PMID:17697334
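The penultimate-mismatch strategy described above can be sketched in a few lines. The toy function below builds an allele-specific primer ending on the variant base and swaps its second-to-last base; in WASP the substituted base is chosen to tune the destabilizing effect, whereas here a fixed base and an invented flanking sequence are used purely for illustration.

```python
def allele_specific_primer(flank_5prime, allele, length=20, mismatch="A"):
    """Toy allele-specific primer whose 3' end sits on the variant base, with a
    deliberate substitution at the penultimate (second-to-last) base."""
    core = flank_5prime[-(length - 1):] + allele     # primer ends on the SNP allele
    return core[:-2] + mismatch + core[-1]           # swap the penultimate base

flank = "GATTACAGATTACAGATTACAGGC"                   # invented 5' flanking sequence
print(allele_specific_primer(flank, "T"))            # primer for the T allele
print(allele_specific_primer(flank, "C"))            # primer for the C allele
```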
Seed: a user-friendly tool for exploring and visualizing microbial community data.
Beck, Daniel; Dennis, Christopher; Foster, James A
2015-02-15
In this article we present Simple Exploration of Ecological Data (Seed), a data exploration tool for microbial communities. Seed is written in R using the Shiny library. This provides access to powerful R-based functions and libraries through a simple user interface. Seed allows users to explore ecological datasets using principal coordinate analyses, scatter plots, bar plots, hierarchical clustering and heatmaps. Seed is open source and available at https://github.com/danlbek/Seed. danlbek@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
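Seed itself is written in R/Shiny; as a hedged, language-consistent sketch of the principal coordinate analysis it exposes, the following NumPy code performs classical PCoA (double-centre the squared distances, eigendecompose, scale the leading eigenvectors). The toy distance matrix is invented.

```python
import numpy as np

def pcoa(distance_matrix, n_axes=2):
    """Classical principal coordinate analysis (metric MDS) of a distance matrix."""
    D = np.asarray(distance_matrix, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centred Gower matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # sort axes by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_axes] * np.sqrt(np.clip(eigvals[:n_axes], 0.0, None))

# Toy Bray-Curtis-like distances between four samples.
D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.1],
              [0.8, 0.7, 0.1, 0.0]])
print(pcoa(D))
```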
Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D. T.; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung
2014-01-01
Background Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. Results We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. Conclusions CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the source code of CloudDOE to further incorporate more MapReduce bioinformatics tools into CloudDOE and support next-generation big data open source tools, e.g., Hadoop BigTop and Spark. Availability: CloudDOE is distributed under Apache License 2.0 and is freely available at http://clouddoe.iis.sinica.edu.tw/. PMID:24897343
United States - Japan evaluation tools and methods.
DOT National Transportation Integrated Search
2014-01-01
Cooperative systems based on intelligent transportation system (ITS) technologies can deliver significant benefits for all road users and the public, especially in terms of safer, more energy-efficient, and environmentally friendly surface transporta...
Ecoupling server: A tool to compute and analyze electronic couplings.
Cabeza de Vaca, Israel; Acebes, Sandra; Guallar, Victor
2016-07-05
Electron transfer processes are often studied through the evaluation and analysis of the electronic coupling (EC). Since most standard QM codes do not readily provide such a measure, additional user-friendly tools to compute and analyze electronic couplings from external wave functions are of high value. The first server to provide a friendly interface for the evaluation and analysis of electronic couplings under two different approximations (FDC and GMH) is presented in this communication. The Ecoupling server accepts inputs from common QM and QM/MM software and provides useful plots to understand and analyze the results easily. The web server has been implemented in CGI-python using Apache and is accessible at http://ecouplingserver.bsc.es. The Ecoupling server is free and open to all users without login. © 2016 Wiley Periodicals, Inc.
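As a hedged illustration of one of the two approximations named above, the snippet below evaluates the two-state Generalized Mulliken-Hush (GMH) expression; it is not the server's implementation, and the numbers are illustrative only.

```python
import math

def gmh_coupling(delta_e, mu_trans, delta_mu):
    """Two-state Generalized Mulliken-Hush coupling:
    H_ab = mu_12 * dE_12 / sqrt((mu_11 - mu_22)^2 + 4 * mu_12^2)."""
    return mu_trans * delta_e / math.sqrt(delta_mu**2 + 4.0 * mu_trans**2)

# Illustrative values (atomic units): adiabatic gap, transition dipole, dipole difference.
print(gmh_coupling(delta_e=0.05, mu_trans=0.8, delta_mu=6.0))
```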
Ferret: a user-friendly Java tool to extract data from the 1000 Genomes Project.
Limou, Sophie; Taverner, Andrew M; Winkler, Cheryl A
2016-07-15
The 1000 Genomes (1KG) Project provides a near-comprehensive resource on human genetic variation in worldwide reference populations. 1KG variants can be accessed through a browser and through the raw and annotated data that are regularly released on an ftp server. We developed Ferret, a user-friendly Java tool, to easily extract genetic variation information from these large and complex data files. From a locus, gene(s) or SNP(s) of interest, Ferret retrieves genotype data for 1KG SNPs and indels, and computes allelic frequencies for 1KG populations and optionally, for the Exome Sequencing Project populations. By converting the 1KG data into files that can be imported into popular pre-existing tools (e.g. PLINK and HaploView), Ferret offers a straightforward way, even for non-bioinformatics specialists, to manipulate, explore and merge 1KG data with the user's dataset, as well as visualize linkage disequilibrium pattern, infer haplotypes and design tagSNPs. Ferret tool and source code are publicly available at http://limousophie35.github.io/Ferret/ ferret@nih.gov Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
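The allelic-frequency step that Ferret performs can be illustrated with a hedged toy: the function below computes the alternate-allele frequency from VCF-style diploid genotype strings. It is not Ferret's Java code, and the genotype column is invented.

```python
def allele_frequency(genotypes):
    """Alternate-allele frequency from VCF-style diploid genotype strings.

    Accepts phased ("0|1") or unphased ("0/1") calls; missing alleles (".") are skipped.
    """
    alt = total = 0
    for gt in genotypes:
        for allele in gt.replace("|", "/").split("/"):
            if allele == ".":
                continue
            total += 1
            alt += allele != "0"
    return alt / total if total else float("nan")

# Hypothetical genotype column for one SNP across six individuals.
print(allele_frequency(["0|0", "0|1", "1|1", "0|0", "0|1", ".|."]))  # 4 / 10 = 0.4
```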
ALOG user's manual: A Guide to using the spreadsheet-based artificial log generator
Matthew F. Winn; Philip A. Araman; Randolph H. Wynne
2012-01-01
Computer programs that simulate log sawing can be valuable training tools for sawyers, as well as a means of testing different sawing patterns. Most available simulation programs rely on diagrammed-log databases, which can be very costly and time consuming to develop. Artificial Log Generator (ALOG) is a user-friendly Microsoft® Excel®...
PC Software graphics tool for conceptual design of space/planetary electrical power systems
NASA Technical Reports Server (NTRS)
Truong, Long V.
1995-01-01
This paper describes the Decision Support System (DSS), a personal computer software graphics tool for designing conceptual space and/or planetary electrical power systems. By using the DSS, users can obtain desirable system design and operating parameters, such as system weight, electrical distribution efficiency, and bus power. With this tool, a large-scale specific power system was designed in a matter of days. It is an excellent tool to help designers make tradeoffs between system components, hardware architectures, and operation parameters in the early stages of the design cycle. The DSS is a user-friendly, menu-driven tool with online help and a custom graphical user interface. An example design and results are illustrated for a typical space power system with multiple types of power sources, frequencies, energy storage systems, and loads.
DockingApp: a user friendly interface for facilitated docking simulations with AutoDock Vina.
Di Muzio, Elena; Toti, Daniele; Polticelli, Fabio
2017-02-01
Molecular docking is a powerful technique that helps uncover the structural and energetic bases of the interaction between macromolecules and substrates, endogenous and exogenous ligands, and inhibitors. Moreover, this technique plays a pivotal role in accelerating the screening of large libraries of compounds for drug development purposes. The need to promote community-driven drug development efforts, especially as far as neglected diseases are concerned, calls for user-friendly tools to allow non-expert users to exploit the full potential of molecular docking. Along this path, here is described the implementation of DockingApp, a freely available, extremely user-friendly, platform-independent application for performing docking simulations and virtual screening tasks using AutoDock Vina. DockingApp sports an intuitive graphical user interface which greatly facilitates both the input phase and the analysis of the results, which can be visualized in graphical form using the embedded JMol applet. The application comes with the DrugBank set of more than 1400 ready-to-dock, FDA-approved drugs, to facilitate virtual screening and drug repurposing initiatives. Furthermore, other databases of compounds such as ZINC, available also in AutoDock format, can be readily and easily plugged in.
Comparison Analysis among Large Amount of SNS Sites
NASA Astrophysics Data System (ADS)
Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro
In recent years, Social Networking Services (SNS) and blogs have grown as new communication tools on the Internet. Several large-scale SNS sites are prospering, while many sites of relatively small scale offer services as well. Such small-scale SNSs support small-group, isolated forms of communication that neither mixi nor MySpace can provide. However, most studies of SNS concern particular large-scale SNSs and cannot determine whether their findings reflect general features or characteristics specific to those sites. From the standpoint of comparative SNS analysis, comparing only a handful of sites cannot reach a statistically significant level. We analyze many SNS sites with the aim of classifying them using several approaches. This paper classifies 50,000 small-scale SNS sites and characterizes them in terms of network structure, patterns of communication, and growth rate. The analysis of network structure shows that many SNS sites have the small-world property, with short path lengths and high clustering coefficients. The degree distributions of the SNS sites are close to a power law. This result indicates that the small-scale SNS sites have a higher percentage of users with many friends than mixi does. According to the analysis of assortativity coefficients, these SNS sites have negative assortativity, meaning that users with high degree tend to connect to users with low degree. Next, we analyze the patterns of user communication. A friend network on an SNS is explicit, while users' communication behaviors define an implicit network. What kind of relationship do these networks have? To address this question, we derive characteristics of users' communication structure and activation patterns on the SNS sites. Using two new indexes, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate have active diary postings and comments. Moreover, sites with a high friend aggregation rate and a high friend coverage rate become activated when hub users with high degree do not behave actively, whereas sites with a low friend aggregation rate and a high friend coverage rate become activated when hub users do behave actively. Finally, we examine SNS sites whose user numbers are increasing considerably, from the viewpoint of network structure, and extract the characteristics of high-growth SNS sites. Discrimination based on decision tree analysis recognizes the high-growth SNS sites with a high degree of accuracy. This approach also suggests that mixi and the small-scale SNS sites have different character traits.
Microsoft Producer: A Software Tool for Creating Multimedia PowerPoint[R] Presentations
ERIC Educational Resources Information Center
Leffingwell, Thad R.; Thomas, David G.; Elliott, William H.
2007-01-01
Microsoft[R] Producer[R] is a powerful yet user-friendly PowerPoint companion tool for creating on-demand multimedia presentations. Instructors can easily distribute these presentations via compact disc or streaming media over the Internet. We describe the features of the software, system requirements, and other required hardware. We also describe…
ERIC Educational Resources Information Center
O'Hara, Lyndsay; Bryce, Elizabeth Ann; Scharf, Sydney; Yassi, Annalee
2012-01-01
A user-friendly, high quality workplace assessment field guide and an accompanying worksheet are invaluable tools for recognizing hazards in the hospital environment. These tools ensure that both front line workers as well as health and safety and infection control professionals can systematically evaluate hazards and formulate recommendations.…
Student-Produced Podcasts as an Assessment Tool: An Example from Geomorphology
ERIC Educational Resources Information Center
Kemp, Justine; Mellor, Antony; Kotter, Richard; Oosthoek, Jan W.
2012-01-01
The emergence of user-friendly technologies has made podcasting an accessible learning tool in undergraduate teaching. In a geomorphology course, student-produced podcasts were used as part of the assessment in 2008-2010. Student groups constructed radio shows aimed at a general audience to interpret and communicate geomorphological data within…
FIESTA—An R estimation tool for FIA analysts
Tracey S. Frescino; Paul L. Patterson; Gretchen G. Moisen; Elizabeth A. Freeman
2015-01-01
FIESTA (Forest Inventory ESTimation for Analysis) is a user-friendly R package that was originally developed to support the production of estimates consistent with current tools available for the Forest Inventory and Analysis (FIA) National Program, such as FIDO (Forest Inventory Data Online) and EVALIDator. FIESTA provides an alternative data retrieval and reporting...
Developing user-friendly habitat suitability tools from regional stream fish survey data
Zorn, T.G.; Seelbach, P.; Wiley, M.J.
2011-01-01
We developed user-friendly fish habitat suitability tools (plots) for fishery managers in Michigan; these tools are based on driving habitat variables and fish population estimates for several hundred stream sites throughout the state. We generated contour plots to show patterns in fish biomass for over 60 common species (and for 120 species grouped at the family level) in relation to axes of catchment area and low-flow yield (90% exceedance flow divided by catchment area) and also in relation to axes of mean and weekly range of July temperatures. The plots showed distinct patterns in fish habitat suitability at each level of biological organization studied and were useful for quantitatively comparing river sites. We demonstrate how these plots can be used to support stream management, and we provide examples pertaining to resource assessment, trout stocking, angling regulations, chemical reclamation of marginal trout streams, indicator species, instream flow protection, and habitat restoration. These straightforward and effective tools are electronically available so that managers can easily access and incorporate them into decision protocols and presentations.
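The plots described above are contour surfaces of fish biomass over axes of catchment area and low-flow yield. A minimal matplotlib sketch of that kind of plot follows; the biomass response surface and axis ranges are invented for illustration and are not the Michigan survey data.

```python
# Sketch: a habitat-suitability-style contour plot over catchment area and
# low-flow yield axes. The biomass surface here is synthetic, for illustration.
import numpy as np
import matplotlib.pyplot as plt

catchment = np.logspace(1, 4, 100)               # catchment area, km^2 (hypothetical range)
lowflow_yield = np.linspace(0.001, 0.02, 100)    # 90% exceedance flow divided by area
A, Y = np.meshgrid(catchment, lowflow_yield)

# Invented unimodal response: biomass peaks at mid-sized, high-yield streams.
biomass = np.exp(-((np.log10(A) - 2.5) ** 2) / 0.8) * (Y / Y.max())

fig, ax = plt.subplots()
cs = ax.contourf(A, Y, biomass, levels=10, cmap="viridis")
ax.set_xscale("log")
ax.set_xlabel("Catchment area (km$^2$)")
ax.set_ylabel("Low-flow yield (m$^3$ s$^{-1}$ km$^{-2}$)")
fig.colorbar(cs, label="Relative biomass")
plt.show()
```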
An Intuitive Dashboard for Bayesian Network Inference
NASA Astrophysics Data System (ADS)
Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.
2014-03-01
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
Chipster: user-friendly analysis software for microarray and other high-throughput data.
Kallio, M Aleksi; Tuimala, Jarno T; Hupponen, Taavi; Klemelä, Petri; Gentile, Massimiliano; Scheinin, Ilari; Koski, Mikko; Käki, Janne; Korpelainen, Eija I
2011-10-14
The growth of high-throughput technologies such as microarrays and next generation sequencing has been accompanied by active research in data analysis methodology, producing new analysis methods at a rapid pace. While most of the newly developed methods are freely available, their use requires substantial computational skills. In order to enable non-programming biologists to benefit from the method development in a timely manner, we have created the Chipster software. Chipster (http://chipster.csc.fi/) brings a powerful collection of data analysis methods within the reach of bioscientists via its intuitive graphical user interface. Users can analyze and integrate different data types such as gene expression, miRNA and aCGH. The analysis functionality is complemented with rich interactive visualizations, allowing users to select datapoints and create new gene lists based on these selections. Importantly, users can save the performed analysis steps as reusable, automatic workflows, which can also be shared with other users. Being a versatile and easily extendable platform, Chipster can be used for microarray, proteomics and sequencing data. In this article we describe its comprehensive collection of analysis and visualization tools for microarray data using three case studies. Chipster is a user-friendly analysis software for high-throughput data. Its intuitive graphical user interface enables biologists to access a powerful collection of data analysis and integration tools, and to visualize data interactively. Users can collaborate by sharing analysis sessions and workflows. Chipster is open source, and the server installation package is freely available.
MolIDE: a homology modeling framework you can click with.
Canutescu, Adrian A; Dunbrack, Roland L
2005-06-15
Molecular Integrated Development Environment (MolIDE) is an integrated application designed to provide homology modeling tools and protocols under a uniform, user-friendly graphical interface. Its main purpose is to combine the most frequent modeling steps in a semi-automatic, interactive way, guiding the user from the target protein sequence to the final three-dimensional protein structure. The typical basic homology modeling process is composed of building sequence profiles of the target sequence family, secondary structure prediction, sequence alignment with PDB structures, assisted alignment editing, side-chain prediction and loop building. All of these steps are available through a graphical user interface. MolIDE's user-friendly and streamlined interactive modeling protocol allows the user to focus on the important modeling questions, hiding from the user the raw data generation and conversion steps. MolIDE was designed from the ground up as an open-source, cross-platform, extensible framework. This allows developers to integrate additional third-party programs to MolIDE. http://dunbrack.fccc.edu/molide/molide.php rl_dunbrack@fccc.edu.
Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel
2016-05-01
The purpose of this study was to develop a multi-platform automatic software tool for full processing of fMRI rodent studies. Existing tools require the use of several different plug-ins, significant user interaction, and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks®) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either NIfTI or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series, on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat.
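The reported smoothing kernel of FWHM = 1.2 mm (four times the voxel size) relates to the Gaussian width through sigma = FWHM / (2*sqrt(2*ln 2)). A small sketch of that conversion and the corresponding smoothing step follows; it is an illustration in Python, not fMRat's MATLAB/SPM code, and the 0.3 mm voxel size is inferred from the four-times-voxel statement.

```python
# Sketch: FWHM-specified Gaussian smoothing, as when a kernel of
# FWHM = 1.2 mm is applied to 0.3 mm voxels (four times the voxel size).
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_mm):
    """Smooth a 3-D array with a Gaussian of the given FWHM (in mm)."""
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~ FWHM / 2.355
    return gaussian_filter(volume, sigma=sigma_mm / voxel_mm)

volume = np.random.rand(64, 64, 32)                    # hypothetical fMRI volume
smoothed = smooth_fwhm(volume, fwhm_mm=1.2, voxel_mm=0.3)
```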
Kadenza: Kepler/K2 Raw Cadence Data Reader
NASA Astrophysics Data System (ADS)
Barentsen, Geert; Cardoso, José Vinícius de Miranda
2018-03-01
Kadenza enables time-critical data analyses to be carried out using NASA's Kepler Space Telescope. It enables users to convert Kepler's raw data files into user-friendly Target Pixel Files upon downlink from the spacecraft. The primary motivation for this tool is to enable the microlensing, supernova, and exoplanet communities to create quicklook lightcurves for transient events which require rapid follow-up.
FGMReview: design of a knowledge management tool on female genital mutilation.
Martínez Pérez, Guillermo; Turetsky, Risa
2015-11-01
Web-based literature search engines may not be user-friendly for some readers searching for information on female genital mutilation. This is a traditional practice that has no health benefits, and about 140 million girls and women worldwide have undergone it. In 2012, the website FGMReview was created with the aim to offer a user-friendly, accessible, scalable, and innovative knowledge management tool specialized in female genital mutilation. The design of this website was guided by a conceptual model based on the use of benchmarking techniques and requirements engineering, an area of knowledge from the computer informatics field, influenced by the Transcultural Nursing model. The purpose of this article is to describe this conceptual model. Nurses and other health care providers can use this conceptual model to guide their methodological approach to design and launch other eHealth projects. © The Author(s) 2014.
Thriving as a New Teacher: Tools and Strategies for Your First Year
ERIC Educational Resources Information Center
Eller, John F.; Eller, Sheila A.
2016-01-01
Discover strategies and tools for new teacher success. In this user-friendly guide, the authors draw from best practice and their extensive experience to identify the necessary skills and characteristics to thrive as a new educator. Explore the six critical areas related to teaching that most impact new teachers and their students, from…
Knob manager (KM) operators guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1993-10-08
KM, Knob Manager, is a tool which enables the user to use the SUNDIALS knob box to adjust the settings of the control system. The following are some features of KM: dynamic knob assignments with a user-friendly interface; user-defined gain for each individual knob; graphical displays of the operating range and status of each assigned process variable; backup and restore of one or multiple process variables; and saving the current settings to a file so that they can be recalled from that file in the future.
Array data extractor (ADE): a LabVIEW program to extract and merge gene array data.
Kurtenbach, Stefan; Kurtenbach, Sarah; Zoidl, Georg
2013-12-01
Large data sets from gene expression array studies are publicly available, offering information highly valuable for research across many disciplines, ranging from fundamental to clinical research. Highly advanced bioinformatics tools have been made available to researchers, but a demand persists for user-friendly software allowing researchers to quickly extract expression information for multiple genes from multiple studies. Here, we present a user-friendly LabVIEW program to automatically extract gene expression data for a list of genes from multiple normalized microarray datasets. Functionality was tested for 288 class A G protein-coupled receptors (GPCRs) and expression data from 12 studies comparing normal and diseased human hearts. Results confirmed known regulation of the beta 1 adrenergic receptor and further indicated novel research targets. Although existing software allows for complex data analyses, the LabVIEW-based program presented here, "Array Data Extractor (ADE)", provides users with a tool to retrieve meaningful information from multiple normalized gene expression datasets in a fast and easy way. Further, the graphical programming language used in LabVIEW allows changes to be applied to the program without the need for advanced programming knowledge.
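The core task ADE automates, pulling expression values for a gene list out of several normalized datasets and merging them, can be sketched in a few lines of pandas; the file names, column names, and gene symbols below are hypothetical, and ADE itself is implemented in LabVIEW.

```python
# Sketch: extract expression values for a gene list from several normalized
# datasets and merge them side by side. File and column names are hypothetical.
import pandas as pd

genes_of_interest = ["ADRB1", "ADRB2", "GPR22"]            # e.g. class A GPCRs
study_files = ["study1_normalized.csv", "study2_normalized.csv"]

merged = None
for path in study_files:
    df = pd.read_csv(path, index_col="gene_symbol")
    subset = df.loc[df.index.intersection(genes_of_interest), ["log2_expression"]]
    subset = subset.rename(columns={"log2_expression": path})
    merged = subset if merged is None else merged.join(subset, how="outer")

print(merged)
```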
NCDOT level of service software program for highway capacity manual planning applications.
DOT National Transportation Integrated Search
2006-08-01
The Transportation Planning Branch (TPB) of the North Carolina Department of Transportation (NCDOT) desired a : user-friendly tool for determining highway capacity and service volumes for freeways, multilane highways, arterials, and : two-lane highwa...
CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data.
Ainsbury, Elizabeth A; Vinnikov, Volodymyr; Puig, Pedro; Maznyk, Nataliya; Rothkamm, Kai; Lloyd, David C
2013-08-30
A number of authors have suggested that a Bayesian approach may be most appropriate for analysis of cytogenetic radiation dosimetry data. In the Bayesian framework, probability of an event is described in terms of previous expectations and uncertainty. Previously existing, or prior, information is used in combination with experimental results to infer probabilities or the likelihood that a hypothesis is true. It has been shown that the Bayesian approach increases both the accuracy and quality assurance of radiation dose estimates. New software entitled CytoBayesJ has been developed with the aim of bringing Bayesian analysis to cytogenetic biodosimetry laboratory practice. CytoBayesJ takes a number of Bayesian or 'Bayesian like' methods that have been proposed in the literature and presents them to the user in the form of simple user-friendly tools, including testing for the most appropriate model for distribution of chromosome aberrations and calculations of posterior probability distributions. The individual tools are described in detail and relevant examples of the use of the methods and the corresponding CytoBayesJ software tools are given. In this way, the suitability of the Bayesian approach to biological radiation dosimetry is highlighted and its wider application encouraged by providing a user-friendly software interface and manual in English and Russian. Copyright © 2013 Elsevier B.V. All rights reserved.
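A minimal example of the kind of posterior calculation such a tool performs (not CytoBayesJ's own algorithm) is the conjugate Gamma-Poisson update for the mean dicentric yield per cell; the prior parameters and counts below are hypothetical.

```python
# Sketch: Gamma-Poisson conjugate update for a mean aberration yield per cell.
# Prior parameters and counts are hypothetical illustration values.
from scipy import stats

alpha_prior, beta_prior = 2.0, 10.0      # prior belief: mean yield ~ 0.2 aberrations/cell
dicentrics_observed = 14                  # total dicentrics scored
cells_scored = 500                        # number of cells examined

# A Poisson likelihood with a Gamma prior gives a Gamma posterior.
alpha_post = alpha_prior + dicentrics_observed
beta_post = beta_prior + cells_scored
posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)

print("posterior mean yield:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```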
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the option to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine were done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. The software also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This is user-friendly software to calculate GFR by various plasma sampling methods and blood parameters. It is also a good system for storing raw and processed data for future analysis.
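One of the serum-creatinine estimates mentioned above, the Cockcroft-Gault formula, is simple enough to show as a worked example; this is the published formula rather than the authors' source code, and the patient values are hypothetical.

```python
# Sketch: Cockcroft-Gault creatinine clearance estimate (mL/min).
# Weight in kg, serum creatinine in mg/dL; the patient values are hypothetical.
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female):
    crcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

print(round(cockcroft_gault(60, 70, 1.0, female=False), 1))  # ~77.8 mL/min
```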
FRIEND Engine Framework: a real time neurofeedback client-server system for neuroimaging studies
Basilio, Rodrigo; Garrido, Griselda J.; Sato, João R.; Hoefle, Sebastian; Melo, Bruno R. P.; Pamplona, Fabricio A.; Zahn, Roland; Moll, Jorge
2015-01-01
In this methods article, we present a new implementation of a recently reported FSL-integrated neurofeedback tool, the standalone version of “Functional Real-time Interactive Endogenous Neuromodulation and Decoding” (FRIEND). We will refer to this new implementation as the FRIEND Engine Framework. The framework comprises a client-server cross-platform solution for real time fMRI and fMRI/EEG neurofeedback studies, enabling flexible customization or integration of graphical interfaces, devices, and data processing. This implementation allows a fast setup of novel plug-ins and frontends, which can be shared with the user community at large. The FRIEND Engine Framework is freely distributed for non-commercial, research purposes. PMID:25688193
Ye, Zhan; Kadolph, Christopher; Strenn, Robert; Wall, Daniel; McPherson, Elizabeth; Lin, Simon
2015-01-01
Background Identification and evaluation of incidental findings in patients following whole exome sequencing (WES) or whole genome sequencing (WGS) is challenging for both practicing physicians and researchers. The American College of Medical Genetics and Genomics (ACMG) recently recommended a list of reportable incidental genetic findings. However, no informatics tools are currently available to support evaluation of incidental findings in next-generation sequencing data. Methods The Wisconsin Hierarchical Analysis Tool for Incidental Findings (WHATIF) was developed as a stand-alone Windows-based desktop executable to support the interactive analysis of incidental findings in the context of the ACMG recommendations. WHATIF integrates the European Bioinformatics Institute Variant Effect Predictor (VEP) tool for biological interpretation and the National Center for Biotechnology Information ClinVar tool for clinical interpretation. Results An open-source desktop program was created to annotate incidental findings and present the results with a user-friendly interface. Further, a meaningful index (WHATIF Index) was devised for each gene to facilitate ranking of the relative importance of the variants and to estimate the potential workload associated with further evaluation of the variants. Our WHATIF application is available at: http://tinyurl.com/WHATIF-SOFTWARE Conclusions The WHATIF application offers a user-friendly interface and allows users to investigate the extracted variant information efficiently and intuitively while always accessing up-to-date information on variants via application programming interface (API) connections. WHATIF's highly flexible design and straightforward implementation aid users in customizing the source code to meet their own special needs. PMID:25890833
Bible, Paul W; Kanno, Yuka; Wei, Lai; Brooks, Stephen R; O'Shea, John J; Morasso, Maria I; Loganantharaj, Rasiah; Sun, Hong-Wei
2015-01-01
Comparative co-localization analysis of transcription factors (TFs) and epigenetic marks (EMs) in specific biological contexts is one of the most critical areas of ChIP-Seq data analysis beyond peak calling. Yet there is a significant lack of user-friendly and powerful tools geared towards co-localization analysis based exploratory research. Most tools currently used for co-localization analysis are command line only and require extensive installation procedures and Linux expertise. Online tools partially address the usability issues of command line tools, but slow response times and few customization features make them unsuitable for rapid data-driven interactive exploratory research. We have developed PAPST: Peak Assignment and Profile Search Tool, a user-friendly yet powerful platform with a unique design, which integrates both gene-centric and peak-centric co-localization analysis into a single package. Most of PAPST's functions can be completed in less than five seconds, allowing quick cycles of data-driven hypothesis generation and testing. With PAPST, a researcher with or without computational expertise can perform sophisticated co-localization pattern analysis of multiple TFs and EMs, either against all known genes or a set of genomic regions obtained from public repositories or prior analysis. PAPST is a versatile, efficient, and customizable tool for genome-wide data-driven exploratory research. Creatively used, PAPST can be quickly applied to any genomic data analysis that involves a comparison of two or more sets of genomic coordinate intervals, making it a powerful tool for a wide range of exploratory genomic research. We first present PAPST's general purpose features then apply it to several public ChIP-Seq data sets to demonstrate its rapid execution and potential for cutting-edge research with a case study in enhancer analysis. To our knowledge, PAPST is the first software of its kind to provide efficient and sophisticated post peak-calling ChIP-Seq data analysis as an easy-to-use interactive application. PAPST is available at https://github.com/paulbible/papst and is a public domain work.
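At its core, the co-localization analysis PAPST supports amounts to comparing sets of genomic coordinate intervals. A brute-force sketch of peak-overlap counting follows; the example peaks are hypothetical, and this is not PAPST's implementation, which is an interactive desktop application.

```python
# Sketch: count peaks in set A that overlap at least one peak in set B.
# Intervals are (chromosome, start, end) tuples; the example peaks are hypothetical.
def overlaps(a, b):
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def count_colocalized(peaks_a, peaks_b):
    # Brute force for clarity; sorted/interval-tree approaches scale better.
    return sum(any(overlaps(a, b) for b in peaks_b) for a in peaks_a)

tf_peaks = [("chr1", 100, 250), ("chr1", 900, 1000), ("chr2", 50, 80)]
mark_peaks = [("chr1", 200, 400), ("chr2", 500, 600)]
print(count_colocalized(tf_peaks, mark_peaks))  # 1
```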
ERIC Educational Resources Information Center
Ellen, Moriah E.; Lavis, John N.; Wilson, Michael G.; Grimshaw, Jeremy; Haynes, R. Brian; Ouimet, Mathieu; Raina, Parminder; Gruen, Russell
2014-01-01
Health system managers and policy makers need timely access to high quality, policy-relevant systematic reviews. Our objectives were to obtain managers' and policy makers' feedback about user-friendly summaries of systematic reviews and about tools related to supporting or assessing their use. Our interviews identified that participants prefer key…
Using FIESTA , an R-based tool for analysts, to look at temporal trends in forest estimates
Tracey S. Frescino; Paul L. Patterson; Elizabeth A. Freeman; Gretchen G. Moisen
2012-01-01
FIESTA (Forest Inventory Estimation for Analysis) is a user-friendly R package that supports the production of estimates for forest resources based on procedures from Bechtold and Patterson (2005). The package produces output consistent with current tools available for the Forest Inventory and Analysis National Program, such as FIDO (Forest Inventory Data Online) and...
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background Many cutting-edge microarray analysis tools and algorithms, including commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on limma and affy package from Bioconductor, the spacings LOESS histogram (SPLOSH) method, PCA-assisted normalization method and genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weight, background correction, graphical plotting, normalization, linear modeling, empirical bayes statistical analysis, false discovery rate (FDR) estimation, chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
Google-Earth Based Visualizations for Environmental Flows and Pollutant Dispersion in Urban Areas
Liu, Daoming; Kenjeres, Sasa
2017-01-01
In the present study, we address the development and application of an efficient tool for converting results obtained by an integrated computational fluid dynamics (CFD) and computational reaction dynamics (CRD) approach and visualizing them in Google Earth. We focus on results typical of environmental fluid mechanics studies at a city scale, which include characteristic wind flow patterns and dispersion of reactive scalars. This is achieved by developing a Java-based code, which converts the typical four-dimensional structure (spatial and temporal dependency) of the results into the Keyhole Markup Language (KML) format. The most frequently used visualization techniques are revisited and implemented in the conversion tool. The potential of the tool is demonstrated in a case study of smog formation due to intense traffic emissions in Rotterdam (The Netherlands). It is shown that Google Earth can provide a computationally efficient and user-friendly means of data representation. This feature can be very useful for visualization of pollution at street level, which is of great importance for city residents. Various meteorological conditions and traffic emissions can be easily visualized and analyzed, providing a powerful, user-friendly tool for traffic regulations and urban climate adaptations. PMID:28257078
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation, and it provides a user-friendly environment for sequence analysis. It is freely available at http://www.cemb.edu.pk/sw.html. RDNAnalyzer - Random DNA Analyser; GUI - Graphical user interface; XAML - Extensible Application Markup Language.
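Two of the routine analyses listed above, reverse complement generation and ATGC base content, are simple enough to sketch; this is an illustration in Python, not RDNAnalyzer's C# code, and the input sequence is hypothetical.

```python
# Sketch: reverse complement and base composition of a DNA sequence.
from collections import Counter

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def base_content(seq):
    counts = Counter(seq.upper())
    total = len(seq)
    return {base: 100.0 * counts[base] / total for base in "ATGC"}

seq = "ATGCGTTA"  # hypothetical input sequence
print(reverse_complement(seq))  # TAACGCAT
print(base_content(seq))        # percentages of A, T, G, C
```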
Feasibility of Using Lasers and Infrared Heaters as UNREP Icing Countermeasures
1989-12-29
Mengarelli, Alessandro; Cardarelli, Stefano; Verdini, Federica; Burattini, Laura; Fioretti, Sandro; Di Nardo, Francesco
2016-08-01
In this paper, a graphical user interface (GUI) built in the MATLAB® environment is presented. This interactive tool has been developed for the analysis of surface electromyography (sEMG) signals and, in particular, for the assessment of muscle activation time intervals. After signal import, the tool performs a first analysis in a totally user-independent way, providing a reliable computation of the muscular activation sequences. Furthermore, the user has the opportunity to modify each parameter of the on/off identification algorithm implemented in the tool. The user-friendly GUI allows immediate evaluation of the effect that modifying every single parameter has on the recognition of activation intervals, through real-time updating and visualization of the muscular activation/deactivation sequences. The possibility of accepting the initial signal analysis or modifying the on/off identification for each considered signal, with real-time visual feedback, makes this GUI-based tool a valuable instrument in clinical and research applications, as well as from an educational perspective.
Current trends for customized biomedical software tools.
Khan, Haseeb Ahmad
2017-01-01
In the past, biomedical scientists were solely dependent on expensive commercial software packages for various applications. However, the advent of user-friendly programming languages and open source platforms has revolutionized the development of simple and efficient customized software tools for solving specific biomedical problems. Many of these tools are designed and developed by biomedical scientists independently or with the support of computer experts and often made freely available for the benefit of scientific community. The current trends for customized biomedical software tools are highlighted in this short review.
ProteoSign: an end-user online differential proteomics statistical analysis platform.
Efstathiou, Georgios; Antonakis, Andreas N; Pavlopoulos, Georgios A; Theodosiou, Theodosios; Divanach, Peter; Trudgian, David C; Thomas, Benjamin; Papanikolaou, Nikolas; Aivaliotis, Michalis; Acuto, Oreste; Iliopoulos, Ioannis
2017-07-03
Profiling of proteome dynamics is crucial for understanding cellular behavior in response to intrinsic and extrinsic stimuli and maintenance of homeostasis. Over the last 20 years, mass spectrometry (MS) has emerged as the most powerful tool for large-scale identification and characterization of proteins. Bottom-up proteomics, the most common MS-based proteomics approach, has always been challenging in terms of data management, processing, analysis and visualization, with modern instruments capable of producing several gigabytes of data out of a single experiment. Here, we present ProteoSign, a freely available web application, dedicated in allowing users to perform proteomics differential expression/abundance analysis in a user-friendly and self-explanatory way. Although several non-commercial standalone tools have been developed for post-quantification statistical analysis of proteomics data, most of them are not end-user appealing as they often require very stringent installation of programming environments, third-party software packages and sometimes further scripting or computer programming. To avoid this bottleneck, we have developed a user-friendly software platform accessible via a web interface in order to enable proteomics laboratories and core facilities to statistically analyse quantitative proteomics data sets in a resource-efficient manner. ProteoSign is available at http://bioinformatics.med.uoc.gr/ProteoSign and the source code at https://github.com/yorgodillo/ProteoSign. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
QuadBase2: web server for multiplexed guanine quadruplex mining and visualization
Dhapola, Parashar; Chowdhury, Shantanu
2016-01-01
DNA guanine quadruplexes or G4s are non-canonical DNA secondary structures which affect genomic processes like replication, transcription and recombination. G4s are computationally identified by specific nucleotide motifs which are also called putative G4 (PG4) motifs. Despite the general relevance of these structures, there is currently no tool available that can allow batch queries and genome-wide analysis of these motifs in a user-friendly interface. QuadBase2 (quadbase.igib.res.in) presents a completely reinvented web server version of previously published QuadBase database. QuadBase2 enables users to mine PG4 motifs in up to 178 eukaryotes through the EuQuad module. This module interfaces with Ensembl Compara database, to allow users mine PG4 motifs in the orthologues of genes of interest across eukaryotes. PG4 motifs can be mined across genes and their promoter sequences in 1719 prokaryotes through ProQuad module. This module includes a feature that allows genome-wide mining of PG4 motifs and their visualization as circular histograms. TetraplexFinder, the module for mining PG4 motifs in user-provided sequences is now capable of handling up to 20 MB of data. QuadBase2 is a comprehensive PG4 motif mining tool that further expands the configurations and algorithms for mining PG4 motifs in a user-friendly way. PMID:27185890
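Putative G4 (PG4) motifs are conventionally matched by four runs of three or more guanines separated by loops of one to seven bases. A minimal regex sketch of that canonical pattern follows; QuadBase2 itself supports configurable stem and loop parameters, and the example sequence is hypothetical.

```python
# Sketch: canonical putative G-quadruplex (PG4) motif search,
# four G-runs of length >= 3 separated by loops of 1-7 bases.
import re

PG4_PATTERN = re.compile(
    r"G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}[ATGC]{1,7}G{3,}", re.IGNORECASE
)

def find_pg4_motifs(sequence):
    return [(m.start(), m.group()) for m in PG4_PATTERN.finditer(sequence)]

seq = "ttGGGagGGGttaGGGcGGGatcg"  # hypothetical sequence
print(find_pg4_motifs(seq))       # one motif starting at position 2
```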
User-friendly traffic incident management (TIM) program benefit-cost estimation tool, Version 1.2
DOT National Transportation Integrated Search
2016-01-01
Traffic incidents contribute significantly to the deterioration of the level of service of both freeways and arterials. Traffic Incident Management (TIM) programs have been introduced worldwide with the aim of mitigating the impact of traffic inciden...
SimHap GUI: an intuitive graphical user interface for genetic association analysis.
Carter, Kim W; McCaskie, Pamela A; Palmer, Lyle J
2008-12-25
Researchers wishing to conduct genetic association analysis involving single nucleotide polymorphisms (SNPs) or haplotypes are often confronted with the lack of user-friendly graphical analysis tools, requiring sophisticated statistical and informatics expertise to perform relatively straightforward tasks. Tools, such as the SimHap package for the R statistics language, provide the necessary statistical operations to conduct sophisticated genetic analysis, but lack a graphical user interface that would allow anyone other than a professional statistician to utilise the tool effectively. We have developed SimHap GUI, a cross-platform integrated graphical analysis tool for conducting epidemiological, single SNP and haplotype-based association analysis. SimHap GUI features a novel workflow interface that guides the user through each logical step of the analysis process, making it accessible to both novice and advanced users. This tool provides a seamless interface to the SimHap R package, while providing enhanced functionality such as sophisticated data checking, automated data conversion, and real-time estimations of haplotype simulation progress. SimHap GUI provides a novel, easy-to-use, cross-platform solution for conducting a range of genetic and non-genetic association analyses. This provides a free alternative to commercial statistics packages that is specifically designed for genetic association analysis.
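A single-SNP case-control association test of the kind such tools front-end can be illustrated with a simple allelic chi-square on a 2x2 table; the counts are hypothetical, and SimHap itself fits richer epidemiological and haplotype models in R.

```python
# Sketch: single-SNP allelic association test on a 2x2 table of allele counts.
# Counts are hypothetical; SimHap/SimHap GUI fit richer models (covariates, haplotypes).
from scipy.stats import chi2_contingency

#                 minor allele, major allele
allele_counts = [[180, 820],    # cases
                 [120, 880]]    # controls

chi2, p_value, dof, expected = chi2_contingency(allele_counts)
odds_ratio = (180 * 880) / (820 * 120)
print(f"chi2={chi2:.2f}, p={p_value:.3g}, OR={odds_ratio:.2f}")
```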
AstrodyToolsWeb an e-Science project in Astrodynamics and Celestial Mechanics fields
NASA Astrophysics Data System (ADS)
López, R.; San-Juan, J. F.
2013-05-01
Astrodynamics Web Tools, AstrodyToolsWeb (http://tastrody.unirioja.es), is an ongoing collaborative Web Tools computing infrastructure project which has been specially designed to support scientific computation. AstrodyToolsWeb provides project collaborators with all the technical and human facilities in order to wrap, manage, and use specialized noncommercial software tools in Astrodynamics and Celestial Mechanics fields, with the aim of optimizing the use of resources, both human and material. However, this project is open to collaboration from the whole scientific community in order to create a library of useful tools and their corresponding theoretical backgrounds. AstrodyToolsWeb offers a user-friendly web interface in order to choose applications, introduce data, and select appropriate constraints in an intuitive and easy way for the user. After that, the application is executed in real time, whenever possible; then the critical information about program behavior (errors and logs) and output, including the postprocessing and interpretation of its results (graphical representation of data, statistical analysis or whatever manipulation therein), are shown via the same web interface or can be downloaded to the user's computer.
Translating statistical species-habitat models to interactive decision support tools
Wszola, Lyndsie S.; Simonsen, Victoria L.; Stuber, Erica F.; Gillespie, Caitlyn R.; Messinger, Lindsey N.; Decker, Karie L.; Lusk, Jeffrey J.; Jorgensen, Christopher F.; Bishop, Andrew A.; Fontaine, Joseph J.
2017-01-01
Understanding species-habitat relationships is vital to successful conservation, but the tools used to communicate species-habitat relationships are often poorly suited to the information needs of conservation practitioners. Here we present a novel method for translating a statistical species-habitat model, a regression analysis relating ring-necked pheasant abundance to landcover, into an interactive online tool. The Pheasant Habitat Simulator combines the analytical power of the R programming environment with the user-friendly Shiny web interface to create an online platform in which wildlife professionals can explore the effects of variation in local landcover on relative pheasant habitat suitability within spatial scales relevant to individual wildlife managers. Our tool allows users to virtually manipulate the landcover composition of a simulated space to explore how changes in landcover may affect pheasant relative habitat suitability, and guides users through the economic tradeoffs of landscape changes. We offer suggestions for development of similar interactive applications and demonstrate their potential as innovative science delivery tools for diverse professional and public audiences.
EZ and GOSSIP, two new VO compliant tools for spectral analysis
NASA Astrophysics Data System (ADS)
Franzetti, P.; Garilli, B.; Fumana, M.; Paioro, L.; Scodeggio, M.; Paltani, S.; Scaramella, R.
2008-10-01
We present EZ and GOSSIP, two new VO compliant tools dedicated to spectral analysis. EZ is a tool to perform automatic redshift measurement; GOSSIP is a tool created to perform the SED fitting procedure in a simple, user-friendly and efficient way. These two tools have been developed by the PANDORA Group at INAF-IASF (Milano); EZ has been developed in collaboration with Osservatorio Monte Porzio (Roma) and the Integral Science Data Center (Geneva). EZ is released to the astronomical community; GOSSIP is currently in beta-testing.
Methods for comparative metagenomics
Huson, Daniel H; Richter, Daniel C; Mitra, Suparna; Auch, Alexander F; Schuster, Stephan C
2009-01-01
Background Metagenomics is a rapidly growing field of research that aims at studying uncultured organisms to understand the true diversity of microbes, their functions, cooperation and evolution, in environments such as soil, water, ancient remains of animals, or the digestive system of animals and humans. The recent development of ultra-high throughput sequencing technologies, which do not require cloning or PCR amplification, and can produce huge numbers of DNA reads at an affordable cost, has boosted the number and scope of metagenomic sequencing projects. Increasingly, there is a need for new ways of comparing multiple metagenomics datasets, and for fast and user-friendly implementations of such approaches. Results This paper introduces a number of new methods for interactively exploring, analyzing and comparing multiple metagenomic datasets, which will be made freely available in a new, comparative version 2.0 of the stand-alone metagenome analysis tool MEGAN. Conclusion There is a great need for powerful and user-friendly tools for comparative analysis of metagenomic data and MEGAN 2.0 will help to fill this gap. PMID:19208111
Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter
2017-06-28
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
Visualizing Dynamic Weather and Ocean Data in Google Earth
NASA Astrophysics Data System (ADS)
Castello, C.; Giencke, P.
2008-12-01
Katrina. Climate change. Rising sea levels. Low lake levels. These headlines, and countless others like them, underscore the need to better understand our changing oceans and lakes. Over the past decade, efforts such as the Global Ocean Observing System (GOOS) have added to this understanding through the creation of interoperable ocean observing systems. These systems, including buoy networks, gliders, UAVs, etc., have resulted in a dramatic increase in the amount of Earth observation data available to the public. Unfortunately, these data do not readily lend themselves to mass consumption, owing to large file sizes, incompatible formats, and/or a dearth of user-friendly visualization software. Google Earth offers a flexible way to visualize Earth observation data. Marrying high-resolution orthoimagery, user-friendly query and navigation tools, and the power of OGC's KML standard, Google Earth can make observation data universally understandable and accessible. This presentation will feature examples of meteorological and oceanographic data visualized using KML and Google Earth, along with tools and tips for integrating other such environmental datasets.
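Rendering observations in Google Earth ultimately means writing KML Placemark elements with coordinates and a description. A minimal sketch that turns a couple of observations into a KML file follows; the station names and values are hypothetical.

```python
# Sketch: turn a few hypothetical buoy observations into a minimal KML document.
observations = [
    {"name": "Buoy 45007", "lon": -87.03, "lat": 42.67, "sst_c": 11.2},
    {"name": "Buoy 45002", "lon": -86.41, "lat": 45.34, "sst_c": 9.8},
]

placemarks = "\n".join(
    "  <Placemark>\n"
    "    <name>{name}</name>\n"
    "    <description>Sea surface temperature: {sst_c} degC</description>\n"
    "    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
    "  </Placemark>".format(**obs)
    for obs in observations
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    "<Document>\n" + placemarks + "\n</Document>\n</kml>\n"
)

with open("observations.kml", "w") as f:
    f.write(kml)
```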
NASA Technical Reports Server (NTRS)
Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.
2002-01-01
Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The object of this study was to develop and implement a fast and interactive volume renderer of RT3DE datasets designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real-time throughout the cardiac cycle and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
NASA Astrophysics Data System (ADS)
Chang, C.; Li, M.; Yeh, G.
2010-12-01
The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed with FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions including aqueous complexation, adsorption/desorption, ion-exchange, redox, precipitation/dissolution, acid-base reactions, and microbial mediated reactions were embodied in this unique modeling tool. Any reaction can be treated as a fast/equilibrium or slow/kinetic reaction. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate, microbial mediated enzymatic kinetics, or a user-specified rate equation. None of the existing models has encompassed this wide array of scopes. To ease the input/output learning curve associated with the unique features of BIOGEOCHEM, an interactive graphic user interface was developed with the Microsoft Visual Studio and .Net tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform with the input file format of BIOGEOCHEM. A post-processor for graphic visualizations of simulated results was also embedded for immediate demonstrations. By following the data input windows step by step, errorless BIOGEOCHEM input files can be created even if users have little prior experience with FORTRAN. With this user-friendly interface, the time and effort needed to conduct simulations with BIOGEOCHEM can be greatly reduced.
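To illustrate the distinction between fast/equilibrium and slow/kinetic treatments described above, here is a minimal, generic Python sketch (not BIOGEOCHEM's FORTRAN formulation) for a single hypothetical reversible reaction A <-> B: the kinetic version integrates an elementary rate, while the equilibrium version imposes the mass-action relation directly. The rate constants and initial concentrations are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reversible reaction A <-> B treated two ways (generic illustration only):
# as a slow/kinetic reaction integrated in time, and as a fast/equilibrium
# reaction imposed by a mass-action constraint.
kf, kr = 2.0, 0.5          # hypothetical forward/backward rate constants (1/h)
A0, B0 = 1.0, 0.0          # hypothetical initial concentrations (mol/L)

def rate(t, y):
    A, B = y
    r = kf * A - kr * B    # elementary net rate
    return [-r, r]

kinetic = solve_ivp(rate, (0.0, 10.0), [A0, B0])

# Equilibrium treatment: K = kf/kr fixes B/A while total mass is conserved.
K = kf / kr
total = A0 + B0
A_eq = total / (1.0 + K)
B_eq = total - A_eq

print("kinetic end state:   ", kinetic.y[:, -1])
print("equilibrium state:   ", [A_eq, B_eq])
```

Both treatments converge to the same composition here because the kinetic run is long relative to the relaxation time 1/(kf + kr).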
User-friendly program for multitask analysis
NASA Astrophysics Data System (ADS)
Caporali, Sergio A.; Akladios, Magdy; Becker, Paul E.
2000-10-01
Research on lifting activities has led to the design of several useful tools for evaluating tasks that involve lifting and material handling. The National Institute for Occupational Safety and Health (NIOSH) has developed a single task lifting equation. This formula has been frequently used as a guide in the field of ergonomics and material handling. While being much more complicated, the multi-task formula will provide a more realistic analysis for the evaluation of lifting and material handling jobs. A user friendly tool has been developed to assist professionals in the field of ergonomics in analyzing multitask types of material handling jobs. The program allows for up to 10 different tasks to be evaluated. The program requires a basic understanding of the NIOSH lifting guidelines and the six multipliers that are involved in the analysis of each single task. These multipliers are: Horizontal Distance Multiplier (HM), Vertical Distance Multiplier (VM), Vertical Displacement Multiplier (DM), Frequency of lifting Multiplier (FM), Coupling Multiplier (CM), and the Asymmetry Multiplier (AM). Once a given job is analyzed, a researched list of recommendations is provided to the user in an attempt to reduce the potential risk factors that are associated with each task.
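As background for the multipliers listed above, the following sketch implements the standard single-task revised NIOSH lifting equation in metric units; the multi-task procedure in the described program composites such single-task results and is not reproduced here. The frequency and coupling multipliers (FM, CM) are normally read from the NIOSH tables, so the default values below are illustrative placeholders, as are the task measurements in the usage line.

```python
def recommended_weight_limit(H, V, D, A, FM=0.88, CM=1.0, LC=23.0):
    """Single-task revised NIOSH lifting equation (metric form).

    H, V, D are in cm, A (asymmetry angle) in degrees; result is in kg.
    FM and CM default to illustrative placeholders, not table values.
    """
    HM = min(1.0, 25.0 / H)              # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)     # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / D)        # (vertical travel) distance multiplier
    AM = 1.0 - 0.0032 * A                # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

rwl = recommended_weight_limit(H=40, V=60, D=50, A=30)  # hypothetical task
print(f"RWL = {rwl:.1f} kg; lifting index for a 15 kg load = {15 / rwl:.2f}")
```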
Array data extractor (ADE): a LabVIEW program to extract and merge gene array data
2013-01-01
Background Large data sets from gene expression array studies are publicly available offering information highly valuable for research across many disciplines ranging from fundamental to clinical research. Highly advanced bioinformatics tools have been made available to researchers, but a demand for user-friendly software allowing researchers to quickly extract expression information for multiple genes from multiple studies persists. Findings Here, we present a user-friendly LabVIEW program to automatically extract gene expression data for a list of genes from multiple normalized microarray datasets. Functionality was tested for 288 class A G protein-coupled receptors (GPCRs) and expression data from 12 studies comparing normal and diseased human hearts. Results confirmed known regulation of a beta 1 adrenergic receptor and further indicate novel research targets. Conclusions Although existing software allows for complex data analyses, the LabVIEW based program presented here, “Array Data Extractor (ADE)”, provides users with a tool to retrieve meaningful information from multiple normalized gene expression datasets in a fast and easy way. Further, the graphical programming language used in LabVIEW allows applying changes to the program without the need of advanced programming knowledge. PMID:24289243
Neuswanger, Jason R.; Wipfli, Mark S.; Rosenberger, Amanda E.; Hughes, Nicholas F.
2017-01-01
Applications of video in fisheries research range from simple biodiversity surveys to three-dimensional (3D) measurement of complex swimming, schooling, feeding, and territorial behaviors. However, researchers lack a transparently developed, easy-to-use, general purpose tool for 3D video measurement and event logging. Thus, we developed a new measurement system, with freely available, user-friendly software, easily obtained hardware, and flexible underlying mathematical methods capable of high precision and accuracy. The software, VidSync, allows users to efficiently record, organize, and navigate complex 2D or 3D measurements of fish and their physical habitats. Laboratory tests showed submillimetre accuracy in length measurements of 50.8 mm targets at close range, with increasing errors (mostly <1%) at longer range and for longer targets. A field test on juvenile Chinook salmon (Oncorhynchus tshawytscha) feeding behavior in Alaska streams found that individuals within aggregations avoided the immediate proximity of their competitors, out to a distance of 1.0 to 2.9 body lengths. This system makes 3D video measurement a practical tool for laboratory and field studies of aquatic or terrestrial animal behavior and ecology.
Development of a site analysis tool for distributed wind projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Shawn
The Cadmus Group, Inc., in collaboration with the National Renewable Energy Laboratory (NREL) and Encraft, was awarded a grant from the Department of Energy (DOE) to develop a site analysis tool for distributed wind technologies. As the principal investigator for this project, Mr. Shawn Shaw was responsible for overall project management, direction, and technical approach. The product resulting from this project is the Distributed Wind Site Analysis Tool (DSAT), a software tool for analyzing proposed sites for distributed wind technology (DWT) systems. This user-friendly tool supports the long-term growth and stability of the DWT market by providing reliable, realistic estimates of site and system energy output and feasibility. DSAT, which is accessible online and requires no purchase or download of software, is available in two account types. Standard: this free account allows the user to analyze a limited number of sites and to produce a system performance report for each. Professional: for a small annual fee, users can analyze an unlimited number of sites, produce system performance reports, and generate other customizable reports containing key information such as visual influence and wind resources. The tool's interactive maps allow users to create site models that incorporate the obstructions and terrain types present. Users can generate site reports immediately after entering the requisite site information. Ideally, this tool also educates users regarding good site selection and effective evaluation practices.
Remote Data Exploration with the Interactive Data Language (IDL)
NASA Technical Reports Server (NTRS)
Galloy, Michael
2013-01-01
A difficulty for many NASA researchers is that often the data to analyze is located remotely from the scientist and the data is too large to transfer for local analysis. Researchers have developed the Data Access Protocol (DAP) for accessing remote data. Presently one can use DAP from within IDL, but the IDL-DAP interface is both limited and cumbersome. A more powerful and user-friendly interface to DAP for IDL has been developed. Users are able to browse remote data sets graphically, select partial data to retrieve, import that data and make customized plots, and have an interactive IDL command line session simultaneous with the remote visualization. All of these IDL-DAP tools are usable easily and seamlessly for any IDL user. IDL and DAP are both widely used in science, but were not easily used together. The IDL DAP bindings were incomplete and had numerous bugs that prevented their serious use. For example, the existing bindings did not read DAP Grid data, which is the organization of nearly all NASA datasets currently served via DAP. This project uniquely provides a fully featured, user-friendly interface to DAP from IDL, both from the command line and a GUI application. The DAP Explorer GUI application makes browsing a dataset more user-friendly, while also providing the capability to run user-defined functions on specified data. Methods for running remote functions on the DAP server were investigated, and a technique for accomplishing this task was decided upon.
SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects
NASA Technical Reports Server (NTRS)
Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M
1998-01-01
SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.
Interactive Voice/Web Response System in clinical research
Ruikar, Vrishabhsagar
2016-01-01
Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is a user-friendly system for end users, with complex, tailored programs at its backend. The backend programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over time. Over the past couple of decades, different systems have evolved toward emerging modern technologies and tools, for example Electronic Data Capture, IxRS, and electronic patient-reported outcomes. PMID:26952178
Graphics Software For VT Terminals
NASA Technical Reports Server (NTRS)
Wang, Caroline
1991-01-01
VTGRAPH graphics software tool for DEC/VT computer terminal or terminals compatible with it, widely used by government and industry. Callable in FORTRAN or C language, library program enabling user to cope with many computer environments in which VT terminals used for window management and graphic systems. Provides PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. User can easily design more-friendly user-interface programs and design PLOT10 programs on VT terminals with different computer systems. Requires ReGis graphics set terminal and FORTRAN compiler.
E-TALEN: a web tool to design TALENs for genome engineering.
Heigwer, Florian; Kerr, Grainne; Walther, Nike; Glaeser, Kathrin; Pelz, Oliver; Breinig, Marco; Boutros, Michael
2013-11-01
Use of transcription activator-like effector nucleases (TALENs) is a promising new technique in the field of targeted genome engineering, editing and reverse genetics. Its applications span from introducing knockout mutations to endogenous tagging of proteins and targeted excision repair. Owing to this wide range of possible applications, there is a need for fast and user-friendly TALEN design tools. We developed E-TALEN (http://www.e-talen.org), a web-based tool to design TALENs for experiments of varying scale. E-TALEN enables the design of TALENs against a single target or a large number of target genes. We significantly extended previously published design concepts to consider genomic context and different applications. E-TALEN guides the user through an end-to-end design process of de novo TALEN pairs, which are specific to a certain sequence or genomic locus. Furthermore, E-TALEN offers a functionality to predict targeting and specificity for existing TALENs. Owing to the computational complexity of many of the steps in the design of TALENs, particular emphasis has been put on the implementation of fast yet accurate algorithms. We implemented a user-friendly interface, from the input parameters to the presentation of results. An additional feature of E-TALEN is the in-built sequence and annotation database available for many organisms, including human, mouse, zebrafish, Drosophila and Arabidopsis, which can be extended in the future.
An innovative SNP genotyping method adapting to multiple platforms and throughputs
USDA-ARS?s Scientific Manuscript database
Single nucleotide polymorphisms (SNPs) are highly abundant, distributed throughout the genome in various species, and therefore they are widely used as genetic markers. However, the usefulness of this genetic tool relies heavily on the availability of user-friendly SNP genotyping methods. We have d...
Rough flows and homogenization in stochastic turbulence
NASA Astrophysics Data System (ADS)
Bailleul, I.; Catellier, R.
2017-10-01
We provide in this work a tool-kit for the study of homogenisation of random ordinary differential equations, in the form of a user-friendly black box based on the technology of rough flows. We illustrate the use of this setting on the example of stochastic turbulence.
Creating a user friendly GIS tool to define functional process zones
The goal of this research is to develop methods and indicators that are useful for evaluating the condition of aquatic communities, for assessing the restoration of aquatic communities in response to mitigation and best management practices, and for determining the exposure of aq...
URPD: a specific product primer design tool
2012-01-01
Background Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present a software tool that is aimed not at analyzing large sequences but at providing a straightforward way of visualizing the primer design process for infrequent users. Findings URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis with virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experiential PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312
MetaNET--a web-accessible interactive platform for biological metabolic network analysis.
Narang, Pankaj; Khan, Shawez; Hemrom, Anmol Jaywant; Lynn, Andrew Michael
2014-01-01
Metabolic reactions have been extensively studied and compiled over the last century. These have provided a theoretical base to implement models, simulations of which are used to identify drug targets and optimize metabolic throughput at a systemic level. While tools for the perturbation of metabolic networks are available, their applications are limited and restricted as they require varied dependencies and often a commercial platform for full functionality. We have developed MetaNET, an open-source, user-friendly, platform-independent and web-accessible resource consisting of several pre-defined workflows for metabolic network analysis. MetaNET is a web-accessible platform that incorporates a range of functions which can be combined to produce different simulations related to metabolic networks. These include (i) optimization of an objective function for the wild-type strain and gene/catalyst/reaction knock-out/knock-down analysis using flux balance analysis; (ii) flux variability analysis; (iii) chemical species participation; (iv) identification of cycles and extreme paths; and (v) choke-point reaction analysis to facilitate identification of potential drug targets. The platform is built using custom scripts along with the open-source Galaxy workflow system and the Systems Biology Research Tool as components. Pre-defined workflows are available for common processes, and an exhaustive list of over 50 functions is provided for user-defined workflows. MetaNET, available at http://metanet.osdd.net, provides a user-friendly, rich interface allowing the analysis of genome-scale metabolic networks under various genetic and environmental conditions. The framework permits the storage of previous results, the ability to repeat analyses and share results with other users over the internet, as well as running different tools simultaneously using pre-defined workflows and user-created custom workflows.
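For readers unfamiliar with the flux balance analysis mentioned above, the linear program at its core can be sketched on a toy network; the stoichiometric matrix, flux bounds and reaction names below are hypothetical and are not MetaNET's workflows or data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis on a hypothetical 3-metabolite, 4-reaction network.
# Columns: R1 (uptake), R2 (A->B), R3 (B->C), R4 (export of C, the objective).
S = np.array([
    [ 1, -1,  0,  0],   # metabolite A
    [ 0,  1, -1,  0],   # metabolite B
    [ 0,  0,  1, -1],   # metabolite C
])
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]  # flux bounds (arbitrary units)
c = np.zeros(4)
c[3] = -1.0   # maximize R4 by minimizing -R4

# Steady state S.v = 0, maximize the objective flux within the bounds.
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)   # expected: all fluxes equal the uptake limit, 10
```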
URPD: a specific product primer design tool.
Chuang, Li-Yeh; Cheng, Yu-Huei; Yang, Cheng-Hong
2012-06-19
Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present a software tool that is aimed not at analyzing large sequences but at providing a straightforward way of visualizing the primer design process for infrequent users. URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis with virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experiential PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/.
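As a rough illustration of the kind of primer properties that such design parameters govern, the snippet below computes GC content and a Wallace-rule melting temperature estimate for a candidate primer; the sequence is invented, and this is not URPD's MA/GA design procedure.

```python
def primer_properties(primer):
    """GC fraction and Wallace-rule Tm estimate (2(A+T) + 4(G+C), in degrees C).

    A crude screening heuristic for short oligos, not URPD's algorithm.
    """
    primer = primer.upper()
    gc = sum(primer.count(b) for b in "GC")
    at = sum(primer.count(b) for b in "AT")
    return gc / len(primer), 2 * at + 4 * gc

gc_frac, tm = primer_properties("ATGCGTACGTTAGCCTAG")  # hypothetical primer
print(f"GC = {gc_frac:.0%}, Tm (Wallace rule) ~ {tm} C")
```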
Panatto, Donatella; Domnich, Alexander; Gasparini, Roberto; Bonanni, Paolo; Icardi, Giancarlo; Amicizia, Daniela; Arata, Lucia; Bragazzi, Nicola Luigi; Signori, Alessio; Landa, Paolo; Bechini, Angela; Boccalini, Sara
2016-04-02
Given the growing use and great potential of mobile apps, this project aimed to develop and implement a user-friendly app to increase laypeople's knowledge and awareness of invasive pneumococcal disease (IPD). Despite the heavy burden of IPD, the documented low awareness of IPD among both laypeople and healthcare professionals and far from optimal pneumococcal vaccination coverage, no app specifically targeting IPD has been developed so far. The app was designed to be maximally functional and conceived in accordance with user-centered design. Its content, layout and usability were discussed and formally tested during several workshops that involved the principal stakeholders, including experts in IPD and information technology and potential end-users. Following several workshops, it was decided that, in order to make the app more interactive, its core should be a personal "checker" of the risk of contracting IPD and a user-friendly risk-communication strategy. The checker was populated with risk factors identified through both Italian and international official guidelines. Formal evaluation of the app revealed its good readability and usability properties. A sister web site with the same content was created to achieve higher population exposure. Seven months after being launched in a price- and registration-free modality, the app, named "Pneumo Rischio," averaged 20.9 new users/day and 1.3 sessions/user. The first in-field results suggest that "Pneumo Rischio" is a promising tool for increasing the population's awareness of IPD and its prevention through a user-friendly risk checker.
Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.
Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey
2011-07-14
DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for reconstruction and comparison of biological networks.
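The core idea of testing for a differentially associated pair can be sketched with one standard choice of metric and test, Pearson correlation compared across phenotypes via Fisher's z-transform; this is a generic illustration and not necessarily the exact combination DAPfinder applies. The simulated expression values in the usage lines are invented.

```python
import numpy as np
from scipy import stats

def differential_association(x1, y1, x2, y2):
    """Test whether the correlation of two genes differs between two phenotypes.

    Pearson correlation with Fisher's z-transform; one common choice among the
    association metrics and tests such tools offer.
    """
    r1, _ = stats.pearsonr(x1, y1)
    r2, _ = stats.pearsonr(x2, y2)
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (len(x1) - 3) + 1.0 / (len(x2) - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return r1, r2, p

# Hypothetical data: the pair is correlated in phenotype 1 but not in phenotype 2.
rng = np.random.default_rng(1)
n = 30
x1 = rng.normal(size=n); y1 = 0.9 * x1 + 0.2 * rng.normal(size=n)
x2 = rng.normal(size=n); y2 = rng.normal(size=n)
print(differential_association(x1, y1, x2, y2))
```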
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
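The ABC logic underlying such tools can be conveyed with a minimal rejection sampler on a deliberately simple (non-coalescent) model; the prior, tolerance and data below are illustrative assumptions rather than anything from BaySICS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal ABC rejection sampler: estimate the mean theta of a Gaussian from an
# observed summary statistic (the sample mean), accepting prior draws whose
# simulated statistic lands within a tolerance of the observed one.
observed = rng.normal(2.0, 1.0, size=50)   # made-up "data"
s_obs = observed.mean()

def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n).mean()

accepted = []
for _ in range(100_000):
    theta = rng.uniform(-5, 5)               # draw from a flat prior
    if abs(simulate(theta) - s_obs) < 0.05:  # tolerance epsilon
        accepted.append(theta)

print("posterior mean ~", np.mean(accepted), "from", len(accepted), "accepted draws")
```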
NASA Astrophysics Data System (ADS)
Plait, Philip
2008-05-01
Social networks are websites (or software that distributes media online) where users can distribute content to either a list of friends on that site or to anyone who surfs onto their page, and where those friends can interact and discuss the content. By linking to friends online, the users’ personal content (pictures, songs, favorite movies, diaries, websites, and so on) is dynamically distributed, and can "become viral", that is, get spread rapidly as more people see it and spread it themselves. Social networks are immensely popular around the planet, especially with younger users. The biggest social networks are Facebook and MySpace; an IYA2009 user already exists on Facebook, and one will be created for MySpace (in fact, several NASA satellites such as GLAST and Swift already have successful MySpace pages). Twitter is another network where data distribution is more limited; it is more like a mini-blog, but is very popular. IYA2009 already has a Twitter page, and will be updated more often with relevant information. In this talk I will review the existing social networks, show people how and why they are useful, and give them the tools they need to contribute meaningfully to IYA's online reach.
A user friendly system for ultrasound carotid intima-media thickness image interpretation
NASA Astrophysics Data System (ADS)
Zhu, Xiangjun; Kendall, Christopher B.; Hurst, R. Todd; Liang, Jianming
2011-03-01
Assessment of Carotid Intima-Media Thickness (CIMT) by B-mode ultrasound is a technically mature and reproducible technology. Given the high morbidity, mortality and the large societal burden associated with CV diseases, as a safe yet inexpensive tool, CIMT is increasingly utilized for cardiovascular (CV) risk stratification. However, CIMT requires a precise measure of the thickness of the intima and media layers of the carotid artery that can be tedious, time consuming, and demand specialized expertise and experience. To this end, we have developed a highly user-friendly system for semiautomatic CIMT image interpretation. Our contribution is the application of active contour models (snake models) with hard constraints, leading to an accurate, adaptive and user-friendly border detection algorithm. A comparison study with the CIMT measurement software in the Siemens Syngo® Arterial Health Package shows that our system gives a small bias in mean (0.049 ± 0.051 mm) and maximum (0.010 ± 0.083 mm) CIMT measures and offers a higher reproducibility (average correlation coefficients were 0.948 and 0.844 in mean and maximum CIMT, respectively; P < 0.001). This superior performance is attributed to our novel interface design for hard constraints in the snake models.
Systematic Propulsion Optimization Tools (SPOT)
NASA Technical Reports Server (NTRS)
Bower, Mark; Celestian, John
1992-01-01
This paper describes a computer program written by senior-level Mechanical Engineering students at the University of Alabama in Huntsville which is capable of optimizing user-defined delivery systems for carrying payloads into orbit. The custom propulsion system is designed by the user through the input of configuration, payload, and orbital parameters. The primary advantages of the software, called Systematic Propulsion Optimization Tools (SPOT), are a user-friendly interface and a modular FORTRAN 77 code designed for ease of modification. The optimization of variables in an orbital delivery system is of critical concern in the propulsion environment. The mass of the overall system must be minimized within the maximum stress, force, and pressure constraints. SPOT utilizes the Design Optimization Tools (DOT) program for the optimization techniques. The SPOT program is divided into a main program and five modules: aerodynamic losses, orbital parameters, liquid engines, solid engines, and nozzles. The program is designed to be upgraded easily and expanded to meet specific user needs. A user's manual and a programmer's manual are currently being developed to facilitate implementation and modification.
SimHap GUI: An intuitive graphical user interface for genetic association analysis
Carter, Kim W; McCaskie, Pamela A; Palmer, Lyle J
2008-01-01
Background Researchers wishing to conduct genetic association analysis involving single nucleotide polymorphisms (SNPs) or haplotypes are often confronted with the lack of user-friendly graphical analysis tools, requiring sophisticated statistical and informatics expertise to perform relatively straightforward tasks. Tools, such as the SimHap package for the R statistics language, provide the necessary statistical operations to conduct sophisticated genetic analysis, but lacks a graphical user interface that allows anyone but a professional statistician to effectively utilise the tool. Results We have developed SimHap GUI, a cross-platform integrated graphical analysis tool for conducting epidemiological, single SNP and haplotype-based association analysis. SimHap GUI features a novel workflow interface that guides the user through each logical step of the analysis process, making it accessible to both novice and advanced users. This tool provides a seamless interface to the SimHap R package, while providing enhanced functionality such as sophisticated data checking, automated data conversion, and real-time estimations of haplotype simulation progress. Conclusion SimHap GUI provides a novel, easy-to-use, cross-platform solution for conducting a range of genetic and non-genetic association analyses. This provides a free alternative to commercial statistics packages that is specifically designed for genetic association analysis. PMID:19109877
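A single-SNP association test of the kind such tools wrap behind a graphical interface can be sketched in a few lines; the genotype counts below are invented, and the genotypic chi-square shown is only one of the many analyses SimHap GUI supports (it is not its haplotype machinery).

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 genotype count table (cases vs controls x AA/Aa/aa) for one SNP.
table = [[120, 230, 150],   # cases
         [180, 240, 100]]   # controls

# Basic genotypic association test: does genotype distribution differ by status?
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```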
Changing State Digital Libraries
ERIC Educational Resources Information Center
Pappas, Marjorie L.
2006-01-01
Research has shown that state virtual or digital libraries are evolving into websites that are loaded with free resources, subscription databases, and instructional tools. In this article, the author explores these evolving libraries based on the following questions: (1) How user-friendly are the state digital libraries?; (2) How do state digital…
NASA Astrophysics Data System (ADS)
Ivanov, Stanislav; Kamzolkin, Vladimir; Konilov, Aleksandr; Aleshin, Igor
2014-05-01
There are many methods of assessing the conditions of rock formation based on determining the composition of the constituent minerals. Our objective was to create a universal tool for processing the results of mineral chemical analyses and solving geothermobarometry problems by creating a database of existing sensors and providing a user-friendly standard interface. Similar computer-assisted tools based upon large collections of sensors (geothermometers and geobarometers) are known, for example the project TPF (Konilov A.N., 1999), a text-based sensor collection tool written in PASCAL. That application contained more than 350 different sensors and has been used widely in petrochemical studies (see A.N. Konilov, A.A. Grafchikov, V.I. Fonarev 2010 for a review). Our prototype uses the TPF project concept and is designed with modern application development techniques, which allow better flexibility. The main components of the designed system are three connected datasets: the sensor collection (geothermometers, geobarometers, oxygen geobarometers, etc.), petrochemical data and modeling results. All data are maintained by special management and visualization tools and reside in an SQL database. System utilities allow the user to import and export data in various file formats, edit records and plot graphs. The sensor database contains up-to-date collections of known methods, and new sensors may be added by the user. The measurement database is filled in by the researcher. The user-friendly interface gives access to all available data and sensors, automates routine work, reduces the risk of common user mistakes and simplifies information exchange between research groups. We used the prototype to evaluate peak pressure during the formation of garnet-amphibolite apoeclogites, gneisses and schists of the Blybsky metamorphic complex of the Front Range of the Northern Caucasus. In particular, our estimate of the formation pressure range (18 ± 4 kbar) agrees with independent research results. The reported study was partially supported by RFBR, research project No. 14-05-00615.
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages) and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. Availability: http://www.cemb.edu.pk/sw.html Abbreviations: RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language. PMID:23055611
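The Nussinov recursion the tool builds on maximizes the number of complementary base pairs by dynamic programming; the sketch below shows the basic fill with a minimum-loop constraint and does not reproduce RDNAnalyzer's extensions or its traceback of the actual structure.

```python
def nussinov_max_pairs(seq, min_loop=3):
    """Basic Nussinov DP: maximum number of Watson-Crick base pairs in a DNA string.

    dp[i][j] holds the best count for the subsequence seq[i..j]; pairs closer
    than min_loop unpaired bases are disallowed.
    """
    pairs = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):         # try pairing k with j
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov_max_pairs("GGGAAATCC"))   # toy sequence; prints 2
```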
A user-friendly tool for incremental haemodialysis prescription.
Casino, Francesco Gaetano; Basile, Carlo
2018-01-05
There is a recently heightened interest in incremental haemodialysis (IHD), the main advantage of which could likely be a better preservation of the residual kidney function of the patients. The implementation of IHD, however, is hindered by many factors, among them, the mathematical complexity of its prescription. The aim of our study was to design a user-friendly tool for IHD prescription, consisting of only a few rows of a common spreadsheet. The keystone of our spreadsheet was the following fundamental concept: the dialysis dose to be prescribed in IHD depends only on the normalized urea clearance provided by the native kidneys (KRUn) of the patient for each frequency of treatment, according to the variable target model recently proposed by Casino and Basile (The variable target model: a paradigm shift in the incremental haemodialysis prescription. Nephrol Dial Transplant 2017; 32: 182-190). The first step was to put in sequence a series of equations in order to calculate, firstly, KRUn and, then, the key parameters to be prescribed for an adequate IHD; the second step was to compare KRUn values obtained with our spreadsheet with KRUn values obtainable with the gold standard Solute-solver (Daugirdas JT et al., Solute-solver: a web-based tool for modeling urea kinetics for a broad range of hemodialysis schedules in multiple patients. Am J Kidney Dis 2009; 54: 798-809) in a sample of 40 incident haemodialysis patients. Our spreadsheet provided excellent results. The differences with Solute-solver were clinically negligible. This was confirmed by the Bland-Altman plot built to analyse the agreement between KRUn values obtained with the two methods: the difference was 0.07 ± 0.05 mL/min/35 L. Our spreadsheet is a user-friendly tool able to provide clinically acceptable results in IHD prescription. Two immediate consequences could derive: (i) a larger dissemination of IHD might occur; and (ii) our spreadsheet could represent a useful tool for an ineludibly needed full-fledged clinical trial, comparing IHD with standard thrice-weekly HD. © The Author(s) 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
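The agreement analysis mentioned above can be reproduced generically: a Bland-Altman summary is just the mean difference between paired measurements and its 95% limits of agreement. The sketch below uses made-up KRUn values purely to show the calculation; it is not the spreadsheet or Solute-solver itself.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical KRUn values (mL/min/35 L) from the two methods for four patients.
spreadsheet   = [2.10, 1.85, 3.02, 2.40]
solute_solver = [2.05, 1.80, 2.95, 2.33]
bias, limits = bland_altman(spreadsheet, solute_solver)
print(f"bias = {bias:.3f}, limits of agreement = ({limits[0]:.3f}, {limits[1]:.3f})")
```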
Ebels, Kelly; Faulx, Dunia; Gerth-Guyette, Emily; Murunga, Peninah; Mahapatro, Samarendra; Das, Manoja Kumar; Ginsburg, Amy Sarah
2016-01-01
Pneumonia is the leading cause of death from infection in children worldwide. Despite global treatment recommendations that call for children with pneumonia to receive amoxicillin dispersible tablets, only one-third of children with pneumonia receive any antibiotics and many do not complete the full course of treatment. Poor adherence to antibiotics may be driven in part by a lack of user-friendly treatment instructions. In order to optimise childhood pneumonia treatment adherence at the community level, we developed a user-friendly product presentation for caregivers and a job aid for healthcare providers (HCPs). This paper aims to document the development process and offers a model for future health communication tools. We employed an iterative design process that included document review, key stakeholder interviews, engagement with a graphic designer and pre-testing design concepts among target users in India and Kenya. The consolidated criteria for reporting qualitative research were used in the description of results. Though resources for pneumonia treatment are available in some countries, their content is incomplete and inconsistent with global recommendations. Document review and stakeholder interviews provided the information necessary to convey to caregivers and recommendations for how to present this information. Target users in India and Kenya confirmed the need to support better treatment adherence, recommended specific modifications to design concepts and suggested the development of a companion job aid. There was a consensus among caregivers and HCPs that these tools would be helpful and improve adherence behaviours. The development of user-friendly instructions for medications for use in low-resource settings is a critically important but time-intensive and resource-intensive process that should involve engagement with target audiences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana
2015-06-30
Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org . A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.
Biomedical image analysis and processing in clouds
NASA Astrophysics Data System (ADS)
Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John
2013-10-01
Cloud-based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and allows access to biomedical image processing and analysis services to researchers via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.
Goñi-Moreno, Ángel; Kim, Juhyun; de Lorenzo, Víctor
2017-02-01
Visualization of the intracellular constituents of individual bacteria while performing as live biocatalysts is in principle doable through more or less sophisticated fluorescence microscopy. Unfortunately, rigorous quantitation of the wealth of data embodied in the resulting images requires bioinformatic tools that are not widely available within the community, and those that exist are often subject to licensing that impedes software reuse. In this context we have developed CellShape, a user-friendly platform for image analysis with subpixel precision and a double-threshold segmentation system for quantification of fluorescent signals stemming from single cells. CellShape is entirely coded in Python, a free, open-source programming language with widespread community support. For a developer, CellShape enhances extensibility (ease of software improvements) by acting as an interface to access and use existing Python modules; for an end-user, CellShape presents standalone executable files ready to open without installation. We have adopted this platform to analyse in unprecedented detail the tridimensional distribution of the constituents of the gene expression flow (DNA, RNA polymerase, mRNA and ribosomal proteins) in individual cells of the industrial platform strain Pseudomonas putida KT2440. While the first release of CellShape (v0.8) is readily operational, users and/or developers are enabled to expand the platform further. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
SimArray: a user-friendly and user-configurable microarray design tool
Auburn, Richard P; Russell, Roslin R; Fischer, Bettina; Meadows, Lisa A; Sevillano Matilla, Santiago; Russell, Steven
2006-01-01
Background Microarrays were first developed to assess gene expression but are now also used to map protein-binding sites and to assess allelic variation between individuals. Regardless of the intended application, efficient production and appropriate array design are key determinants of experimental success. Inefficient production can make larger-scale studies prohibitively expensive, whereas poor array design makes normalisation and data analysis problematic. Results We have developed a user-friendly tool, SimArray, which generates a randomised spot layout, computes a maximum meta-grid area, and estimates the print time, in response to user-specified design decisions. Selected parameters include: the number of probes to be printed; the microtitre plate format; the printing pin configuration, and the achievable spot density. SimArray is compatible with all current robotic spotters that employ 96-, 384- or 1536-well microtitre plates, and can be configured to reflect most production environments. Print time and maximum meta-grid area estimates facilitate evaluation of each array design for its suitability. Randomisation of the spot layout facilitates correction of systematic biases by normalisation. Conclusion SimArray is intended to help both established researchers and those new to the microarray field to develop microarray designs with randomised spot layouts that are compatible with their specific production environment. SimArray is an open-source program and is freely available. PMID:16509966
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems occurring due to drifting image acquisition conditions, background noise and high variation in colony features in experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB) that implements a supervised automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm taking segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation method. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
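A common baseline for the thresholding-plus-watershed segmentation this tool automates is Otsu thresholding followed by a distance-transform watershed, as sketched below with scikit-image on a synthetic image of two touching objects. This is a generic recipe, not AutoCellSeg's supervised, feedback-based MATLAB method, and the parameter values and test image are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

def segment_objects(image, min_distance=10):
    """Otsu threshold, then a distance-transform watershed to split touching objects."""
    mask = image > filters.threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return segmentation.watershed(-distance, markers, mask=mask)

# Synthetic test image: two overlapping bright disks on a dark background.
yy, xx = np.mgrid[0:120, 0:120]
image = (((yy - 55) ** 2 + (xx - 45) ** 2 < 30 ** 2) |
         ((yy - 65) ** 2 + (xx - 80) ** 2 < 28 ** 2)).astype(float)
labels = segment_objects(image)
print("objects found:", labels.max())   # expected: 2
```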
D-MATRIX: A web tool for constructing weight matrix of conserved DNA motifs
Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok
2009-01-01
Despite considerable efforts to date, DNA motif prediction in whole genome remains a challenge for researchers. Currently the genome wide motif prediction tools required either direct pattern sequence (for single motif) or weight matrix (for multiple motifs). Although there are known motif pattern databases and tools for genome level prediction but no tool for weight matrix construction. Considering this, we developed a D-MATRIX tool which predicts the different types of weight matrix based on user defined aligned motif sequence set and motif width. For retrieval of known motif sequences user can access the commonly used databases such as TFD, RegulonDB, DBTBS, Transfac. D-MATRIX program uses a simple statistical approach for weight matrix construction, which can be converted into different file formats according to user requirement. It provides the possibility to identify the conserved motifs in the co-regulated genes or whole genome. As example, we successfully constructed the weight matrix of LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in Deinococcus radiodurans genome. The algorithm is implemented in C-Sharp and wrapped in ASP.Net to maintain a user friendly web interface. D-MATRIX tool is accessible through the CIMAP domain network. Availability: http://203.190.147.116/dmatrix/ PMID:19759861
D-MATRIX: a web tool for constructing weight matrix of conserved DNA motifs.
Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok
2009-07-27
Despite considerable efforts to date, DNA motif prediction in whole genome remains a challenge for researchers. Currently the genome wide motif prediction tools required either direct pattern sequence (for single motif) or weight matrix (for multiple motifs). Although there are known motif pattern databases and tools for genome level prediction but no tool for weight matrix construction. Considering this, we developed a D-MATRIX tool which predicts the different types of weight matrix based on user defined aligned motif sequence set and motif width. For retrieval of known motif sequences user can access the commonly used databases such as TFD, RegulonDB, DBTBS, Transfac. D-MATRIX program uses a simple statistical approach for weight matrix construction, which can be converted into different file formats according to user requirement. It provides the possibility to identify the conserved motifs in the co-regulated genes or whole genome. As example, we successfully constructed the weight matrix of LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in Deinococcus radiodurans genome. The algorithm is implemented in C-Sharp and wrapped in ASP.Net to maintain a user friendly web interface. D-MATRIX tool is accessible through the CIMAP domain network. http://203.190.147.116/dmatrix/
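For readers who want to see what such a weight matrix looks like, the sketch below builds a simple log-odds position weight matrix from a pair of aligned sites with pseudocounts and a uniform background; the aligned sequences are invented stand-ins, and D-MATRIX's exact statistical scheme and output formats are not reproduced.

```python
import math
from collections import Counter

def weight_matrix(sites, pseudocount=0.5, background=0.25):
    """Log-odds position weight matrix from aligned binding sites (generic construction)."""
    bases = "ACGT"
    width = len(sites[0])
    matrix = []
    for pos in range(width):
        counts = Counter(site[pos] for site in sites)
        total = len(sites) + 4 * pseudocount
        row = {b: math.log2(((counts[b] + pseudocount) / total) / background)
               for b in bases}
        matrix.append(row)
    return matrix

# Made-up aligned sites standing in for a motif alignment.
aligned_sites = ["CTGTATATATATACAG", "CTGTATAAATATACAG"]
for pos, row in enumerate(weight_matrix(aligned_sites)):
    print(pos, {b: round(v, 2) for b, v in row.items()})
```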
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Asfand-e-Yar, Mockford, Steve; Khan, Wasiq; Awan, Irfan
2012-11-01
Mobile technology is among the fastest growing technologies in today's world, offering low cost and highly effective benefits. The most important and entertaining areas of mobile technology development and usage are location-based services, user-friendly networked applications and gaming applications. However, attention to network operator service provision and improvement has been very low. Portable applications, available for a range of mobile operating systems, that help improve network operator services are therefore desirable to mobile operators. This paper presents a state-of-the-art mobile application, Tracesaver, which overcomes the barriers to gathering device- and network-related information that network operators need in order to improve their service provision. Tracesaver is available for a broad range of mobile devices with different mobile operating systems and computational capabilities. Its adoption has proliferated in the year since it was published. The survey and results show that Tracesaver is being used by millions of mobile users and provides novel ways of improving network service with its highly user-friendly interface.
The DREO Elint Browser Utility (DEBU) reference manual
NASA Astrophysics Data System (ADS)
Ford, Barbara; Jones, David
1992-04-01
An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed from a user-friendly environment on a personal computer. DEBU's basic function is to allow users to examine the contents of user-selected subfiles of user-selected emitters of user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for doing ambiguity analysis and mode-level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.
BRepertoire: a user-friendly web server for analysing antibody repertoire data.
Margreitter, Christian; Lu, Hui-Chun; Townsend, Catherine; Stewart, Alexander; Dunn-Walters, Deborah K; Fraternali, Franca
2018-04-14
Antibody repertoire analysis by high throughput sequencing is now widely used, but a persisting challenge is enabling immunologists to explore their data to discover discriminating repertoire features for their own particular investigations. Computational methods are necessary for large-scale evaluation of antibody properties. We have developed BRepertoire, a suite of user-friendly web-based software tools for large-scale statistical analyses of repertoire data. The software is able to use data preprocessed by IMGT, and performs statistical and comparative analyses with versatile plotting options. BRepertoire has been designed to operate in various modes, for example analysing sequence-specific V(D)J gene usage, discerning physico-chemical properties of the CDR regions and clustering of clonotypes. Those analyses are performed on the fly by a number of R packages and are deployed by a shiny web platform. The user can download the analysed data in different table formats and save the generated plots as image files ready for publication. We believe BRepertoire to be a versatile analytical tool that complements experimental studies of immune repertoires. To illustrate the server's functionality, we show use cases including differential gene usage in a vaccination dataset and analysis of CDR3H properties in old and young individuals. The server is accessible under http://mabra.biomed.kcl.ac.uk/BRepertoire.
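As a minimal flavour of the differential gene usage analysis mentioned in the use cases, the snippet below compares the frequency of one V gene between two toy repertoires with Fisher's exact test; the gene assignments and counts are invented, and BRepertoire's own statistics and plotting are considerably richer.

```python
from collections import Counter
from scipy.stats import fisher_exact

# Invented V gene calls for two small repertoires (e.g. pre- vs post-vaccination).
pre  = ["IGHV1-69", "IGHV3-23", "IGHV3-23", "IGHV4-34", "IGHV3-23"]
post = ["IGHV1-69", "IGHV1-69", "IGHV1-69", "IGHV3-23", "IGHV4-34"]

gene = "IGHV1-69"
c_pre, c_post = Counter(pre), Counter(post)
# 2x2 table: sequences using the gene vs not, in each repertoire.
table = [[c_pre[gene], len(pre) - c_pre[gene]],
         [c_post[gene], len(post) - c_post[gene]]]
odds, p = fisher_exact(table)
print(f"{gene}: odds ratio {odds:.2f}, p = {p:.3f}")
```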
Making Big Data, Safe Data: A Test Optimization Approach
2016-06-15
catalyzed by the need to put a value on testing. Included with this project report is a proof of concept created in MS Excel utilizing its VBA ... Language. To make the proof of concept more user friendly, MS Excel was chosen for its convenient user interface and its developer tool, VBA. Another ... reason it was selected is that everyone has easy access to MS Excel, so the file accompanying this project paper can be easily viewed, used, and modified by
A survey of motif finding Web tools for detecting binding site motifs in ChIP-Seq data
2014-01-01
Abstract ChIP-Seq (chromatin immunoprecipitation sequencing) is advantageous for motif finding because ChIP-Seq experiments narrow the search down to binding site locations. Recent motif finding tools facilitate motif detection by providing user-friendly Web interfaces. In this work, we reviewed nine motif finding Web tools that are capable of detecting binding site motifs in ChIP-Seq data. We showed that each motif finding Web tool has its own advantages for detecting motifs that other tools may not discover. We recommend that users apply multiple motif finding Web tools implementing different algorithms in order to obtain significant motifs, overlapping similar motifs, and non-overlapping motifs. Finally, we provide our suggestions for the future development of motif finding Web tools that better assist researchers in finding motifs in ChIP-Seq data. Reviewers This article was reviewed by Prof. Sandor Pongor, Dr. Yuriy Gusev, and Dr. Shyam Prabhakar (nominated by Prof. Limsoon Wong). PMID:24555784
In Search of the Most Likely Value
ERIC Educational Resources Information Center
Letkowski, Jerzy
2014-01-01
Descriptive statistics provides methodology and tools for user-friendly presentation of random data. Among the summary measures that describe central tendencies in random data, the mode is given the least amount of attention and is frequently misinterpreted in many introductory textbooks on statistics. The purpose of the paper is to provide a…
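In code, the "most likely value" is simply the most frequent observation; Python's standard library handles multimodal data directly, as in this small illustrative example with made-up values.

```python
from statistics import multimode

# The mode as "the most likely value": the most frequent observation(s).
data = [2, 3, 3, 5, 3, 7, 5, 5]
print(multimode(data))   # [3, 5] - both appear three times, so the data are bimodal
```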
About JEDI | Jobs and Economic Development Impact Models | NREL
About JEDI: The Jobs and Economic Development Impact (JEDI) models are user-friendly screening tools that estimate the economic impacts of constructing and operating power plants and fuel production facilities. Using project-specific data or default values derived from industry norms, JEDI estimates the number of jobs and economic impacts to a local area that can result from such projects.
A Decision Support System for Predicting Students' Performance
ERIC Educational Resources Information Center
Livieris, Ioannis E.; Mikropoulos, Tassos A.; Pintelas, Panagiotis
2016-01-01
Educational data mining is an emerging research field concerned with developing methods for exploring the unique types of data that come from educational context. These data allow the educational stakeholders to discover new, interesting and valuable knowledge about students. In this paper, we present a new user-friendly decision support tool for…
The Case for the Perceived Social Competence Scale II
ERIC Educational Resources Information Center
Anderson-Butcher, Dawn; Amorose, Anthony J.; Lower, Leeann M.; Riley, Allison; Gibson, Allison; Ruch, Donna
2016-01-01
Objective: This study examines the psychometric properties of the revised Perceived Social Competence Scale (PSCS), a brief, user-friendly tool used to assess social competence among youth. Method: Confirmatory factor analyses (CFAs) examined the factor structure and invariance of an enhanced scale (PSCS-II), among a sample of 420 youth.…
USDA-ARS?s Scientific Manuscript database
One finding of the Conservation Effects Assessment Program (CEAP) watershed studies was that Best Management practices (BMPs) were not always installed where most needed: in many watersheds, only a fraction of BMPs were implemented in the most vulnerable areas. While complex computer simulation mode...
Dental Informatics tool "SOFPRO" for the study of oral submucous fibrosis.
Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K
2016-01-01
Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education and research have been developed. However, very few such applications have been developed and utilized in epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population in Asian countries. The aim was to design and develop user-friendly software for the descriptive epidemiological study of OSF. With the help of a software engineer, a computer program, SOFPRO, was designed and developed using Ms-Visual Basic 6.0 (VB), Ms-Access 2000, Crystal Report 7.0 and Ms-Paint on the Windows XP operating system. For analysis, the available OSF data from the departmental precancer registry were fed into SOFPRO. Known, not-known and null data are successfully accepted at data entry and represented in the data analysis of OSF. The smooth working of SOFPRO and its correct data flow were tested against real-time OSF data. SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management and analysis for OSF patients.
Hestand, Matthew S; van Galen, Michiel; Villerius, Michel P; van Ommen, Gert-Jan B; den Dunnen, Johan T; 't Hoen, Peter AC
2008-01-01
Background The identification of transcription factor binding sites is difficult since they are only a small number of nucleotides in size, resulting in large numbers of false positives and false negatives in current approaches. Computational methods to reduce false positives are to look for over-representation of transcription factor binding sites in a set of similarly regulated promoters or to look for conservation in orthologous promoter alignments. Results We have developed a novel tool, "CORE_TF" (Conserved and Over-REpresented Transcription Factor binding sites) that identifies common transcription factor binding sites in promoters of co-regulated genes. To improve upon existing binding site predictions, the tool searches for position weight matrices from the TRANSFACR database that are over-represented in an experimental set compared to a random set of promoters and identifies cross-species conservation of the predicted transcription factor binding sites. The algorithm has been evaluated with expression and chromatin-immunoprecipitation on microarray data. We also implement and demonstrate the importance of matching the random set of promoters to the experimental promoters by GC content, which is a unique feature of our tool. Conclusion The program CORE_TF is accessible in a user friendly web interface at . It provides a table of over-represented transcription factor binding sites in the users input genes' promoters and a graphical view of evolutionary conserved transcription factor binding sites. In our test data sets it successfully predicts target transcription factors and their binding sites. PMID:19036135
pROC: an open-source package for R and S+ to analyze and compare ROC curves.
Robin, Xavier; Turck, Natacha; Hainard, Alexandre; Tiberti, Natalia; Lisacek, Frédérique; Sanchez, Jean-Charles; Müller, Markus
2011-03-17
Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curves analysis we developed pROC, a package for R and S+ that contains a set of tools displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
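pROC itself is an R/S+ package; as a language-neutral illustration of the same style of analysis, the sketch below computes a ROC curve, its AUC and a crude bootstrap confidence interval in Python with scikit-learn. The data are simulated and this is not the pROC API (which additionally offers, for example, DeLong intervals and partial-AUC tests).

```python
# Minimal ROC/AUC sketch in Python (illustrative only; pROC's R API differs).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # hypothetical binary outcomes
score = y + rng.normal(0, 1.0, 200)               # hypothetical biomarker values

fpr, tpr, thresholds = roc_curve(y, score)
auc = roc_auc_score(y, score)

# Crude percentile-bootstrap CI for the AUC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(set(y[idx])) == 2:                     # need both classes in the resample
        boot.append(roc_auc_score(y[idx], score[idx]))
print(f"AUC = {auc:.3f}, 95% CI ~ ({np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f})")
```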
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.
A simple tool for stereological assessment of digital images: the STEPanizer.
Tschanz, S A; Burri, P H; Weibel, E R
2011-07-01
STEPanizer is an easy-to-use computer-based software tool for the stereological assessment of digitally captured images from all kinds of microscopical (LM, TEM, LSM) and macroscopical (radiology, tomography) imaging modalities. The program design focuses on providing the user a defined workflow adapted to most basic stereological tasks. The software is compact, that is user friendly without being bulky. STEPanizer comprises the creation of test systems, the appropriate display of digital images with superimposed test systems, a scaling facility, a counting module and an export function for the transfer of results to spreadsheet programs. Here we describe the major workflow of the tool illustrating the application on two examples from transmission electron microscopy and light microscopy, respectively. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
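To make the counting step concrete, the sketch below works through the basic point-counting estimator that such test systems support: the area (or volume) fraction of a structure is estimated by the fraction of test points hitting it. The counts are invented; STEPanizer itself only provides the test-system overlay and the bookkeeping.

```python
# Point-counting estimate of an area (volume) fraction, with a rough standard error.
# Counts are hypothetical; in practice they come from clicking points in the tool.
points_on_structure = 137     # test points hitting the structure of interest
points_on_reference = 600     # test points hitting the reference space

p = points_on_structure / points_on_reference          # A_A (or V_V) estimate
se = (p * (1 - p) / points_on_reference) ** 0.5        # rough binomial standard error
print(f"estimated fraction = {p:.3f} +/- {se:.3f}")
```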
FILTSoft: A computational tool for microstrip planar filter design
NASA Astrophysics Data System (ADS)
Elsayed, M. H.; Abidin, Z. Z.; Dahlan, S. H.; Cholan N., A.; Ngu, Xavier T. I.; Majid, H. A.
2017-09-01
Filters are key components of any communication system, used to control spectrum and suppress interference. Designing a filter involves a long process as well as a good understanding of the underlying hardware technology. Hence this paper introduces an automated design tool based on a Matlab GUI, called FILTSoft (an acronym for Filter Design Software), to ease the process. FILTSoft is a user-friendly filter design tool to aid, guide and expedite calculations from the lumped-element level to the microstrip structure. Users only have to provide the required filter specifications as well as the material description. FILTSoft will calculate and display the lumped-element details, the planar filter structure, and the filter's expected response. An example lowpass filter design was calculated using FILTSoft and the results were validated through prototype measurement for comparison purposes.
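As a sense of the lumped-element calculation such a tool automates, the sketch below evaluates the textbook Butterworth low-pass prototype values g_k = 2 sin((2k-1)π/2n) and scales them to a chosen cutoff frequency and system impedance; the subsequent microstrip synthesis step is not shown, and FILTSoft's internal workflow may differ.

```python
# Butterworth low-pass prototype values and impedance/frequency scaling
# (textbook formulas; FILTSoft continues from here to a microstrip layout).
import math

n, f_c, R0 = 5, 2.0e9, 50.0                 # order, cutoff frequency (Hz), system impedance (ohm)
w_c = 2 * math.pi * f_c
g = [2 * math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]

for k, gk in enumerate(g, start=1):
    if k % 2 == 1:                          # odd elements as shunt capacitors (one common convention)
        print(f"g{k}={gk:.4f}  C = {gk / (R0 * w_c) * 1e12:.3f} pF")
    else:                                   # even elements as series inductors
        print(f"g{k}={gk:.4f}  L = {gk * R0 / w_c * 1e9:.3f} nH")
```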
Wildlife in the cloud: a new approach for engaging stakeholders in wildlife management.
Chapron, Guillaume
2015-11-01
Research in wildlife management increasingly relies on quantitative population models. However, a remaining challenge is to have end-users, who are often alienated by mathematics, benefiting from this research. I propose a new approach, 'wildlife in the cloud,' to enable active learning by practitioners from cloud-based ecological models whose complexity remains invisible to the user. I argue that this concept carries the potential to overcome limitations of desktop-based software and allows new understandings of human-wildlife systems. This concept is illustrated by presenting an online decision-support tool for moose management in areas with predators in Sweden. The tool takes the form of a user-friendly cloud-app through which users can compare the effects of alternative management decisions, and may feed into adjustment of their hunting strategy. I explain how the dynamic nature of cloud-apps opens the door to different ways of learning, informed by ecological models that can benefit both users and researchers.
Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko
2012-12-22
Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
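The "wide table" transformation implemented here as Pig user-defined functions is conceptually a pivot of event-level rows into one row per subject; the pandas sketch below shows that idea on toy data. It is not GroupFilterFormat itself, and the column names are invented.

```python
# Long-to-wide pivot: one row per subject, one column per activity code (toy illustration).
import pandas as pd

events = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 2],
    "activity":   ["labtest", "drugA", "labtest", "drugA", "drugB"],
    "count":      [3, 1, 2, 4, 1],
})
wide = events.pivot_table(index="patient_id", columns="activity",
                          values="count", aggfunc="sum", fill_value=0)
print(wide)
```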
A JAVA-based multimedia tool for clinical practice guidelines.
Maojo, V; Herrero, C; Valenzuela, F; Crespo, J; Lazaro, P; Pazos, A
1997-01-01
We have developed a specific language for the representation of Clinical Practice Guidelines (CPGs) and Windows C++ and platform-independent JAVA applications for multimedia presentation and editing of electronically stored CPGs. This approach facilitates translation of guidelines and protocols from paper to computer-based flowchart representations. Users can navigate through the algorithm with a friendly user interface and access related multimedia information within the context of each clinical problem. CPGs can be stored on a computer server and distributed over the World Wide Web, facilitating dissemination, local adaptation, and use as a reference element in medical care. We have chosen the Agency for Health Care Policy and Research's heart failure guideline to demonstrate the capabilities of our tool.
NASA Astrophysics Data System (ADS)
Sargis, J. C.; Gray, W. A.
1999-03-01
The APWS allows user-friendly access to several legacy systems, each of which would normally demand domain expertise for proper utilization. The generalized model, including objects, classes, strategies and patterns, is presented. The core components of the APWS are the Microsoft Windows 95 operating system, Oracle, Oracle Power Objects, artificial intelligence tools, a medical hyperlibrary and a web site. The paper includes a discussion of how such access could be automated by taking advantage of the expert system, object-oriented programming and intelligent relational database tools within the APWS.
Toward better Alzheimer's research information sources for the public.
Payne, Perry W
2013-03-01
The National Plan to Address Alzheimer's Disease calls for a new relationship between researchers and members of the public. This relationship is one that provides research information to patients and allows patients to provide ideas to researchers. One way to describe it is a "bidirectional translational relationship." Despite the numerous sources of online and offline information about Alzheimer's disease, there is no information source which currently provides this interaction. This article proposes the creation of an Alzheimer's research information source dedicated to monitoring Alzheimer's research literature and providing user-friendly, publicly accessible summaries of data written specifically for a lay audience. This information source should contain comprehensive, updated, user-friendly, publicly available reviews of Alzheimer's research and utilize existing online multimedia/social networking tools to provide information in useful formats that help patients, caregivers, and researchers learn rapidly from one another.
STORMSeq: an open-source, user-friendly pipeline for processing personal genomics data in the cloud.
Karczewski, Konrad J; Fernald, Guy Haskin; Martin, Alicia R; Snyder, Michael; Tatonetti, Nicholas P; Dudley, Joel T
2014-01-01
The increasing public availability of personal complete genome sequencing data has ushered in an era of democratized genomics. However, read mapping and variant calling software is constantly improving and individuals with personal genomic data may prefer to customize and update their variant calls. Here, we describe STORMSeq (Scalable Tools for Open-Source Read Mapping), a graphical interface cloud computing solution that does not require a parallel computing environment or extensive technical experience. This customizable and modular system performs read mapping, read cleaning, and variant calling and annotation. At present, STORMSeq costs approximately $2 and 5-10 hours to process a full exome sequence and $30 and 3-8 days to process a whole genome sequence. We provide this open-access and open-source resource as a user-friendly interface in Amazon EC2.
Villard, Pierre; Malausa, Thibaut
2013-07-01
SP-Designer is an open-source program providing a user-friendly tool for the design of specific PCR primer pairs from a DNA sequence alignment containing sequences from various taxa. SP-Designer selects PCR primer pairs for the amplification of DNA from a target species on the basis of several criteria: (i) primer specificity, as assessed by interspecific sequence polymorphism in the annealing regions, (ii) the biochemical characteristics of the primers and (iii) the intended PCR conditions. SP-Designer generates tables, detailing the primer pair and PCR characteristics, and a FASTA file locating the primer sequences in the original sequence alignment. SP-Designer is Windows-compatible and freely available from http://www2.sophia.inra.fr/urih/sophia_mart/sp_designer/info_sp_designer.php. © 2013 John Wiley & Sons Ltd.
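Two of the "biochemical characteristics" such tools check are easy to state explicitly: GC content and a rough melting temperature, for example the Wallace rule Tm ≈ 2(A+T) + 4(G+C) for short oligos. The sketch below is a generic illustration of those two checks, not SP-Designer's scoring.

```python
# Simple primer checks: GC fraction and Wallace-rule melting temperature
# (illustrative only; SP-Designer applies its own, more complete criteria).
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc          # degrees C, valid only for short (~14-20 nt) primers

primer = "ATGGCTAGCTAGGACTCAG"      # hypothetical primer sequence
print(f"GC = {gc_content(primer):.0%}, Tm ~ {wallace_tm(primer):.0f} C")
```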
Gromita: a fully integrated graphical user interface to gromacs 4.
Sellis, Diamantis; Vlachakis, Dimitrios; Vlassi, Metaxia
2009-09-07
Gromita is a fully integrated and efficient graphical user interface (GUI) to the recently updated molecular dynamics suite Gromacs, version 4. Gromita is a cross-platform, perl/tcl-tk based, interactive front end designed to break the command line barrier and introduce a new user-friendly environment to run molecular dynamics simulations through Gromacs. Our GUI features a novel workflow interface that guides the user through each logical step of the molecular dynamics setup process, making it accessible to both advanced and novice users. This tool provides a seamless interface to the Gromacs package, while providing enhanced functionality by speeding up and simplifying the task of setting up molecular dynamics simulations of biological systems. Gromita can be freely downloaded from http://bio.demokritos.gr/gromita/.
TmoleX--a graphical user interface for TURBOMOLE.
Steffen, Claudia; Thomas, Klaus; Huniar, Uwe; Hellweg, Arnim; Rubner, Oliver; Schroer, Alexander
2010-12-01
We herein present the graphical user interface (GUI) TmoleX for the quantum chemical program package TURBOMOLE. TmoleX allows users to execute the complete workflow of a quantum chemical investigation from the initial building of a structure to the visualization of the results in a user friendly graphical front end. The purpose of TmoleX is to make TURBOMOLE easy to use and to provide a high degree of flexibility. Hence, it should be a valuable tool for most users from beginners to experts. The program is developed in Java and runs on Linux, Windows, and Mac platforms. It can be used to run calculations on local desktops as well as on remote computers. © 2010 Wiley Periodicals, Inc.
UTOPIA-User-Friendly Tools for Operating Informatics Applications.
Pettifer, S R; Sinnott, J R; Attwood, T K
2004-01-01
Bioinformaticians routinely analyse vast amounts of information held both in large remote databases and in flat data files hosted on local machines. The contemporary toolkit available for this purpose consists of an ad hoc collection of data manipulation tools, scripting languages and visualization systems; these must often be combined in complex and bespoke ways, the result frequently being an unwieldy artefact capable of one specific task, which cannot easily be exploited or extended by other practitioners. Owing to the sizes of current databases and the scale of the analyses necessary, routine bioinformatics tasks are often automated, but many still require the unique experience and intuition of human researchers: this requires tools that support real-time interaction with complex datasets. Many existing tools have poor user interfaces and limited real-time performance when applied to realistically large datasets; much of the user's cognitive capacity is therefore focused on controlling the tool rather than on performing the research. The UTOPIA project is addressing some of these issues by building reusable software components that can be combined to make useful applications in the field of bioinformatics. Expertise in the fields of human computer interaction, high-performance rendering, and distributed systems is being guided by bioinformaticians and end-user biologists to create a toolkit that is both architecturally sound from a computing point of view, and directly addresses end-user and application-developer requirements.
An intelligent tool for activity data collection.
Sarkar, A M Jehad
2011-01-01
Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to setup an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets.
Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2014-01-01
Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to cope with these challenges consists of iterative intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware layer supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach through our own software development and compared the different design paradigms in various software solutions. PMID:25383181
WILBER and PyWEED: Event-based Seismic Data Request Tools
NASA Astrophysics Data System (ADS)
Falco, N.; Clark, A.; Trabant, C. M.
2017-12-01
WILBER and PyWEED are two user-friendly tools for requesting event-oriented seismic data. Both tools provide interactive maps and other controls for browsing and filtering event and station catalogs, and downloading data for selected event/station combinations, where the data window for each event/station pair may be defined relative to the arrival time of seismic waves from the event to that particular station. Both tools allow data to be previewed visually, and can download data in standard miniSEED, SAC, and other formats, complete with relevant metadata for performing instrument correction. WILBER is a web application requiring only a modern web browser. Once the user has selected an event, WILBER identifies all data available for that time period, and allows the user to select stations based on criteria such as the station's distance and orientation relative to the event. When the user has finalized their request, the data is collected and packaged on the IRIS server, and when it is ready the user is sent a link to download. PyWEED is a downloadable, cross-platform (Macintosh / Windows / Linux) application written in Python. PyWEED allows a user to select multiple events and stations, and will download data for each event/station combination selected. PyWEED is built around the ObsPy seismic toolkit, and allows direct interaction and control of the application through a Python interactive console.
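Because PyWEED is built on ObsPy, the same kind of event-oriented request can also be scripted directly; the snippet below is a minimal ObsPy sketch in which the network/station codes and the one-hour window are arbitrary examples, and the interactive selection and packaging that the two tools add are omitted.

```python
# Minimal event-oriented waveform request with ObsPy's FDSN client
# (station codes and window length are arbitrary examples).
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("IRIS")
events = client.get_events(starttime=UTCDateTime("2017-09-01"),
                           endtime=UTCDateTime("2017-10-01"),
                           minmagnitude=7)
origin = events[0].origins[0]                     # take the first matching event
t0 = origin.time
st = client.get_waveforms("IU", "ANMO", "00", "BHZ",
                          starttime=t0, endtime=t0 + 3600)   # one hour from origin time
st.write("event_ANMO.mseed", format="MSEED")
```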
User-friendly cognitive training for the elderly: a technical report.
Boquete, Luciano; Rodríguez-Ascariz, José Manuel; Amo-Usanos, Carlos; Martínez-Arribas, Alejandro; Amo-Usanos, Javier; Otón, Salvador
2011-01-01
This article presents a system that implements a cognitive training program in users' homes. The system comprises various applications designed to create a daily brain-fitness regime. The proposed mental training system uses television and a remote control specially designed for the elderly. This system integrates Java applications to promote brain-fitness training in three areas: arithmetic, memory, and idea association. The system comprises the following: a standard television set, a simplified wireless remote control, a black box (the system's core hardware and software), brain-fitness games (written in Java), and a Wi-Fi-enabled Internet-connected router. All data from the user training sessions are monitored through a control center. This control center analyzes the evolution of the user and the proper performance of the system during the test. The implemented system has been tested by six healthy volunteers. The results for this user group demonstrated the accessibility and usability of the system in a controlled real environment. The impressions of the users were very favorable, and they reported high adaptability to the system. The mean score for usability and accessibility assigned by the users was 3.56 out of 5 points. The operation stress test (over 200 h) was successful. The proposed system was used to implement a cognitive training program in users' homes, which was developed to be a low-cost tool with a high degree of user interactivity. The results of this preliminary study indicate that this user-friendly system could be adopted as a form of cognitive training for the elderly.
DNAAlignEditor: DNA alignment editor tool
Sanchez-Villeda, Hector; Schroeder, Steven; Flint-Garcia, Sherry; Guill, Katherine E; Yamasaki, Masanori; McMullen, Michael D
2008-01-01
Background With advances in DNA re-sequencing methods and Next-Generation parallel sequencing approaches, there has been a large increase in genomic efforts to define and analyze the sequence variability present among individuals within a species. For very polymorphic species such as maize, this has led to a need for intuitive, user-friendly software that aids the biologist, often with naïve programming capability, in tracking, editing, displaying, and exporting multiple individual sequence alignments. To fill this need we have developed a novel DNA alignment editor. Results We have generated a nucleotide sequence alignment editor (DNAAlignEditor) that provides an intuitive, user-friendly interface for manual editing of multiple sequence alignments with functions for input, editing, and output of sequence alignments. The color-coding of nucleotide identity and the display of associated quality scores aids in the manual alignment editing process. DNAAlignEditor works as a client/server tool having two main components: a relational database that collects the processed alignments and a user interface connected to the database through universal data access connectivity drivers. DNAAlignEditor can be used either as a stand-alone application or as a network application with multiple users concurrently connected. Conclusion We anticipate that this software will be of general interest to biologists and population geneticists in editing DNA sequence alignments and analyzing natural sequence variation regardless of species, and will be particularly useful for manual alignment editing of sequences in species with high levels of polymorphism. PMID:18366684
An Assessment of IMPAC - Integrated Methodology for Propulsion and Airframe Controls
NASA Technical Reports Server (NTRS)
Walker, G. P.; Wagner, E. A.; Bodden, D. S.
1996-01-01
This report documents the work done under a NASA sponsored contract to transition to industry technologies developed under the NASA Lewis Research Center IMPAC (Integrated Methodology for Propulsion and Airframe Control) program. The critical steps in IMPAC are exercised on an example integrated flight/propulsion control design for linear airframe/engine models of a conceptual STOVL (Short Take-Off and Vertical Landing) aircraft, and MATRIXX (TM) executive files to implement each step are developed. The results from the example study are analyzed and lessons learned are listed along with recommendations that will improve the application of each design step. The end product of this research is a set of software requirements for developing a user-friendly control design tool which will automate the steps in the IMPAC methodology. Prototypes for a graphical user interface (GUI) are sketched to specify how the tool will interact with the user, and it is recommended to build the tool around existing computer aided control design software packages.
BATS: a Bayesian user-friendly software for analyzing time series microarray experiments.
Angelini, Claudia; Cutillo, Luisa; De Canditiis, Daniela; Mutarelli, Margherita; Pensky, Marianna
2008-10-06
Gene expression levels in a given cell can be influenced by different factors, namely pharmacological or medical treatments. The response to a given stimulus is usually different for different genes and may depend on time. One of the goals of modern molecular biology is the high-throughput identification of genes associated with a particular treatment or a biological process of interest. From a methodological and computational point of view, analyzing high-dimensional time course microarray data requires a very specific set of tools which are usually not included in standard software packages. Recently, the authors of this paper developed a fully Bayesian approach which allows one to identify differentially expressed genes in a 'one-sample' time-course microarray experiment, to rank them and to estimate their expression profiles. The method is based on explicit expressions for calculations and is, hence, very computationally efficient. The software package BATS (Bayesian Analysis of Time Series) presented here implements the methodology described above. It allows a user to automatically identify and rank differentially expressed genes and to estimate their expression profiles when at least 5-6 time points are available. The package has a user-friendly interface. BATS successfully manages various technical difficulties which arise in time-course microarray experiments, such as a small number of observations, non-uniform sampling intervals and replicated or missing data. BATS is free, user-friendly software for the analysis of both simulated and real microarray time course experiments. The software, the user manual and a brief illustrative example are freely available online at the BATS website: http://www.na.iac.cnr.it/bats.
SLIMS--a user-friendly sample operations and inventory management system for genotyping labs.
Van Rossum, Thea; Tripp, Ben; Daley, Denise
2010-07-15
We present the Sample-based Laboratory Information Management System (SLIMS), a powerful and user-friendly open source web application that provides all members of a laboratory with an interface to view, edit and create sample information. SLIMS aims to simplify common laboratory tasks with tools such as a user-friendly shopping cart for subjects, samples and containers that easily generates reports, shareable lists and plate designs for genotyping. Further key features include customizable data views, database change-logging and dynamically filled pre-formatted reports. Along with being feature-rich, SLIMS' power comes from being able to handle longitudinal data from multiple time-points and biological sources. This type of data is increasingly common from studies searching for susceptibility genes for common complex diseases that collect thousands of samples generating millions of genotypes and overwhelming amounts of data. LIMSs provide an efficient way to deal with this data while increasing accessibility and reducing laboratory errors; however, professional LIMS are often too costly to be practical. SLIMS gives labs a feasible alternative that is easily accessible, user-centrically designed and feature-rich. To facilitate system customization, and utilization for other groups, manuals have been written for users and developers. Documentation, source code and manuals are available at http://genapha.icapture.ubc.ca/SLIMS/index.jsp. SLIMS was developed using Java 1.6.0, JSPs, Hibernate 3.3.1.GA, DB2 and mySQL, Apache Tomcat 6.0.18, NetBeans IDE 6.5, Jasper Reports 3.5.1 and JasperSoft's iReport 3.5.1.
SplicingTypesAnno: annotating and quantifying alternative splicing events for RNA-Seq data.
Sun, Xiaoyong; Zuo, Fenghua; Ru, Yuanbin; Guo, Jiqiang; Yan, Xiaoyan; Sablok, Gaurav
2015-04-01
Alternative splicing plays a key role in the regulation of the central dogma. Four major types of alternative splicing have been classified: intron retention, exon skipping, alternative 5′ splice sites (alternative donor sites), and alternative 3′ splice sites (alternative acceptor sites). A few algorithms have been developed to detect splice junctions from RNA-Seq reads. However, there are few tools targeting the major alternative splicing types at the exon/intron level. This type of analysis may reveal subtle, yet important, events of alternative splicing, and thus help gain a deeper understanding of the mechanism of alternative splicing. This paper describes a user-friendly R package for extracting, annotating and analyzing alternative splicing types from RNA-Seq sequence alignment files. SplicingTypesAnno can: (1) provide annotation for major alternative splicing at the exon/intron level; by comparison with the annotation from a GTF/GFF file, it identifies novel alternative splicing sites; (2) offer a convenient two-level analysis: genome-scale annotation for users with a high-performance computing environment, and gene-scale annotation for users with personal computers; (3) generate a user-friendly web report and additional BED files for IGV visualization. SplicingTypesAnno is a user-friendly R package for extracting, annotating and analyzing alternative splicing types at the exon/intron level for sequence alignment files from RNA-Seq. It is publicly available at https://sourceforge.net/projects/splicingtypes/files/ or http://genome.sdau.edu.cn/research/software/SplicingTypesAnno.html. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
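To illustrate what classification at the exon/intron level involves, the toy sketch below labels an observed splice junction against annotated introns; the coordinates are invented, intron retention is omitted because it is coverage-based, and this is not the package's actual algorithm, which works on BAM/GTF input.

```python
# Toy classification of a splice junction against annotated introns
# (coordinates are invented; SplicingTypesAnno works on real BAM/GTF input).
def classify_junction(obs, annotated_introns):
    """obs and annotated_introns hold (donor, acceptor) coordinates on the + strand."""
    if obs in annotated_introns:
        return "annotated junction"
    spanned = [i for i in annotated_introns if obs[0] <= i[0] and i[1] <= obs[1]]
    if len(spanned) >= 2:
        return "possible exon skipping"           # junction spans more than one annotated intron
    for don, acc in annotated_introns:
        if obs[0] == don and obs[1] != acc:
            return "alternative 3' (acceptor) site"
        if obs[1] == acc and obs[0] != don:
            return "alternative 5' (donor) site"
    return "novel junction"

introns = [(100, 200), (300, 400)]
print(classify_junction((100, 400), introns))     # spans both introns -> possible exon skipping
print(classify_junction((100, 180), introns))     # shares only the donor -> alternative 3' site
```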
Tripathi, Kumar Parijat; Evangelista, Daniela; Zuccaro, Antonio; Guarracino, Mario Rosario
2015-01-01
RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing with extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting these extremely large data sets into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for the BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery) tools. It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction-related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and Gene Ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic non-coding RNA (ncRNA) by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is useful software for both NGS and array data. It helps users to characterize de novo assembled reads obtained from NGS experiments on non-referenced organisms, while it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is freely available at: http://www-labgtp.na.icar.cnr.it/Transcriptator.
Modeling soil erosion and transport on forest landscape
Ge Sun; Steven G McNulty
1998-01-01
Century-long studies on the impacts of forest management in North America suggest that sediment can cause major reductions in stream water quality. Soil erosion patterns in forest watersheds are patchy and heterogeneous, and are therefore difficult to model and predict. The objective of this study is to develop a user-friendly management tool for land...
ERIC Educational Resources Information Center
Whitefield, Elizabeth; Schmidt, David; Witt-Swanson, Lindsay; Smith, David; Pronto, Jennifer; Knox, Pam; Powers, Crystal
2016-01-01
There is a need to create competency among Extension professionals on the topic of climate change adaptation and mitigation in animal agriculture. The Animal Agriculture in a Changing Climate online course provides an easily accessible, user-friendly, free, and interactive experience for learning science-based information on a national and…
Making Differentiation a Habit: How to Ensure Success in Academically Diverse Classrooms
ERIC Educational Resources Information Center
Heacox, Diane
2009-01-01
If you're a teacher with an academically diverse classroom (and what classrooms aren't today?), you need this resource. Framed around the critical elements for success in today's classrooms, "Making Differentiation a Habit" gives educators specific, user-friendly tools to optimize teaching, learning, and assessment. Following on the heels of Diane…
Solar Radiation: Harnessing the Power
ERIC Educational Resources Information Center
Rowland, Teri; Chambers, Lin; Holzer, Missy; Moore, Susan
2009-01-01
My NASA Data (Chambers et al. 2008) is a teaching tool available on NASA's website that offers microsets of real data in an easily accessible, user-friendly format. In this article, the authors describe a lesson plan based on an activity from My NASA Data, in which students explore parts of the United States where they would want to live if they…
ERIC Educational Resources Information Center
Heath, Melissa Allen; Sheen, Dawn
2005-01-01
When a student is in dire need of emotional support, caring adults in the school can make a difference. This essential resource helps practitioners prepare all school personnel to respond sensitively and effectively to children and adolescents in crisis. Packed with user-friendly features--including over 50 reproducible tools--the book provides…
Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning
ERIC Educational Resources Information Center
Gregorcic, Bor; Bodin, Madelen
2017-01-01
Algodoo (http://www.algodoo.com) is a digital sandbox for physics 2D simulations. It allows students and teachers to easily create simulated "scenes" and explore physics through a user-friendly and visually attractive interface. In this paper, we present different ways in which students and teachers can use Algodoo to visualize and solve…
Three Applications of Automated Test Assembly within a User-Friendly Modeling Environment
ERIC Educational Resources Information Center
Cor, Ken; Alves, Cecilia; Gierl, Mark
2009-01-01
While linear programming is a common tool in business and industry, there have not been many applications in educational assessment and only a handful of individuals have been actively involved in conducting psychometric research in this area. Perhaps this is due, at least in part, to the complexity of existing software packages. This article…
ERIC Educational Resources Information Center
Helen Keller National Center - Technical Assistance Center, Sands Point, NY.
This community living assessment tool for parents of children with deaf-blindness was developed to help parents identify the strengths and weaknesses of their child's residential program using a user-friendly instrument. Three areas of assessment are covered: physical attributes of the home, available resources for promoting capabilities, and…
Li, Guipeng; Li, Ming; Zhang, Yiwei; Wang, Dong; Li, Rong; Guimerà, Roger; Gao, Juntao Tony; Zhang, Michael Q
2014-01-01
Rapidly increasing amounts of (physical and genetic) protein-protein interaction (PPI) data are produced by various high-throughput techniques, and interpretation of these data remains a major challenge. In order to gain insight into the organization and structure of the resultant large complex networks formed by interacting molecules, we developed ModuleRole, a user-friendly web server tool which uses simulated annealing, a method based on node connectivity, to find modules in a PPI network, defines the role of every node, and produces files for visualization in Cytoscape and Pajek. For given proteins, it analyzes the PPI network from the BioGRID database, finds and visualizes the modules these proteins form, and then defines the role every node plays in this network, based on two topological parameters, the Participation Coefficient and the Z-score. This is the first program which provides an interactive and very friendly interface for biologists to find and visualize modules and the roles of proteins in a PPI network. It can be tested online at http://www.bioinfo.org/modulerole/index.php, which is free and open to all users with no login requirement; demo data are provided in the "User Guide" under the Help menu. A non-server application of the program should be considered for high-throughput data with more than 200 nodes or for users' own interaction datasets. Users are able to bookmark the web link to the result page and access it at a later time. As an interactive and highly customizable application, ModuleRole requires no expert knowledge of graph theory on the user side and can be used on both Linux and Windows systems, making it a very useful tool for biologists to analyze and visualize PPI networks from databases such as BioGRID. ModuleRole is implemented in Java and C, and is freely available at http://www.bioinfo.org/modulerole/index.php. Supplementary information (user guide, demo data) is also available at this website. The API used for ModuleRole can be obtained upon request.
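The two topological parameters mentioned have simple closed forms in the Guimerà-Amaral node-role scheme: the within-module degree z-score and the participation coefficient P_i = 1 - Σ_s (κ_is / k_i)². The sketch below computes them for a toy network with module labels supplied by hand; ModuleRole itself derives the modules by simulated annealing from BioGRID interactions.

```python
# Within-module degree z-score and participation coefficient for a toy network.
# Module assignments are given here; ModuleRole finds them by simulated annealing.
from collections import defaultdict
from statistics import mean, pstdev

edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("D", "F"), ("E", "F")]
module = {"A": 1, "B": 1, "C": 1, "D": 2, "E": 2, "F": 2}

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def within_degree(n):                       # links from n into its own module
    return sum(1 for m in adj[n] if module[m] == module[n])

def z_score(n):
    same = [within_degree(m) for m in module if module[m] == module[n]]
    sd = pstdev(same)
    return 0.0 if sd == 0 else (within_degree(n) - mean(same)) / sd

def participation(n):
    k = len(adj[n])
    per_mod = defaultdict(int)
    for m in adj[n]:
        per_mod[module[m]] += 1
    return 1.0 - sum((c / k) ** 2 for c in per_mod.values())

for n in sorted(module):
    print(n, f"z={z_score(n):+.2f}", f"P={participation(n):.2f}")
```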
Schön, Ulla-Karin; Grim, Katarina; Wallin, Lars; Rosenberg, David; Svedberg, Petra
2018-01-01
Purpose: Shared decision making, SDM, in psychiatric services, supports users to experience a greater sense of involvement in treatment, self-efficacy, autonomy and reduced coercion. Decision tools adapted to the needs of users have the potential to support SDM and restructure how users and staff work together to arrive at shared decisions. The aim of this study was to describe and analyse the implementation process of an SDM intervention for users of psychiatric services in Sweden. Method: The implementation was studied through a process evaluation utilizing both quantitative and qualitative methods. In designing the process evaluation for the intervention, three evaluation components were emphasized: contextual factors, implementation issues and mechanisms of impact. Results: The study addresses critical implementation issues related to decision-making authority, the perceived decision-making ability of users and the readiness of the service to increase influence and participation. It also emphasizes the importance of facilitation, as well as suggesting contextual adaptations that may be relevant for the local organizations. Conclusion: The results indicate that staff perceived the decision support tool as user-friendly and useful in supporting participation in decision-making, and suggest that such concrete supports to participation can be a factor in implementation if adequate attention is paid to organizational contexts and structures. PMID:29405889
Webpress: An Internet Outreach from NASA Dryden
NASA Technical Reports Server (NTRS)
Biezad, Daniel J.
1996-01-01
The Technology and Commercialization Office at NASA Dryden has developed many educational outreach programs for K-12 educators. This project concentrates on the internet portion of that effort, specifically focusing on the development of an internet tool for educators called Webpress. This tool will not only provide user-friendly access to aeronautical topics and interesting individuals on the world wide web (web), but will also enable teachers to rapidly submit and display their own materials and links for use in the classroom.
NASA Astrophysics Data System (ADS)
Chaudhary, A.; DeMarle, D.; Burnett, B.; Harris, C.; Silva, W.; Osmari, D.; Geveci, B.; Silva, C.; Doutriaux, C.; Williams, D. N.
2013-12-01
The impact of climate change will resonate through a broad range of fields including public health, infrastructure, water resources, and many others. Long-term coordinated planning, funding, and action are required for climate change adaptation and mitigation. Unfortunately, widespread use of climate data (simulated and observed) in non-climate science communities is impeded by factors such as large data size, lack of adequate metadata, poor documentation, and lack of sufficient computational and visualization resources. We present ClimatePipes to address many of these challenges by creating an open source platform that provides state-of-the-art, user-friendly data access, analysis, and visualization for climate and other relevant geospatial datasets, making the climate data available to non-researchers, decision-makers, and other stakeholders. The overarching goals of ClimatePipes are: - Enable users to explore real-world questions related to climate change. - Provide tools for data access, analysis, and visualization. - Facilitate collaboration by enabling users to share datasets, workflows, and visualizations. ClimatePipes uses a web-based application platform because of its widespread support on mainstream operating systems, its ease of use, and its inherent support for collaboration. The front-end of ClimatePipes uses HTML5 (WebGL, Canvas2D, CSS3) to deliver state-of-the-art visualization and to provide a best-in-class user experience. The back-end of ClimatePipes is built around Python using the Visualization Toolkit (VTK, http://vtk.org), Climate Data Analysis Tools (CDAT, http://uv-cdat.llnl.gov), and other climate and geospatial data processing tools such as GDAL and PROJ4. ClimatePipes provides a web interface to query and access data from remote sources (such as ESGF), for example overlaying a climate data layer from ESGF on a map data layer from OpenStreetMap. The ClimatePipes workflow editor provides flexibility and fine-grained control, and uses the VisTrails (http://www.vistrails.org) workflow engine in the backend.
A Full-Featured User Friendly CO2-EOR and Sequestration Planning Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Bill
A Full-Featured, User Friendly CO2-EOR and Sequestration Planning Software This project addressed the development of an integrated software solution that includes a graphical user interface, numerical simulation, visualization tools and optimization processes for reservoir simulation modeling of CO2-EOR. The objective was to assist the industry in the development of domestic energy resources by expanding the application of CO2-EOR technologies, and ultimately to maximize the CO2 sequestration capacity of the U.S. The software resulted in a field-ready application for the industry to address the current CO2-EOR technologies. The software has been made available to the public without restrictions and with user friendly operating documentation and tutorials. The software (executable only) can be downloaded from NITEC’s website at www.nitecllc.com. This integrated solution enables the design, optimization and operation of CO2-EOR processes for small and mid-sized operators, who currently cannot afford the expensive, time-intensive solutions that the major oil companies enjoy. Based on one estimate, small oil fields comprise 30% of the total economic resource potential for the application of CO2-EOR processes in the U.S. This corresponds to 21.7 billion barrels of incremental, technically recoverable oil using the current “best practices”, and 31.9 billion barrels using “next-generation” CO2-EOR techniques. The project included a Case Study of a prospective CO2-EOR candidate field in Wyoming by a small independent, Linc Energy Petroleum Wyoming, Inc. NITEC LLC has an established track record of developing innovative and user friendly software. The Principal Investigator is an experienced manager and engineer with expertise in software development, numerical techniques, and GUI applications. Unique, presently-proprietary NITEC technologies have been integrated into this application to further its ease of use and technical functionality.
Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho
2017-11-01
High-throughput sequencing methods have become a routine analysis tool in the environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics analyses may be performed using several public tools, many analytical pipelines offer too few options for optimal analysis of more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We described the design and options of PipeCraft and evaluated its performance by analysing data sets from three different sequencing platforms. We demonstrated that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
GenIce: Hydrogen-Disordered Ice Generator.
Matsumoto, Masakazu; Yagasaki, Takuma; Tanaka, Hideki
2018-01-05
GenIce is an efficient and user-friendly tool to generate hydrogen-disordered ice structures. It makes ice and clathrate hydrate structures in various file formats. More than 100 kinds of structures are preset. Users can install their own crystal structures, guest molecules, and file formats as plugins. The algorithm certifies that the generated structures are completely randomized hydrogen-disordered networks obeying the ice rule with zero net polarization. © 2017 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
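The two constraints mentioned, the ice rule and zero net polarization, can be checked directly on a directed hydrogen-bond graph: every molecule must donate exactly two and accept exactly two bonds, and the sum of unit bond vectors should vanish. The helper functions below sketch those checks on generic data structures (no periodic-boundary handling); they are not part of GenIce's API.

```python
# Checks for the two constraints mentioned above, on a directed hydrogen-bond list
# (generic data structures, not GenIce's internal representation).
from collections import Counter
import numpy as np

def ice_rule_satisfied(molecules, hbonds):
    """Every molecule must donate exactly two and accept exactly two hydrogen bonds."""
    donated = Counter(d for d, _ in hbonds)
    accepted = Counter(a for _, a in hbonds)
    return all(donated[m] == 2 and accepted[m] == 2 for m in molecules)

def net_polarization(positions, hbonds):
    """Sum of unit vectors along each donor -> acceptor bond; ~0 for an unpolarized network."""
    total = np.zeros(3)
    for d, a in hbonds:
        v = np.asarray(positions[a]) - np.asarray(positions[d])
        total += v / np.linalg.norm(v)
    return total
```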
Computational tools for multi-linked flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.
1990-01-01
A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
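For readers unfamiliar with the Kalman Estimator form referred to, the core predict/update recursion is short; the NumPy sketch below implements the textbook discrete-time filter step with generic placeholder matrices, not the module's polynomial state-space interface.

```python
# Textbook discrete-time Kalman filter step (generic sketch, not the described module).
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle: state x, covariance P, measurement z."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```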
User manual for SPLASH (Single Panel Lamp and Shroud Helper).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Marvin Elwood
2006-02-01
The radiant heat test facility develops test sets providing well-characterized thermal environments, often representing fires. Many of the components and procedures have become standardized to such an extent that the development of a specialized design tool to determine optimal configurations for radiant heat experiments was appropriate. SPLASH (Single Panel Lamp and Shroud Helper) is that tool. SPLASH is implemented as a user-friendly, Windows-based program that allows a designer to describe a test setup in terms of parameters such as number of lamps, power, position, and separation distance. This document is a user manual for that software. Any incidental descriptions of theory are only for the purpose of defining the model inputs. The theory for the underlying model is described in SAND2005-2947 (Ref. [1]). SPLASH provides a graphical user interface to define lamp panel and shroud designs parametrically, solves the resulting radiation enclosure problem for up to 2500 surfaces, and provides post-processing to facilitate understanding and documentation of analyzed designs.
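The radiation enclosure problem has a compact gray-diffuse formulation: with view factors F, emissivities ε and surface temperatures T, the radiosities J solve (I - (1-ε)F) J = ε σ T⁴ and the net fluxes follow. The NumPy sketch below solves a toy two-surface enclosure with assumed values; it is only the textbook model, not SPLASH's implementation.

```python
# Gray-diffuse radiation enclosure: solve (I - (1-eps) F) J = eps * sigma * T^4,
# then net flux q = J - F @ J. Toy two-surface data; not SPLASH's implementation.
import numpy as np

sigma = 5.670e-8                                  # Stefan-Boltzmann constant, W/m^2/K^4
F = np.array([[0.0, 1.0],                         # view factors (rows sum to 1 for an enclosure)
              [1.0, 0.0]])
eps = np.array([0.9, 0.4])                        # emissivities (lamp panel, shroud - assumed)
T = np.array([2000.0, 600.0])                     # surface temperatures, K (assumed)

A = np.eye(2) - (1.0 - eps)[:, None] * F
J = np.linalg.solve(A, eps * sigma * T**4)        # radiosities, W/m^2
q = J - F @ J                                     # net radiative flux leaving each surface, W/m^2
print("radiosities:", J, " net fluxes:", q)
```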
D. Linhares, Natália; Pena, Sérgio D. J.
2017-01-01
Whole exome and whole genome sequencing have both become widely adopted methods for investigating and diagnosing human Mendelian disorders. As pangenomic agnostic tests, they are capable of more accurate and agile diagnosis compared to traditional sequencing methods. This article describes new software called Mendel,MD, which combines multiple types of filter options and makes use of regularly updated databases to facilitate exome and genome annotation, the filtering process and the selection of candidate genes and variants for experimental validation and possible diagnosis. This tool offers a user-friendly interface, and leads clinicians through simple steps by limiting the number of candidates to achieve a final diagnosis of a medical genetics case. A useful innovation is the “1-click” method, which enables listing all the relevant variants in genes present at OMIM for perusal by clinicians. Mendel,MD was experimentally validated using clinical cases from the literature and was tested by students at the Universidade Federal de Minas Gerais, at GENE–Núcleo de Genética Médica in Brazil and at the Children’s University Hospital in Dublin, Ireland. We show in this article how it can simplify and increase the speed of identifying the culprit mutation in each of the clinical cases that were received for further investigation. Mendel,MD proved to be a reliable web-based tool, being open-source and time efficient for identifying the culprit mutation in different clinical cases of patients with Mendelian Disorders. It is also freely accessible for academic users on the following URL: https://mendelmd.org. PMID:28594829
EnviroNET: An on-line environment data base for LDEF data
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1992-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day, every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government facilities, industry, universities, and ESA. The models accept parameter input from the user and calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, the magnetic field, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if' scenarios. A proposed plan for developing a repository of LDEF information for a user group concludes the presentation.
Nematode.net update 2011: addition of data sets and tools featuring next-generation sequencing data
Martin, John; Abubucker, Sahar; Heizer, Esley; Taylor, Christina M.; Mitreva, Makedonka
2012-01-01
Nematode.net (http://nematode.net) has been a publicly available resource for studying nematodes for over a decade. In the past 3 years, we reorganized Nematode.net to provide more user-friendly navigation through the site, a necessity due to the explosion of data from next-generation sequencing platforms. Organism-centric portals containing dynamically generated data are available for over 56 different nematode species. Next-generation data has been added to the various data-mining portals hosted, including NemaBLAST and NemaBrowse. The NemaPath metabolic pathway viewer builds associations using KOs, rather than ECs to provide more accurate and fine-grained descriptions of proteins. Two new features for data analysis and comparative genomics have been added to the site. NemaSNP enables the user to perform population genetics studies in various nematode populations using next-generation sequencing data. HelmCoP (Helminth Control and Prevention) as an independent component of Nematode.net provides an integrated resource for storage, annotation and comparative genomics of helminth genomes to aid in learning more about nematode genomes, as well as drug, pesticide, vaccine and drug target discovery. With this update, Nematode.net will continue to realize its original goal to disseminate diverse bioinformatic data sets and provide analysis tools to the broad scientific community in a useful and user-friendly manner. PMID:22139919
Teach Astronomy: An Educational Resource for Formal and Informal Learners
NASA Astrophysics Data System (ADS)
Impey, Chris David
2018-01-01
Teach Astronomy is an educational resource, available in the form of a user-friendly, platform-agnostic website. Ideal for college-level, introductory astronomy courses, Teach Astronomy can be a valuable reference for astronomers at all levels, especially informal learners. Over the past year, multiple changes have been made to the infrastructure behind Teach Astronomy to provide high availability to our tens of thousands of monthly unique users, as well as to foster new features. Teach Astronomy contains interactive tools which supplement the free textbook, such as a Quiz Tool with real-time feedback. The site also provides a searchable collection of Chris Impey’s responses to questions frequently asked by our users. The developers and educators behind Teach Astronomy are working to create an environment which encourages astronomy students of all levels to continue to increase their knowledge and help others learn.
Dadaev, Tokhir; Leongamornlert, Daniel A; Saunders, Edward J; Eeles, Rosalind; Kote-Jarai, Zsofia
2016-03-15
In this article, we present LocusExplorer, a data visualization and exploration tool for genetic association data. LocusExplorer is written in R using the Shiny library, providing access to powerful R-based functions through a simple user interface. LocusExplorer allows users to simultaneously display genetic, statistical and biological data for humans in a single image and allows dynamic zooming and customization of the plot features. Publication quality plots may then be produced in a variety of file formats. LocusExplorer is open source and runs through R and a web browser. It is available at www.oncogenetics.icr.ac.uk/LocusExplorer/ or can be installed locally, with the source code accessed from https://github.com/oncogenetics/LocusExplorer. Contact: tokhir.dadaev@icr.ac.uk. © The Author 2015. Published by Oxford University Press.
A computer tool to support in design of industrial Ethernet.
Lugli, Alexandre Baratella; Santos, Max Mauro Dias; Franco, Lucia Regina Horta Rodrigues
2009-04-01
This paper presents a computer tool to support the design and development of an industrial Ethernet network. The tool verifies the physical layer (cable resistance and capacitance, scan time, network power supply including the "Power over Ethernet" (PoE) concept, and wireless links) and the occupation rate (the amount of information transmitted on the network versus the controller network scan time). These functions are accomplished without a single physical element installed in the network, using only simulation. The tool's software presents a detailed view of the network to the user, points out possible problems in the network, and offers an extremely friendly environment.
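As a hedged back-of-the-envelope version of the occupation-rate check described above (traffic offered to the network versus the controller scan time), the snippet below estimates the fraction of a scan cycle consumed by cyclic frames on a switched link; the frame counts, sizes, and link speed are illustrative assumptions, not values from the tool.

    def occupation_rate(frames_per_cycle, payload_bytes, scan_time_s, link_bps=100e6,
                        overhead_bytes=38):
        """Fraction of the controller scan cycle consumed by cyclic traffic.

        overhead_bytes: Ethernet preamble + header + FCS + interframe gap (~38 B typical).
        """
        bits_per_cycle = frames_per_cycle * (payload_bytes + overhead_bytes) * 8
        transmit_time = bits_per_cycle / link_bps
        return transmit_time / scan_time_s

    # Example: 60 devices sending 100-byte cyclic frames every 10 ms scan on 100 Mbit/s.
    rate = occupation_rate(frames_per_cycle=60, payload_bytes=100, scan_time_s=0.010)
    print(f"Occupation rate: {rate:.1%}")   # well below 100% means headroom on the link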
TreeQ-VISTA: An Interactive Tree Visualization Tool withFunctional Annotation Query Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Shengyin; Anderson, Iain; Kunin, Victor
2007-05-07
Summary: We describe a general multiplatform exploratory tool called TreeQ-Vista, designed for presenting functional annotations in a phylogenetic context. Traits, such as phenotypic and genomic properties, are interactively queried from a relational database with a user-friendly interface which provides a set of tools for users with or without SQL knowledge. The query results are projected onto a phylogenetic tree and can be displayed in multiple color groups. A rich set of browsing, grouping and query tools are provided to facilitate trait exploration, comparison and analysis. Availability: The program, detailed tutorial and examples are available online at http://genome-test.lbl.gov/vista/TreeQVista.
La Torre, Giuseppe; Miccoli, Silvia; Ricciardi, Walter
2014-01-01
The Italian Alliance of vaccination strategies project was born with the aim of informing healthcare workers and the general population about vaccination through Facebook. The evaluation of the account was carried out using three indicators: friend membership, number of "likes," and number of "shares" of content, by type of news and by day of the week. The survey was performed on 743 users. Institutional events were the most popular type of news; the day of the week on which users were most likely to be attracted by links was Friday. Press releases were the communication form most shared by users. Social media marketing carries the advantages of low cost, rapid transmission and user interaction.
ToxRefDB - Release user-friendly web-based tool for mining ToxRefDB
The updated URL link is for a table of NCCT ToxCast public datasets. The next to last row of the table has the link for the US EPA ToxCast ToxRefDB Data Release October 2014. ToxRefDB provides detailed chemical toxicity data in a publicly accessible searchable format. ToxRefD...
Facilities Performance Indicators Report 2010-11: Tracking Your Facilities Vital Signs
ERIC Educational Resources Information Center
APPA: Association of Higher Education Facilities Officers, 2012
2012-01-01
APPA's Information and Research Committee's goal for this year was to enhance the survey and report tools by making them both more navigable, user friendly, and accurate. Significant progress has been made with all of these initiatives. APPA also automated many of the internal processes for the survey and report, which resulted in a…
JEDI: Jobs and Economic Development Impacts Model Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. Hendrickson; S.Tegen
2009-12-01
The Jobs and Economic Development Impact (JEDI) models are user-friendly tools that estimate the economic impacts of constructing and operating power generation and biofuel plants at the local (usually state) level. First developed by NREL's Wind Powering America program to model wind energy jobs and impacts, JEDI has been expanded to biofuels, concentrating solar power, coal, and natural gas power plants.
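JEDI itself is a detailed input-output model; the sketch below is only a hedged, simplified illustration of the multiplier logic such tools apply, multiplying local spending in each category by assumed job and output multipliers. The categories and multiplier values are placeholders, not NREL's.

    def simple_jedi_style_impacts(spending_by_category, jobs_per_million, output_multiplier):
        """Very simplified JEDI-style impact estimate.

        spending_by_category : dict of category -> local spending in dollars
        jobs_per_million     : dict of category -> jobs supported per $1M of local spending
        output_multiplier    : dict of category -> total output per dollar of direct spending
        """
        jobs = sum(s / 1e6 * jobs_per_million[c] for c, s in spending_by_category.items())
        output = sum(s * output_multiplier[c] for c, s in spending_by_category.items())
        return jobs, output

    # Illustrative (made-up) numbers for a small wind project's construction phase.
    spend = {"labor": 4.0e6, "materials": 6.0e6, "services": 1.5e6}
    jpm = {"labor": 9.0, "materials": 3.5, "services": 6.0}
    mult = {"labor": 1.8, "materials": 1.4, "services": 1.6}
    jobs, output = simple_jedi_style_impacts(spend, jpm, mult)
    print(f"~{jobs:.0f} job-years, ~${output/1e6:.1f}M total economic output")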
Dental Informatics tool “SOFPRO” for the study of oral submucous fibrosis
Erlewad, Dinesh Masajirao; Mundhe, Kalpana Anandrao; Hazarey, Vinay K
2016-01-01
Background: Dental informatics is an evolving branch widely used in dental education and practice. Numerous applications that support clinical care, education and research have been developed. However, very few such applications are developed and utilized in the epidemiological studies of oral submucous fibrosis (OSF), which affects a significant population of Asian countries. Aims and Objectives: To design and develop a user-friendly software program for the descriptive epidemiological study of OSF. Materials and Methods: With the help of a software engineer, a computer program, SOFPRO, was designed and developed using Ms-Visual Basic 6.0 (VB), Ms-Access 2000, Crystal Report 7.0 and Ms-Paint on the Windows XP operating system. For analysis purposes, the available OSF data from the departmental precancer registry were fed into SOFPRO. Results: Known, not-known and null data were successfully accepted in data entry and represented in the data analysis of OSF. Smooth working of SOFPRO and its correct data flow were tested against real-time data of OSF. Conclusion: SOFPRO was found to be a user-friendly automated tool for easy data collection, retrieval, management and analysis of OSF patients. PMID:27601808
Lim, Chun Shen; Brown, Chris M
2017-01-01
Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community.
Lim, Chun Shen; Brown, Chris M.
2018-01-01
Structured RNA elements may control virus replication, transcription and translation, and their distinct features are being exploited by novel antiviral strategies. Viral RNA elements continue to be discovered using combinations of experimental and computational analyses. However, the wealth of sequence data, notably from deep viral RNA sequencing, viromes, and metagenomes, necessitates computational approaches being used as an essential discovery tool. In this review, we describe practical approaches being used to discover functional RNA elements in viral genomes. In addition to success stories in new and emerging viruses, these approaches have revealed some surprising new features of well-studied viruses e.g., human immunodeficiency virus, hepatitis C virus, influenza, and dengue viruses. Some notable discoveries were facilitated by new comparative analyses of diverse viral genome alignments. Importantly, comparative approaches for finding RNA elements embedded in coding and non-coding regions differ. With the exponential growth of computer power we have progressed from stem-loop prediction on single sequences to cutting edge 3D prediction, and from command line to user friendly web interfaces. Despite these advances, many powerful, user friendly prediction tools and resources are underutilized by the virology community. PMID:29354101
StructRNAfinder: an automated pipeline and web server for RNA families prediction.
Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius
2018-02-17
The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must use multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family, according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder to be used in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me. The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data obtained by each tool and database employed along the workflow; the many files generated are displayed in user-friendly reports, useful for downstream analyses and data exploration.
Gigwa-Genotype investigator for genome-wide analyses.
Sempéré, Guilhem; Philippe, Florian; Dereeper, Alexis; Ruiz, Manuel; Sarah, Gautier; Larmande, Pierre
2016-06-06
Exploring the structure of genomes and analyzing their evolution is essential to understanding the ecological adaptation of organisms. However, with the large amounts of data being produced by next-generation sequencing, computational challenges arise in terms of storage, search, sharing, analysis and visualization. This is particularly true with regards to studies of genomic variation, which are currently lacking scalable and user-friendly data exploration solutions. Here we present Gigwa, a web-based tool that provides an easy and intuitive way to explore large amounts of genotyping data by filtering it not only on the basis of variant features, including functional annotations, but also on genotype patterns. The data storage relies on MongoDB, which offers good scalability properties. Gigwa can handle multiple databases and may be deployed in either single- or multi-user mode. In addition, it provides a wide range of popular export formats. The Gigwa application is suitable for managing large amounts of genomic variation data. Its user-friendly web interface makes such processing widely accessible. It can either be simply deployed on a workstation or be used to provide a shared data portal for a given community of researchers.
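Gigwa's internal schema is not described in the abstract, so the snippet below is only a hedged sketch of how a MongoDB-backed variant store can be filtered on both variant features and genotype patterns with pymongo; it assumes a local MongoDB instance, and the database, collection, and field names (variants, annotation.effect, genotypes) are hypothetical, not Gigwa's actual data model.

    from pymongo import MongoClient

    # Hypothetical database/collection/field names -- not Gigwa's actual schema.
    client = MongoClient("mongodb://localhost:27017/")
    variants = client["genotyping_demo"]["variants"]

    # Filter on variant features: biallelic SNPs on chromosome 1 annotated as missense.
    feature_query = {
        "chrom": "1",
        "type": "SNP",
        "annotation.effect": "missense_variant",
    }

    # Filter on a genotype pattern: at least one sample of interest carries the ALT allele.
    samples_of_interest = ["sample_12", "sample_34"]
    genotype_query = {
        "genotypes": {
            "$elemMatch": {"sample": {"$in": samples_of_interest},
                           "gt": {"$in": ["0/1", "1/1"]}}
        }
    }

    query = {"$and": [feature_query, genotype_query]}
    for doc in variants.find(query).limit(10):
        print(doc["chrom"], doc["pos"], doc["ref"], doc["alt"])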
A user-friendly, open-source tool to project impact and cost of diagnostic tests for tuberculosis
Dowdy, David W; Andrews, Jason R; Dodd, Peter J; Gilman, Robert H
2014-01-01
Most models of infectious diseases, including tuberculosis (TB), do not provide results customized to local conditions. We created a dynamic transmission model to project TB incidence, TB mortality, multidrug-resistant (MDR) TB prevalence, and incremental costs over 5 years after scale-up of nine alternative diagnostic strategies. A corresponding web-based interface allows users to specify local costs and epidemiology. In settings with little capacity for up-front investment, same-day microscopy had the greatest impact on TB incidence and became cost-saving within 5 years if delivered at $10/test. With greater initial investment, population-level scale-up of Xpert MTB/RIF or microcolony-based culture often averted 10 times more TB cases than narrowly-targeted strategies, at minimal incremental long-term cost. Xpert for smear-positive TB had reasonable impact on MDR-TB incidence, but at substantial price and little impact on overall TB incidence and mortality. This user-friendly modeling framework improves decision-makers' ability to evaluate the local impact of TB diagnostic strategies. DOI: http://dx.doi.org/10.7554/eLife.02565.001 PMID:24898755
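The authors' transmission model and web interface are far richer than this, but as a minimal, hedged caricature of how a compartmental model projects the effect of scaling up a diagnostic (modeled here simply as a higher case-detection rate), the sketch below integrates a three-compartment TB system with scipy; all parameter values are illustrative, not the paper's.

    import numpy as np
    from scipy.integrate import solve_ivp

    def tb_model(t, y, beta, progression, detection, recovery, mu):
        """Minimal susceptible-latent-active TB caricature (rates per year)."""
        S, L, I = y
        N = S + L + I
        infection = beta * S * I / N
        dS = mu * N - infection - mu * S
        dL = infection - (progression + mu) * L
        dI = progression * L - (detection + recovery + mu) * I
        return [dS, dL, dI]

    def project_prevalence(detection_rate, years=5):
        y0 = [0.85e6, 0.14e6, 0.01e6]          # illustrative population split
        sol = solve_ivp(tb_model, (0, years), y0, t_eval=np.linspace(0, years, 61),
                        args=(8.0, 0.05, detection_rate, 0.2, 0.02))
        return sol.t, sol.y[2]                  # active-TB compartment over time

    t, baseline = project_prevalence(detection_rate=0.7)
    _, scaled_up = project_prevalence(detection_rate=1.4)  # e.g. faster diagnosis scale-up
    print(f"Active TB after 5 years: baseline {baseline[-1]:.0f}, scale-up {scaled_up[-1]:.0f}")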
NASA Astrophysics Data System (ADS)
Choudhary, Kamal; Congo, Faical Yannick P.; Liang, Tao; Becker, Chandler; Hennig, Richard G.; Tavazza, Francesca
2017-01-01
Classical empirical potentials/force-fields (FF) provide atomistic insights into material phenomena through molecular dynamics and Monte Carlo simulations. Despite their wide applicability, a systematic evaluation of materials properties using such potentials and, especially, an easy-to-use user-interface for their comparison is still lacking. To address this deficiency, we computed energetics and elastic properties of a variety of materials such as metals and ceramics using a wide range of empirical potentials and compared them to density functional theory (DFT) as well as to experimental data, where available. The database currently consists of 3248 entries including energetics and elastic property calculations, and it is still increasing. We also include computational tools for convex-hull plots for DFT and FF calculations. The data covers 1471 materials and 116 force-fields. In addition, both the complete database and the software coding used in the process have been released for public use online (presently at http://www.ctcms.nist.gov/~knc6/periodic.html) in a user-friendly way designed to enable further material design and discovery.
Ragonnet, Romain; Trauer, James M; Denholm, Justin T; Marais, Ben J; McBryde, Emma S
2017-05-30
Multidrug-resistant and rifampicin-resistant tuberculosis (MDR/RR-TB) represent an important challenge for global tuberculosis (TB) control. The high rates of MDR/RR-TB observed among re-treatment cases can arise from diverse pathways: de novo amplification during initial treatment, inappropriate treatment of undiagnosed MDR/RR-TB, relapse despite appropriate treatment, or reinfection with MDR/RR-TB. Mathematical modelling allows quantification of the contribution made by these pathways in different settings. This information provides valuable insights for TB policy-makers, allowing better contextualised solutions. However, mathematical modelling outputs need to consider local data and be easily accessible to decision makers in order to improve their usefulness. We present a user-friendly web-based modelling interface, which can be used by people without technical knowledge. Users can input their own parameter values and produce estimates for their specific setting. This innovative tool provides easy access to mathematical modelling outputs that are highly relevant to national TB control programs. In future, the same approach could be applied to a variety of modelling applications, enhancing local decision making.
Choudhary, Kamal; Congo, Faical Yannick P.; Liang, Tao; Becker, Chandler; Hennig, Richard G.; Tavazza, Francesca
2017-01-01
Classical empirical potentials/force-fields (FF) provide atomistic insights into material phenomena through molecular dynamics and Monte Carlo simulations. Despite their wide applicability, a systematic evaluation of materials properties using such potentials and, especially, an easy-to-use user-interface for their comparison is still lacking. To address this deficiency, we computed energetics and elastic properties of a variety of materials such as metals and ceramics using a wide range of empirical potentials and compared them to density functional theory (DFT) as well as to experimental data, where available. The database currently consists of 3248 entries including energetics and elastic property calculations, and it is still increasing. We also include computational tools for convex-hull plots for DFT and FF calculations. The data covers 1471 materials and 116 force-fields. In addition, both the complete database and the software coding used in the process have been released for public use online (presently at http://www.ctcms.nist.gov/~knc6/periodic.html) in a user-friendly way designed to enable further material design and discovery. PMID:28140407
Banerjee, Shyamashree; Gupta, Parth Sarthi Sen; Nayek, Arnab; Das, Sunit; Sur, Vishma Pratap; Seth, Pratyay; Islam, Rifat Nawaz Ul; Bandyopadhyay, Amal K
2015-01-01
Automated genome sequencing is enriching sequence databases very rapidly. To achieve a balance between the entry of sequences into the database and their analyses, efficient software is required. To this end PHYSICO2, compared with the earlier PHYSICO and other public-domain tools, is most efficient in that it i] extracts physicochemical, window-dependent and homologous-position-based-substitution (PWS) properties, including positional and BLOCK-specific diversity and conservation, ii] provides users with optional flexibility in setting relevant input parameters, iii] helps users prepare the BLOCK-FASTA file by use of the Automated Block Preparation Tool of the program, iv] performs fast, accurate and user-friendly analyses and v] writes itemized outputs in Excel format along with detailed methodology. The program package contains documentation describing the application of methods. Overall the program acts as an efficient PWS analyzer and finds application in sequence bioinformatics. PHYSICO2 is freely available at http://sourceforge.net/projects/physico2/ along with its documentation at https://sourceforge.net/projects/physico2/files/Documentation.pdf/download for all users.
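As a hedged sketch of one of the positional analyses such a tool reports (per-column conservation within a BLOCK of aligned sequences), the code below computes Shannon entropy and a simple conservation score for each alignment column; it is illustrative only and does not reproduce PHYSICO2's exact definitions.

    import math
    from collections import Counter

    def column_conservation(aligned_seqs):
        """Per-column Shannon entropy and conservation for equal-length aligned sequences."""
        length = len(aligned_seqs[0])
        results = []
        for i in range(length):
            column = [s[i] for s in aligned_seqs if s[i] != "-"]   # ignore gap characters
            counts = Counter(column)
            total = sum(counts.values())
            entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
            max_entropy = math.log2(20)                            # 20 amino acids
            results.append({"position": i + 1,
                            "entropy": entropy,
                            "conservation": 1.0 - entropy / max_entropy})
        return results

    # Toy BLOCK of three aligned sequences (made-up residues).
    block = ["MKVLAT-GR",
             "MKVLAS-GR",
             "MKILATQGR"]
    for row in column_conservation(block):
        print(row["position"], f"{row['conservation']:.2f}")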
Banerjee, Shyamashree; Gupta, Parth Sarthi Sen; Nayek, Arnab; Das, Sunit; Sur, Vishma Pratap; Seth, Pratyay; Islam, Rifat Nawaz Ul; Bandyopadhyay, Amal K
2015-01-01
Automated genome sequencing is enriching sequence databases very rapidly. To achieve a balance between the entry of sequences into the database and their analyses, efficient software is required. To this end PHYSICO2, compared with the earlier PHYSICO and other public-domain tools, is most efficient in that it i] extracts physicochemical, window-dependent and homologous-position-based-substitution (PWS) properties, including positional and BLOCK-specific diversity and conservation, ii] provides users with optional flexibility in setting relevant input parameters, iii] helps users prepare the BLOCK-FASTA file by use of the Automated Block Preparation Tool of the program, iv] performs fast, accurate and user-friendly analyses and v] writes itemized outputs in Excel format along with detailed methodology. The program package contains documentation describing the application of methods. Overall the program acts as an efficient PWS analyzer and finds application in sequence bioinformatics. Availability: PHYSICO2 is freely available at http://sourceforge.net/projects/physico2/ along with its documentation at https://sourceforge.net/projects/physico2/files/Documentation.pdf/download for all users. PMID:26339154
DyNAVacS: an integrative tool for optimized DNA vaccine design.
Harish, Nagarajan; Gupta, Rekha; Agarwal, Parul; Scaria, Vinod; Pillai, Beena
2006-07-01
DNA vaccines have slowly emerged as keystones in preventive immunology due to their versatility in inducing both cell-mediated as well as humoral immune responses. The design of an efficient DNA vaccine involves the choice of a suitable expression vector, ensuring optimal expression by codon optimization, engineering CpG motifs for enhancing immune responses, and providing additional sequence signals for efficient translation. DyNAVacS is a web-based tool created for rapid and easy design of DNA vaccines. It follows a step-wise design flow, which guides the user through the various sequential steps in the design of the vaccine. Further, it allows restriction enzyme mapping, design of primers spanning user-specified sequences and provides information regarding the vectors currently used for generation of DNA vaccines. The web version uses the Apache HTTP server. The interface was written in HTML and utilizes Common Gateway Interface scripts written in PERL for functionality. DyNAVacS is an integrated tool consisting of user-friendly programs, which require minimal information from the user. The software is available free of cost, as a web-based application at URL: http://miracle.igib.res.in/dynavac/.
Pandey, Ram Vinay; Kofler, Robert; Orozco-terWengel, Pablo; Nolte, Viola; Schlötterer, Christian
2011-03-02
The enormous potential of natural variation for the functional characterization of genes has been neglected for a long time. Only recently have functional geneticists started to account for natural variation in their analyses. With the new sequencing technologies it has become feasible to collect sequence information for multiple individuals on a genomic scale. In particular, sequencing pooled DNA samples has been shown to provide a cost-effective approach for characterizing variation in natural populations. While a range of software tools have been developed for mapping these reads onto a reference genome and extracting SNPs, linking this information to population genetic estimators and functional information still poses a major challenge to many researchers. We developed PoPoolation DB, a user-friendly integrated database. PoPoolation DB links variation in natural populations with functional information, allowing a wide range of researchers to take advantage of population genetic data. PoPoolation DB provides the user with population genetic parameters (Watterson's θ or Tajima's π), Tajima's D, SNPs, allele frequencies and indels in regions of interest. The database can be queried by gene name, chromosomal position, or a user-provided query sequence or GTF file. We anticipate that PoPoolation DB will be a highly versatile tool for functional geneticists as well as evolutionary biologists. PoPoolation DB, available at http://www.popoolation.at/pgt, provides an integrated platform for researchers to investigate natural polymorphism and associated functional annotations from the UCSC and Flybase genome browsers, population genetic estimators and RNA-seq information.
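As a hedged sketch of the window-based estimators PoPoolation DB reports, the code below computes Watterson's theta and Tajima's pi per site from pooled SNP allele counts; it is deliberately simplified (no pool-size correction or coverage weighting) and is not the PoPoolation implementation.

    def wattersons_theta(num_snps, n_chromosomes, window_len):
        """Watterson's theta per site: S / (a_n * L), with a_n = sum_{i=1}^{n-1} 1/i."""
        a_n = sum(1.0 / i for i in range(1, n_chromosomes))
        return num_snps / (a_n * window_len)

    def tajimas_pi(snp_allele_counts, n_chromosomes, window_len):
        """Nucleotide diversity per site from per-SNP (minor, major) allele counts."""
        pi = 0.0
        n = n_chromosomes
        for minor, major in snp_allele_counts:
            p = minor / (minor + major)
            pi += 2.0 * p * (1.0 - p) * n / (n - 1)   # unbiased heterozygosity per SNP
        return pi / window_len

    # Illustrative window: 3 SNPs in a 1,000 bp window, pool of 40 chromosomes.
    snps = [(12, 28), (5, 35), (20, 20)]
    print("theta_W per site:", wattersons_theta(len(snps), 40, 1000))
    print("pi per site     :", tajimas_pi(snps, 40, 1000))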
Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.
2005-01-01
Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.
Spotsizer: High-throughput quantitative analysis of microbial growth.
Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg
2016-10-01
Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.
Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola
2012-01-01
Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers from aspiring to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible user-friendly tool to search for, define and describe the meanings of key terms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.
Chang, Hui-Yin; Chen, Ching-Tai; Lih, T. Mamie; Lynn, Ke-Shiuan; Juo, Chiun-Gung; Hsu, Wen-Lian; Sung, Ting-Yi
2016-01-01
Efficient and accurate quantitation of metabolites from LC-MS data has become an important topic. Here we present an automated tool, called iMet-Q (intelligent Metabolomic Quantitation), for label-free metabolomics quantitation from high-throughput MS1 data. By performing peak detection and peak alignment, iMet-Q provides a summary of quantitation results and reports ion abundance at both replicate level and sample level. Furthermore, it gives the charge states and isotope ratios of detected metabolite peaks to facilitate metabolite identification. An in-house standard mixture and a public Arabidopsis metabolome data set were analyzed by iMet-Q. Three public quantitation tools, including XCMS, MetAlign, and MZmine 2, were used for performance comparison. From the mixture data set, seven standard metabolites were detected by the four quantitation tools, for which iMet-Q had a smaller quantitation error of 12% in both profile and centroid data sets. Our tool also correctly determined the charge states of seven standard metabolites. By searching the mass values for those standard metabolites against Human Metabolome Database, we obtained a total of 183 metabolite candidates. With the isotope ratios calculated by iMet-Q, 49% (89 out of 183) metabolite candidates were filtered out. From the public Arabidopsis data set reported with two internal standards and 167 elucidated metabolites, iMet-Q detected all of the peaks corresponding to the internal standards and 167 metabolites. Meanwhile, our tool had small abundance variation (≤0.19) when quantifying the two internal standards and had higher abundance correlation (≥0.92) when quantifying the 167 metabolites. iMet-Q provides user-friendly interfaces and is publicly available for download at http://ms.iis.sinica.edu.tw/comics/Software_iMet-Q.html. PMID:26784691
Guilbeault, Peggy; Momtahan, Kathryn; Hudson, Jordan
2015-01-01
In an effort by The Ottawa Hospital (TOH) to become one of the top 10% performers in patient safety and quality of care, the hospital embarked on improving the communication process during handover between physicians by building an electronic handover tool. It is expected that this tool will decrease information loss during handover. The Information Systems (IS) department engaged a workgroup of physicians to become involved in defining requirements to build an electronic handover tool that suited their clinical handover needs. This group became ultimately responsible for defining the graphical user interface (GUI) and all functionality related to the tool. Prior to the pilot, the Information Systems team will run a usability testing session to ensure the application is user friendly and has met the goals and objectives of the workgroup. As a result, The Ottawa Hospital has developed a fully integrated electronic handover tool built on the Clinical Mobile Application (CMA) which allows clinicians to enter patient problems, notes and tasks available to all physicians to facilitate the handover process.
Haak, Maria; Slaug, Björn; Oswald, Frank; Schmidt, Steven M.; Rimland, Joseph M.; Tomsone, Signe; Ladö, Thomas; Svensson, Torbjörn; Iwarsson, Susanne
2015-01-01
To develop an innovative information and communication technology (ICT) tool intended to help older people in their search for optimal housing solutions, a first step in the development process is to gain knowledge from the intended users. Thus the aim of this study was to deepen the knowledge about needs and expectations about housing options as expressed and prioritized by older people, people ageing with disabilities and professionals. A participatory design focus was adopted; 26 people with a range of functional limitations representing the user perspective and 15 professionals with a variety of backgrounds, participated in research circles that were conducted in four European countries. An additional 20 experts were invited as guests to the different research circle meetings. Three themes illustrating cross-national user priorities for housing provision and accessibility were identified: “Information barrier: accessible housing”, “Information barrier: housing adaptation benefits”, and “Cost barrier: housing adaptations”. In conclusion, early user involvement and identification of cross-national differences in priorities and housing options will strengthen the development of a user-friendly ICT tool that can empower older people and people with disabilities to be more active consumers regarding housing provision. PMID:25739003
MuSim, a Graphical User Interface for Multiple Simulation Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland
2016-06-01
MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.
FASTER - A tool for DSN forecasting and scheduling
NASA Technical Reports Server (NTRS)
Werntz, David; Loyola, Steven; Zendejas, Silvino
1993-01-01
FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user friendly. FASTER makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not as yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended and increasing the quality of analysis.
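As a hedged, much-simplified stand-in for the dynamic-programming pass described above, the sketch below solves weighted interval scheduling for a single resource, where each request's value could come from the first-pass utilization profile; it is not the actual FASTER algorithm.

    from bisect import bisect_right

    def schedule_requests(requests):
        """Weighted interval scheduling by dynamic programming.

        requests: list of (start, end, value) tuples for a single resource (antenna).
        Returns (best_total_value, chosen_requests).
        """
        reqs = sorted(requests, key=lambda r: r[1])          # sort by end time
        ends = [r[1] for r in reqs]
        n = len(reqs)
        best = [0.0] * (n + 1)
        take = [False] * n
        for i, (start, end, value) in enumerate(reqs):
            j = bisect_right(ends, start, 0, i)              # last request ending <= start
            with_i = best[j] + value
            if with_i > best[i]:
                best[i + 1], take[i] = with_i, True
            else:
                best[i + 1] = best[i]
        # Backtrack to recover the chosen set of requests.
        chosen, i = [], n - 1
        while i >= 0:
            if take[i]:
                chosen.append(reqs[i])
                i = bisect_right(ends, reqs[i][0], 0, i) - 1
            else:
                i -= 1
        return best[n], list(reversed(chosen))

    value, plan = schedule_requests([(0, 4, 3.0), (3, 6, 5.0), (5, 9, 4.5), (8, 10, 2.0)])
    print(value, plan)   # picks the non-overlapping set with the highest total value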
Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.
Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn
2015-03-20
USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.
Rigbolt, Kristoffer T G; Vanselow, Jens T; Blagoev, Blagoy
2011-08-01
Recent technological advances have made it possible to identify and quantify thousands of proteins in a single proteomics experiment. As a result of these developments, the analysis of data has become the bottleneck of proteomics experiments. To provide the proteomics community with a user-friendly platform for comprehensive analysis, inspection and visualization of quantitative proteomics data we developed the Graphical Proteomics Data Explorer (GProX). The program requires no special bioinformatics training, as all functions of GProX are accessible within its graphical user-friendly interface which will be intuitive to most users. Basic features facilitate the uncomplicated management and organization of large data sets and complex experimental setups as well as the inspection and graphical plotting of quantitative data. These are complemented by readily available high-level analysis options such as database querying, clustering based on abundance ratios, feature enrichment tests for e.g. GO terms and pathway analysis tools. A number of plotting options for visualization of quantitative proteomics data are available and most analysis functions in GProX create customizable high quality graphical displays in both vector and bitmap formats. The generic import requirements allow data originating from essentially all mass spectrometry platforms, quantitation strategies and software to be analyzed in the program. GProX represents a powerful approach to proteomics data analysis providing proteomics experimenters with a toolbox for bioinformatics analysis of quantitative proteomics data. The program is released as open-source and can be freely downloaded from the project webpage at http://gprox.sourceforge.net.
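As a hedged sketch of one GProX-style analysis, clustering proteins by their log2 abundance-ratio profiles, the code below applies k-means from scipy to synthetic ratio profiles; the data and cluster count are illustrative and this is not GProX's internal implementation.

    import numpy as np
    from scipy.cluster.vq import kmeans2

    # Illustrative log2 ratio profiles (proteins x time points); in practice these
    # would be imported from the quantitative proteomics search-engine output.
    rng = np.random.default_rng(0)
    up   = rng.normal(loc=[0.0, 1.0, 2.0, 2.5], scale=0.3, size=(40, 4))
    down = rng.normal(loc=[0.0, -0.8, -1.5, -2.0], scale=0.3, size=(40, 4))
    flat = rng.normal(loc=[0.0, 0.0, 0.0, 0.0], scale=0.3, size=(40, 4))
    profiles = np.vstack([up, down, flat])

    # Cluster the ratio profiles into three groups (k chosen by the analyst).
    centroids, labels = kmeans2(profiles, 3, minit="points")

    for cluster in range(3):
        members = profiles[labels == cluster]
        print(f"cluster {cluster}: {len(members)} proteins, "
              f"mean profile {np.round(members.mean(axis=0), 2)}")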
Rigbolt, Kristoffer T. G.; Vanselow, Jens T.; Blagoev, Blagoy
2011-01-01
Recent technological advances have made it possible to identify and quantify thousands of proteins in a single proteomics experiment. As a result of these developments, the analysis of data has become the bottleneck of proteomics experiments. To provide the proteomics community with a user-friendly platform for comprehensive analysis, inspection and visualization of quantitative proteomics data we developed the Graphical Proteomics Data Explorer (GProX). The program requires no special bioinformatics training, as all functions of GProX are accessible within its graphical user-friendly interface which will be intuitive to most users. Basic features facilitate the uncomplicated management and organization of large data sets and complex experimental setups as well as the inspection and graphical plotting of quantitative data. These are complemented by readily available high-level analysis options such as database querying, clustering based on abundance ratios, feature enrichment tests for e.g. GO terms and pathway analysis tools. A number of plotting options for visualization of quantitative proteomics data are available and most analysis functions in GProX create customizable high quality graphical displays in both vector and bitmap formats. The generic import requirements allow data originating from essentially all mass spectrometry platforms, quantitation strategies and software to be analyzed in the program. GProX represents a powerful approach to proteomics data analysis providing proteomics experimenters with a toolbox for bioinformatics analysis of quantitative proteomics data. The program is released as open-source and can be freely downloaded from the project webpage at http://gprox.sourceforge.net. PMID:21602510
NASA Astrophysics Data System (ADS)
Pagliarone, C. E.; Uttaro, S.; Cappelli, L.; Fallone, M.; Kartal, S.
2017-02-01
CAT, Cryogenic Analysis Tools, is a software package developed using the LabVIEW and ROOT environments to analyze the performance of large-size cryostats, where many parameters, inputs, and control variables need to be acquired and studied at the same time. The present paper describes how CAT works and the main improvements achieved in the new version, CAT 2. New graphical user interfaces have been developed in order to make the use of the full package more user-friendly, and a process of resource optimization has been carried out. Offline analysis of the full cryostat performance is available both through the ROOT command-line interface and through the new graphical interfaces.
Rees, Tom
2002-01-01
East Jefferson General Hospital in Metairie, La., launched a new Web site in October 2001. Its user-friendly home page offers links to hospital services, medical staff, and employer information. Its jobline is a powerful tool for recruitment. The site was awarded the 2002 Pelican Award for Best Consumer Web site by the Louisiana Society for Hospital Public Relations & Marketing.
ERIC Educational Resources Information Center
Marques, Bertil P.; Carvalho, Piedade; Escudeiro, Paula; Barata, Ana; Silva, Ana; Queiros, Sandra
2017-01-01
Promoted by the significant increase of large scale internet access, many audiences have turned to the web and to its resources for learning and inspiration, with diverse sets of skills and intents. In this context, Multimedia Online Open Courses (MOOC) consist in learning models supported on user-friendly web tools that allow anyone with minimum…
Web-based GIS: the vector-borne disease airline importation risk (VBD-AIR) tool
2012-01-01
Background Over the past century, the size and complexity of the air travel network has increased dramatically. Nowadays, there are 29.6 million scheduled flights per year and around 2.7 billion passengers are transported annually. The rapid expansion of the network increasingly connects regions of endemic vector-borne disease with the rest of the world, resulting in challenges to health systems worldwide in terms of vector-borne pathogen importation and disease vector invasion events. Here we describe the development of a user-friendly Web-based GIS tool: the Vector-Borne Disease Airline Importation Risk Tool (VBD-AIR), to help better define the roles of airports and airlines in the transmission and spread of vector-borne diseases. Methods Spatial datasets on modeled global disease and vector distributions, as well as climatic and air network traffic data were assembled. These were combined to derive relative risk metrics via air travel for imported infections, imported vectors and onward transmission, and incorporated into a three-tier server architecture in a Model-View-Controller framework with distributed GIS components. A user-friendly web-portal was built that enables dynamic querying of the spatial databases to provide relevant information. Results The VBD-AIR tool constructed enables the user to explore the interrelationships among modeled global distributions of vector-borne infectious diseases (malaria, dengue, yellow fever and chikungunya) and international air service routes to quantify seasonally changing risks of vector and vector-borne disease importation and spread by air travel, forming an evidence base to help plan mitigation strategies. The VBD-AIR tool is available at http://www.vbd-air.com. Conclusions VBD-AIR supports a data flow that generates analytical results from disparate but complementary datasets into an organized cartographical presentation on a web map for the assessment of vector-borne disease movements on the air travel network. The framework built provides a flexible and robust informatics infrastructure by separating the modules of functionality through an ontological model for vector-borne disease. The VBD-AIR tool is designed as an evidence base for visualizing the risks of vector-borne disease by air travel for a wide range of users, including planners and decision makers based in state and local government, and in particular, those at international and domestic airports tasked with planning for health risks and allocating limited resources. PMID:22892045
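As a hedged sketch of the kind of relative importation-risk metric a VBD-AIR-style tool derives, the code below multiplies route passenger volume by origin prevalence and destination climatic suitability for a given month and normalizes across routes; the field names and numbers are assumptions, not the VBD-AIR model.

    def relative_importation_risk(routes, month):
        """Rank air routes by a crude vector-borne disease importation index.

        routes: list of dicts with keys origin, destination,
                monthly_passengers (dict month -> count),
                origin_prevalence (fraction infected),
                dest_suitability (dict month -> 0..1 climatic suitability)
        """
        raw = []
        for r in routes:
            score = (r["monthly_passengers"][month]
                     * r["origin_prevalence"]
                     * r["dest_suitability"][month])
            raw.append((r["origin"], r["destination"], score))
        total = sum(s for _, _, s in raw) or 1.0
        return sorted(((o, d, s / total) for o, d, s in raw), key=lambda x: -x[2])

    # Two hypothetical routes into the same destination airport.
    routes = [
        {"origin": "A", "destination": "X", "monthly_passengers": {"Jul": 42000},
         "origin_prevalence": 0.004, "dest_suitability": {"Jul": 0.9}},
        {"origin": "B", "destination": "X", "monthly_passengers": {"Jul": 15000},
         "origin_prevalence": 0.010, "dest_suitability": {"Jul": 0.9}},
    ]
    for origin, dest, share in relative_importation_risk(routes, "Jul"):
        print(f"{origin}->{dest}: {share:.2f} of modeled importation risk")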
Web-based GIS: the vector-borne disease airline importation risk (VBD-AIR) tool.
Huang, Zhuojie; Das, Anirrudha; Qiu, Youliang; Tatem, Andrew J
2012-08-14
Over the past century, the size and complexity of the air travel network has increased dramatically. Nowadays, there are 29.6 million scheduled flights per year and around 2.7 billion passengers are transported annually. The rapid expansion of the network increasingly connects regions of endemic vector-borne disease with the rest of the world, resulting in challenges to health systems worldwide in terms of vector-borne pathogen importation and disease vector invasion events. Here we describe the development of a user-friendly Web-based GIS tool: the Vector-Borne Disease Airline Importation Risk Tool (VBD-AIR), to help better define the roles of airports and airlines in the transmission and spread of vector-borne diseases. Spatial datasets on modeled global disease and vector distributions, as well as climatic and air network traffic data were assembled. These were combined to derive relative risk metrics via air travel for imported infections, imported vectors and onward transmission, and incorporated into a three-tier server architecture in a Model-View-Controller framework with distributed GIS components. A user-friendly web-portal was built that enables dynamic querying of the spatial databases to provide relevant information. The VBD-AIR tool constructed enables the user to explore the interrelationships among modeled global distributions of vector-borne infectious diseases (malaria, dengue, yellow fever and chikungunya) and international air service routes to quantify seasonally changing risks of vector and vector-borne disease importation and spread by air travel, forming an evidence base to help plan mitigation strategies. The VBD-AIR tool is available at http://www.vbd-air.com. VBD-AIR supports a data flow that generates analytical results from disparate but complementary datasets into an organized cartographical presentation on a web map for the assessment of vector-borne disease movements on the air travel network. The framework built provides a flexible and robust informatics infrastructure by separating the modules of functionality through an ontological model for vector-borne disease. The VBD-AIR tool is designed as an evidence base for visualizing the risks of vector-borne disease by air travel for a wide range of users, including planners and decision makers based in state and local government, and in particular, those at international and domestic airports tasked with planning for health risks and allocating limited resources.
Jobs and Economic Development Impact (JEDI) Model Geothermal User Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, C.; Augustine, C.; Goldberg, M.
2012-09-01
The Geothermal Jobs and Economic Development Impact (JEDI) model, developed through the National Renewable Energy Laboratory (NREL), is an Excel-based, user-friendly tool that estimates the economic impacts of constructing and operating hydrothermal and Enhanced Geothermal System (EGS) power generation projects at the local level; JEDI models exist for a range of conventional and renewable energy technologies. The JEDI Model Geothermal User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted.
An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)
NASA Technical Reports Server (NTRS)
Pratt, B. S.; Pratt, D. T.
1984-01-01
A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program is developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
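EQLBRM performs a full equilibrium-composition calculation; as a much simpler, hedged illustration of the underlying energy balance, the sketch below estimates a constant-pressure adiabatic flame temperature from the fuel's lower heating value and a mean product specific heat, ignoring dissociation (which the real code accounts for). The default property values are illustrative assumptions.

    def adiabatic_flame_temperature(afr, lhv_fuel=43.0e6, cp_products=1200.0, t_inlet=298.0):
        """Rough constant-pressure adiabatic flame temperature, no dissociation.

        afr         : air/fuel mass ratio (lean of stoichiometric for this simple form)
        lhv_fuel    : fuel lower heating value, J/kg (default ~ kerosene-like, assumed)
        cp_products : mean specific heat of products, J/(kg K) (assumed constant)
        t_inlet     : inlet air/fuel temperature, K
        """
        # Heat released per kg of fuel warms (1 + afr) kg of products.
        delta_t = lhv_fuel / ((1.0 + afr) * cp_products)
        return t_inlet + delta_t

    for afr in (15.0, 30.0, 60.0):
        print(f"AFR {afr:4.0f}: ~{adiabatic_flame_temperature(afr):.0f} K")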
Arneson, Douglas; Bhattacharya, Anindya; Shu, Le; Mäkinen, Ville-Petteri; Yang, Xia
2016-09-09
Human diseases are commonly the result of multidimensional changes at molecular, cellular, and systemic levels. Recent advances in genomic technologies have enabled an outpour of omics datasets that capture these changes. However, separate analyses of these various data only provide fragmented understanding and do not capture the holistic view of disease mechanisms. To meet the urgent needs for tools that effectively integrate multiple types of omics data to derive biological insights, we have developed Mergeomics, a computational pipeline that integrates multidimensional disease association data with functional genomics and molecular networks to retrieve biological pathways, gene networks, and central regulators critical for disease development. To make the Mergeomics pipeline available to a wider research community, we have implemented an online, user-friendly web server (http://mergeomics.idre.ucla.edu/). The web server features a modular implementation of the Mergeomics pipeline with detailed tutorials. Additionally, it provides curated genomic resources including tissue-specific expression quantitative trait loci, ENCODE functional annotations, biological pathways, and molecular networks, and offers interactive visualization of analytical results. Multiple computational tools including Marker Dependency Filtering (MDF), Marker Set Enrichment Analysis (MSEA), Meta-MSEA, and Weighted Key Driver Analysis (wKDA) can be used separately or in flexible combinations. User-defined summary-level genomic association datasets (e.g., genetic, transcriptomic, epigenomic) related to a particular disease or phenotype can be uploaded and computed real-time to yield biologically interpretable results, which can be viewed online and downloaded for later use. Our Mergeomics web server offers researchers flexible and user-friendly tools to facilitate integration of multidimensional data into holistic views of disease mechanisms in the form of tissue-specific key regulators, biological pathways, and gene networks.
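As a hedged sketch of the set-enrichment idea behind MSEA-style analyses, the code below scores the over-representation of disease-associated genes in a pathway with a hypergeometric test from scipy; the gene lists are hypothetical and this is not the Mergeomics implementation, which uses its own statistics and marker-level corrections.

    from scipy.stats import hypergeom

    def pathway_enrichment(disease_genes, pathway_genes, background_size):
        """Hypergeometric over-representation P-value for one pathway."""
        disease = set(disease_genes)
        pathway = set(pathway_genes)
        overlap = len(disease & pathway)
        # P(X >= overlap) when drawing len(disease) genes from a background that
        # contains len(pathway) pathway members.
        p = hypergeom.sf(overlap - 1, background_size, len(pathway), len(disease))
        return overlap, p

    # Illustrative inputs (hypothetical gene symbols, background of 20,000 genes).
    disease_genes = ["LDLR", "APOB", "PCSK9", "SORT1", "ABCA1", "HMGCR", "CETP"]
    pathway_genes = ["LDLR", "APOB", "PCSK9", "HMGCR", "NPC1L1", "APOE", "LPL", "CETP"]
    overlap, p = pathway_enrichment(disease_genes, pathway_genes, background_size=20000)
    print(f"{overlap} shared genes, hypergeometric P = {p:.2e}")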
Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash
A methodology has been developed to determine the impacts of ISO 50001 Energy Management System (EnMS) at a region or country level. The impacts of ISO 50001 EnMS include energy, CO2 emissions, and cost savings. This internationally recognized and transparent methodology has been embodied in a user friendly Microsoft Excel® based tool called ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical in order to get accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.
Open Source Live Distributions for Computer Forensics
NASA Astrophysics Data System (ADS)
Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele
Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools, whose use is not always focused on the task at hand and often requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment), that contains a collection of tools wrapped up in a user-friendly environment. The CAINE forensic framework introduces important novel features aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that guides digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.
Army Energy and Water Reporting System Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deprez, Peggy C.; Giardinelli, Michael J.; Burke, John S.
There are many areas of desired improvement for the Army Energy and Water Reporting System (AEWRS). The purpose of the system is to serve as a data repository for collecting information from energy managers, which is then compiled into an annual energy report. This document summarizes reported shortcomings of the system and provides several alternative approaches for improving application usability and adding functionality. The U.S. Army has been using AEWRS for many years to collect and compile energy data from installations to facilitate compliance with Federal and Department of Defense energy management program reporting requirements. In this analysis, staff from Pacific Northwest National Laboratory found that substantial opportunities exist to expand AEWRS functions to better assist the Army in effectively managing energy programs. Army leadership must decide whether it wants to invest in expanding AEWRS capabilities as a web-based, enterprise-wide tool for improving the Army Energy and Water Management Program or simply maintain a bottom-up reporting tool. This report looks at improving system functionality and user-friendliness from an operational perspective, and at the system as a means of increasing program effectiveness. The authors recommend that making the system easier for energy managers to input accurate data be the top priority for improving AEWRS, followed by improved reporting. The AEWRS user interface is dated and not user friendly, and a new system is recommended. While relatively minor improvements could make the existing system easier to use, significant improvements will be achieved with a user-friendly interface, new architecture, and a design that permits scalability and reliability. An expanded data set would naturally require additional requirements gathering and a focus on integration with other existing data sources, thus minimizing manually entered data.
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
Inferring transposons activity chronology by TRANScendence - TEs database and de-novo mining tool.
Startek, Michał Piotr; Nogły, Jakub; Gromadka, Agnieszka; Grzebelus, Dariusz; Gambin, Anna
2017-10-16
Constant progress in sequencing technology leads to ever-increasing amounts of genomic data. In the light of current evidence, transposable elements (TEs for short) are becoming useful tools for learning about the evolution of the host genome. Therefore, software for genome-wide detection and analysis of TEs is of great interest. Here we describe a computational tool for mining, classifying and storing TEs from newly sequenced genomes. This is an online, web-based, user-friendly service that enables users to upload their own genomic data and perform de-novo searches for TEs. The detected TEs are automatically analyzed, compared to reference databases, annotated, clustered into families, and stored in a TE repository. Also, the genome-wide nesting structure of the detected elements is analyzed by a new method for inferring the evolutionary history of TEs. We illustrate the functionality of our tool by performing a full-scale analysis of the TE landscape in the Medicago truncatula genome. TRANScendence is an effective tool for the de-novo annotation and classification of transposable elements in newly acquired genomes. Its streamlined interface makes it well-suited for evolutionary studies.
P-TRAP: a Panicle TRAit Phenotyping tool.
A L-Tam, Faroq; Adam, Helene; Anjos, António dos; Lorieux, Mathias; Larmande, Pierre; Ghesquière, Alain; Jouannic, Stefan; Shahbazkia, Hamid Reza
2013-08-29
In crops, inflorescence complexity and the shape and size of the seed are among the most important characters that influence yield. For example, rice panicles vary considerably in the number and order of branches, elongation of the axis, and the shape and size of the seed. Manual low-throughput phenotyping methods are time-consuming, and the results are unreliable. However, high-throughput image analysis of the qualitative and quantitative traits of rice panicles is essential for understanding the diversity of the panicle as well as for breeding programs. This paper presents P-TRAP software (Panicle TRAit Phenotyping), a free open source application for high-throughput measurements of panicle architecture and seed-related traits. The software is written in Java and can be used with different platforms (the user-friendly Graphical User Interface (GUI) uses Netbeans Platform 7.3). The application offers three main tools: a tool for the analysis of panicle structure, a spikelet/grain counting tool, and a tool for the analysis of seed shape. The three tools can be used independently or simultaneously for analysis of the same image. Results are then reported in the Extensible Markup Language (XML) and Comma Separated Values (CSV) file formats. Images of rice panicles were used to evaluate the efficiency and robustness of the software. Compared to data obtained by manual processing, P-TRAP produced reliable results in a much shorter time. In addition, manual processing is not repeatable because dry panicles are vulnerable to damage. The software is practical and collects much more data than human operators can. P-TRAP is a new open source software that automatically recognizes the structure of a panicle and the seeds on the panicle in digital images. The software processes and quantifies several traits related to panicle structure, detects and counts the grains, and measures their shape parameters. In short, P-TRAP offers both efficient results and a user-friendly environment for experiments. The experimental results showed very good accuracy compared to field-operator counts, expert verification and well-known academic methods.
iDrug: a web-accessible and interactive drug discovery and design platform
2014-01-01
Background The progress in computer-aided drug design (CADD) approaches over the past decades has accelerated early-stage pharmaceutical research. Many powerful standalone tools for CADD have been developed in academia. As programs are developed by various research groups, a consistent, user-friendly online graphical working environment, combining computational techniques such as pharmacophore mapping, similarity calculation, scoring, and target identification, is needed. Results We present a versatile, user-friendly, and efficient online tool for computer-aided drug design based on pharmacophore and 3D molecular similarity searching. The web interface enables binding site detection, virtual screening hit identification, and drug target prediction in an interactive manner through a seamless interface to all adapted packages (e.g., Cavity, PocketV.2, PharmMapper, SHAFTS). Several commercially available compound databases for hit identification and a well-annotated pharmacophore database for drug target prediction were integrated into iDrug as well. The web interface provides tools for real-time molecular building/editing, converting, displaying, and analyzing. All the customized configurations of the functional modules can be accessed through the featured session files provided, which can be saved to the local disk and uploaded to resume or update previous work. Conclusions iDrug is easy to use, and provides a novel, fast and reliable tool for conducting drug design experiments. By using iDrug, various molecular design processing tasks can be submitted and visualized simply in one browser without locally installing any standalone modeling software. iDrug is accessible free of charge at http://lilab.ecust.edu.cn/idrug. PMID:24955134
NASA Astrophysics Data System (ADS)
Murrill, Steven R.; Franck, Charmaine C.; Espinola, Richard L.; Petkie, Douglas T.; De Lucia, Frank C.; Jacobs, Eddie L.
2011-11-01
The U.S. Army Research Laboratory (ARL) and the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) have developed a terahertz-band imaging system performance model/tool for detection and identification of concealed weaponry. The details of the MATLAB-based model which accounts for the effects of all critical sensor and display components, and for the effects of atmospheric attenuation, concealment material attenuation, and active illumination, were reported on at the 2005 SPIE Europe Security & Defence Symposium (Brugge). An advanced version of the base model that accounts for both the dramatic impact that target and background orientation can have on target observability as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported on at the 2007 SPIE Defense and Security Symposium (Orlando). This paper will provide a comprehensive review of an enhanced, user-friendly, Windows-executable, terahertz-band imaging system performance analysis and design tool that now includes additional features such as a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs that allow for straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures. This newly enhanced THz imaging system design tool is an extension of the advanced THz imaging system performance model that was developed under the Defense Advanced Research Project Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. This paper will also provide example system component (active-illumination source and detector) trade-study analyses using the new features of this user-friendly THz imaging system performance analysis and design tool.
Antibiogramj: A tool for analysing images from disk diffusion tests.
Alonso, C A; Domínguez, C; Heras, J; Mata, E; Pascual, V; Torres, C; Zarazaga, M
2017-05-01
Disk diffusion testing, known as antibiogram, is widely applied in microbiology to determine the antimicrobial susceptibility of microorganisms. The measurement of the diameter of the zone of growth inhibition of microorganisms around the antimicrobial disks in the antibiogram is frequently performed manually by specialists using a ruler. This is a time-consuming and error-prone task that might be simplified using automated or semi-automated inhibition zone readers. However, most readers are expensive instruments with embedded software that require significant changes in laboratory design and workflow. Based on the workflow employed by specialists to determine the antimicrobial susceptibility of microorganisms, we have designed a software tool that, from images of disk diffusion tests, semi-automatises the process. Standard computer vision techniques are employed to achieve such an automatisation. We present AntibiogramJ, a user-friendly and open-source software tool to semi-automatically determine, measure and categorise inhibition zones of images from disk diffusion tests. AntibiogramJ is implemented in Java and deals with images captured with any device that incorporates a camera, including digital cameras and mobile phones. The fully automatic procedure of AntibiogramJ for measuring inhibition zones achieves an overall agreement of 87% with an expert microbiologist; moreover, AntibiogramJ includes features to easily detect when the automatic reading is not correct and fix it manually to obtain the correct result. AntibiogramJ is a user-friendly, platform-independent, open-source, and free tool that, to the best of our knowledge, is the most complete software tool for antibiogram analysis without requiring any investment in new equipment or changes in the laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.
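A generic sketch of measuring an inhibition zone with standard computer vision primitives is shown below; it uses Python and OpenCV rather than AntibiogramJ's Java implementation, and the file name and calibration factor are hypothetical.

```python
import cv2
import numpy as np

image = cv2.imread("antibiogram_plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical photo
# Otsu thresholding separates the clear inhibition zone from the bacterial lawn
# (which region ends up white depends on lighting; invert the mask if needed).
_, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
zone = max(contours, key=cv2.contourArea)  # assume the largest blob is the zone

# Equivalent-circle diameter, converted from pixels to millimetres with a
# calibration factor obtained from the known Petri dish diameter.
diameter_px = 2.0 * np.sqrt(cv2.contourArea(zone) / np.pi)
MM_PER_PX = 0.12  # hypothetical calibration
print(f"Inhibition zone diameter: {diameter_px * MM_PER_PX:.1f} mm")
```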
The GenABEL Project for statistical genomics.
Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination.
Digital test assembly of truck parts with the IMMA-tool--an illustrative case.
Hanson, L; Högberg, D; Söderholm, M
2012-01-01
Several digital human modelling (DHM) tools have been developed for simulation and visualisation of human postures and motions. In 2010 the DHM tool IMMA (Intelligently Moving Manikins) was introduced as a DHM tool that uses advanced path planning techniques to generate collision-free and biomechanically acceptable motions for digital human models (as well as parts) in complex assembly situations. The aim of the paper is to illustrate how the IPS/IMMA tool is used at Scania CV AB in a digital test assembly process, and to compare the tool with other DHM tools on the market. The illustrated case of using the IMMA tool, here combined with the path planner tool IPS, indicates that the tool is promising. The major strengths of the tool are its user-friendly interface, the motion generation algorithms, the batch simulation of manikins and the ergonomics assessment methods that consider time.
RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.
D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano
2015-01-01
The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics or IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data according to the user's needs.
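The sketch below chains a few of the named tools from Python to show the kind of step sequencing RAP automates on its cloud back end; it assumes the tools are installed and on PATH, the input file and index names are hypothetical, and RAP's own orchestration and parameters are not reproduced.

```python
import os
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

reads = "sample_R1.fastq.gz"            # hypothetical input file
os.makedirs("qc", exist_ok=True)

run(["fastqc", "--outdir", "qc", reads])                                   # quality check
run(["tophat", "-o", "tophat_out", "genome_index", reads])                 # spliced alignment
run(["cufflinks", "-o", "cufflinks_out", "tophat_out/accepted_hits.bam"])  # transcript assembly
```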
phylo-node: A molecular phylogenetic toolkit using Node.js.
O'Halloran, Damien M
2017-01-01
Node.js is an open-source and cross-platform environment that provides a JavaScript codebase for back-end server-side applications. JavaScript has been used to develop very fast and user-friendly front-end tools for bioinformatic and phylogenetic analyses. However, no such toolkits are available using Node.js to conduct comprehensive molecular phylogenetic analysis. To address this problem, I have developed phylo-node, a stable and scalable toolkit built with Node.js that allows the user to perform diverse molecular and phylogenetic tasks. phylo-node can execute the analysis and process the resulting outputs from a suite of software options that provides tools for read processing and genome alignment, sequence retrieval, multiple sequence alignment, primer design, evolutionary modeling, and phylogeny reconstruction. Furthermore, phylo-node enables the user to deploy server-dependent applications, and also provides simple integration and interoperation with other Node modules and languages using Node inheritance patterns, and a customized piping module to support the production of diverse pipelines. phylo-node is open-source and freely available to all users without sign-up or login requirements. All source code and user guidelines are openly available at the GitHub repository: https://github.com/dohalloran/phylo-node.
Screening_mgmt: a Python module for managing screening data.
Helfenstein, Andreas; Tammela, Päivi
2015-02-01
High-throughput screening is an established technique in drug discovery and, as such, has also found its way into academia. High-throughput screening generates a considerable amount of data, which is why specific software is used for its analysis and management. The commercially available software packages are often beyond the financial limits of small-scale academic laboratories and, furthermore, lack the flexibility to fulfill certain user-specific requirements. We have developed a Python module, screening_mgmt, which is a lightweight tool for flexible data retrieval, analysis, and storage for different screening assays in one central database. The module reads custom-made analysis scripts and plotting instructions, and it offers a graphical user interface to import, modify, and display the data in a uniform manner. During the test phase, we used this module for the management of 10,000 data points of various origins. It has provided a practical, user-friendly tool for sharing and exchanging information between researchers. © 2014 Society for Laboratory Automation and Screening.
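The general workflow the module supports (normalize a plate, then push results into one central database) can be illustrated as below; this is a generic sketch with hypothetical file and column names, not the actual screening_mgmt API.

```python
import sqlite3
import pandas as pd

plate = pd.read_csv("plate_042.csv")   # hypothetical plate-reader export
neg = plate.loc[plate["well_type"] == "neg_ctrl", "signal"].mean()
pos = plate.loc[plate["well_type"] == "pos_ctrl", "signal"].mean()
plate["pct_inhibition"] = 100.0 * (neg - plate["signal"]) / (neg - pos)

# One central SQLite database shared by all assays, as in the described setup.
with sqlite3.connect("screening.db") as con:
    plate.to_sql("measurements", con, if_exists="append", index=False)
```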
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
Grapov, Dmitry; Newman, John W.
2012-01-01
Summary: Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data sets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Availability and implementation: Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010). Contact: John.Newman@ars.usda.gov Supplementary Information: Installation instructions, tutorials and users manual are available at http://sourceforge.net/projects/imdev/. PMID:22815358
Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio
2015-03-01
In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows for multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free and powerful user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.
Using the Browser for Science: A Collaborative Toolkit for Astronomy
NASA Astrophysics Data System (ADS)
Connolly, A. J.; Smith, I.; Krughoff, K. S.; Gibson, R.
2011-07-01
Astronomical surveys have yielded hundreds of terabytes of catalogs and images that span many decades of the electromagnetic spectrum. Even when observatories provide user-friendly web interfaces, exploring these data resources remains a complex and daunting task. In contrast, gadgets and widgets have become popular in social networking (e.g. iGoogle, Facebook). They provide a simple way to make complex data easily accessible that can be customized based on the interest of the user. With ASCOT (an AStronomical COllaborative Toolkit) we expand on these concepts to provide a customizable and extensible gadget framework for use in science. Unlike iGoogle, where all of the gadgets are independent, the gadgets we develop communicate and share information, enabling users to visualize and interact with data through multiple, simultaneous views. With this approach, web-based applications for accessing and visualizing data can be generated easily and, by linking these tools together, integrated and powerful data analysis and discovery tools can be constructed.
User observations on information sharing (corporate knowledge and lessons learned)
NASA Technical Reports Server (NTRS)
Montague, Ronald A.; Gregg, Lawrence A.; Martin, Shirley A.; Underwood, Leroy H.; Mcgee, John M.
1993-01-01
The sharing of 'corporate knowledge' and lessons learned in the NASA aerospace community has been identified by Johnson Space Center survey participants as a desirable tool. The concept of the program is based on creating a user friendly information system that will allow engineers, scientists, and managers at all working levels to share their information and experiences with other users irrespective of location or organization. The survey addresses potential end uses for such a system and offers some guidance on the development of subsequent processes to ensure the integrity of the information shared. This system concept will promote sharing of information between NASA centers, between NASA and its contractors, between NASA and other government agencies, and perhaps between NASA and institutions of higher learning.
GeMS: an advanced software package for designing synthetic genes.
Jayaraj, Sebastian; Reid, Ralph; Santi, Daniel V
2005-01-01
A user-friendly, advanced software package for gene design is described. The software comprises an integrated suite of programs (also provided as stand-alone tools) that automatically performs the following tasks in gene design: restriction site prediction, codon optimization for any expression host, restriction site inclusion and exclusion, separation of long sequences into synthesizable fragments, melting temperature (Tm) and stem-loop determinations, optimal oligonucleotide component design, and design verification/error-checking. The output is a complete design report and a list of optimized oligonucleotides to be prepared for subsequent gene synthesis. The user interface accommodates both inexperienced and experienced users. For inexperienced users, explanatory notes are provided such that detailed instructions are not necessary; for experienced users, a streamlined interface is provided without such notes. The software has been extensively tested in the design and successful synthesis of over 400 kb of genes, many of which exceeded 5 kb in length.
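As a toy illustration of one of these tasks, the sketch below performs codon optimization in the simplest possible way (pick the most frequent host codon per amino acid); the usage table is a small hypothetical excerpt, and GeMS itself applies many additional constraints (restriction sites, Tm, stem-loops, fragment boundaries).

```python
BEST_CODON = {  # hypothetical "most frequent codon" table for some expression host
    "M": "ATG", "K": "AAA", "L": "CTG", "S": "AGC", "*": "TAA",
}

def naive_codon_optimize(protein_seq):
    """Back-translate a protein sequence using the single preferred codon per residue."""
    return "".join(BEST_CODON[aa] for aa in protein_seq)

print(naive_codon_optimize("MKLLS*"))  # -> ATGAAACTGCTGAGCTAA
```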
Exploring a Net Centric Architecture Using the Net Warrior Airborne Early Warning and Control Node
2007-12-01
implemented in different languages. Customisation: interfaces for customising components; user-friendly customisation tools will use these interfaces... Sun Enterprise Java Beans. Customisation in the context of components is defined in [Heineman & Councill 2001, p. 42] as '...the ability of a consumer to adapt a component prior to its installation or use'. Customisation can be facilitated through the use of specialised interfaces.
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm that implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting by speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice. Copyright © 2017 Elsevier Inc. All rights reserved.
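A simplified sketch of the underlying idea (phoneme boundaries approximated by peaks in the rate of spectral change) is given below using librosa and SciPy; the published algorithm and its real-time mobile implementation differ, and the file name is hypothetical.

```python
import librosa
import numpy as np
from scipy.signal import find_peaks

y, sr = librosa.load("speech_sample.wav", sr=16000)                  # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=160)   # 10 ms frames
transition = np.linalg.norm(librosa.feature.delta(mfcc), axis=0)     # spectral change per frame

# Prominent peaks of the transition measure serve as rough phoneme boundaries.
peaks, _ = find_peaks(transition, distance=4, prominence=np.median(transition))
phonemes_per_second = len(peaks) / (len(y) / sr)
print(f"Estimated speaking rate: {phonemes_per_second:.1f} phonemes/s")
```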
Investigation and Implementation of a Tree Transformation System for User Friendly Programming.
1984-12-01
systems have become an important area of research because of their direct impact on all areas of computer science, such as software engineering... Master's thesis, Naval Postgraduate School, Monterey, CA, December 1984.
Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel
2012-04-01
High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.
Primer on the Implementation of a Pharmacy Intranet Site to Improve Department Communication
Hale, Holly J.
2013-01-01
Purpose: The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. Summary: To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Conclusion: Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success. PMID:24421523
Eruptive event generator based on the Gibson-Low magnetic configuration
NASA Astrophysics Data System (ADS)
Borovikov, D.; Sokolov, I. V.; Manchester, W. B.; Jin, M.; Gombosi, T. I.
2017-08-01
Coronal mass ejections (CMEs), a kind of energetic solar eruption, are an integral subject of space weather research. Numerical magnetohydrodynamic (MHD) modeling, which requires powerful computational resources, is one of the primary means of studying the phenomenon. As such resources become increasingly accessible, the demand grows for user-friendly tools that facilitate the process of simulating CMEs for scientific and operational purposes. The Eruptive Event Generator based on the Gibson-Low flux rope (EEGGL), a new publicly available computational model presented in this paper, is an effort to meet this demand. EEGGL allows one to compute the parameters of a model flux rope driving a CME via an intuitive graphical user interface. We provide a brief overview of the physical principles behind EEGGL and its functionality. Ways toward future improvements of the tool are outlined.
Bumm, Klaus; Zheng, Mingzhong; Bailey, Clyde; Zhan, Fenghuang; Chiriva-Internati, M; Eddlemon, Paul; Terry, Julian; Barlogie, Bart; Shaughnessy, John D
2002-02-01
Clinical GeneOrganizer (CGO) is a novel Windows-based archiving, organization and data-mining application for integrating gene expression profiling into clinical medicine. The program implements various user-friendly tools and extracts data for further statistical analysis. This software was written for Affymetrix GeneChip *.txt files, but can also be used for any other microarray-derived data. The MS-SQL server version acts as a data mart and links microarray data with clinical parameters from any other existing database, and therefore represents a valuable tool for combining gene expression analysis and clinical disease characteristics.
The Master Archive Collection Inventory (MACI)
NASA Astrophysics Data System (ADS)
Lief, C. J.; Arnfield, J.; Sprain, M.
2014-12-01
The Master Archive Collection Inventory (MACI) project at the NOAA National Climatic Data Center (NCDC) is an effort to re-inventory all digital holdings to streamline data set and product titles and update documentation to discovery-level ISO 19115-2 metadata. Subject Matter Experts (SMEs) are being identified for each of the holdings and will be responsible for creating and maintaining metadata records. New user-friendly tools are available for the SMEs to easily create and update this documentation. Updated metadata will be available for retrieval by other aggregators and discovery tools, increasing the usability of NCDC data and products.
Situational Awareness Geospatial Application (iSAGA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sher, Benjamin
Situational Awareness Geospatial Application (iSAGA) is a geospatial situational awareness software tool that uses an algorithm to extract location data from nearly any internet-based or custom data source and display it geospatially; allows user-friendly spatial analysis using custom-developed tools; searches complex Geographic Information System (GIS) databases; and accesses high-resolution imagery. iSAGA has application at the federal, state, and local levels for emergency response, consequence management, law enforcement, emergency operations, and other decision makers as a tool to provide complete, visual situational awareness using data feeds and tools selected by the individual agency or organization. Feeds may be layered and custom tools developed to uniquely suit each subscribing agency or organization. iSAGA may similarly be applied to international agencies and organizations.
Intelligent control system based on ARM for lithography tool
NASA Astrophysics Data System (ADS)
Chen, Changlong; Tang, Xiaoping; Hu, Song; Wang, Nan
2014-08-01
The control system of a traditional lithography tool is based on a PC and an MCU. The PC handles the complex algorithms and human-computer interaction and communicates with the MCU via a serial port; the MCU controls motors, electromagnetic valves, etc. This mode has shortcomings such as large volume, high power consumption, and wasted PC resources. In this paper, an embedded intelligent control system for a lithography tool, based on ARM, is presented. The control system uses an S5PV210 processor to take over the functions of the PC in a traditional lithography tool and provides good human-computer interaction through an LCD and a capacitive touch screen. Running Android 4.0.3 as the operating system, the equipment provides a clean and easy UI that makes control more user-friendly, and implements remote control and debugging, pushing product video information over the network. As a result, it is convenient for the equipment vendor to provide technical support to users. Finally, compared with a traditional lithography tool, this design removes the PC, uses hardware resources more efficiently, and reduces cost and volume. Introducing an embedded OS and Internet of Things concepts into lithography tool design is a promising development trend.
Creating the User-Friendly Library by Evaluating Patron Perception of Signage.
ERIC Educational Resources Information Center
Bosman, Ellen; Rusinek, Carol
1997-01-01
Librarians at Indiana University Northwest Library surveyed patrons on how to make the library's collection and services more accessible by improving signage. Examines the effectiveness of signage to instruct users, reduce difficulties and fears, ameliorate negative experiences, and contribute to a user-friendly environment. (AEF)
User-friendly solutions for microarray quality control and pre-processing on ArrayAnalysis.org
Eijssen, Lars M. T.; Jaillard, Magali; Adriaens, Michiel E.; Gaj, Stan; de Groot, Philip J.; Müller, Michael; Evelo, Chris T.
2013-01-01
Quality control (QC) is crucial for any scientific method producing data. Applying adequate QC introduces new challenges in the genomics field where large amounts of data are produced with complex technologies. For DNA microarrays, specific algorithms for QC and pre-processing including normalization have been developed by the scientific community, especially for expression chips of the Affymetrix platform. Many of these have been implemented in the statistical scripting language R and are available from the Bioconductor repository. However, application is hampered by lack of integrative tools that can be used by users of any experience level. To fill this gap, we developed a freely available tool for QC and pre-processing of Affymetrix gene expression results, extending, integrating and harmonizing functionality of Bioconductor packages. The tool can be easily accessed through a wizard-like web portal at http://www.arrayanalysis.org or downloaded for local use in R. The portal provides extensive documentation, including user guides, interpretation help with real output illustrations and detailed technical documentation. It assists newcomers to the field in performing state-of-the-art QC and pre-processing while offering data analysts an integral open-source package. Providing the scientific community with this easily accessible tool will allow improving data quality and reuse and adoption of standards. PMID:23620278
FunRich proteomics software analysis, let the fun begin!
Benito-Martin, Alberto; Peinado, Héctor
2015-08-01
Protein MS analysis is the preferred method for unbiased protein identification. It is normally applied to a large number of both small-scale and high-throughput studies. However, user-friendly computational tools for protein analysis are still needed. In this issue, Mathivanan and colleagues (Proteomics 2015, 15, 2597-2601) report the development of FunRich software, an open-access software that facilitates the analysis of proteomics data, providing tools for functional enrichment and interaction network analysis of genes and proteins. FunRich is a reinterpretation of proteomic software, a standalone tool combining ease of use with customizable databases, free access, and graphical representations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
OXlearn: a new MATLAB-based simulation tool for connectionist models.
Ruh, Nicolas; Westermann, Gert
2009-11-01
OXlearn is a free, platform-independent MATLAB toolbox in which standard connectionist neural network models can be set up, run, and analyzed by means of a user-friendly graphical interface. Due to its seamless integration with the MATLAB programming environment, the inner workings of the simulation tool can be easily inspected and/or extended using native MATLAB commands or components. This combination of usability, transparency, and extendability makes OXlearn an efficient tool for the implementation of basic research projects or the prototyping of more complex research endeavors, as well as for teaching. Both the MATLAB toolbox and a compiled version that does not require access to MATLAB can be downloaded from http://psych.brookes.ac.uk/oxlearn/.
NASA Astrophysics Data System (ADS)
Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.
2017-10-01
A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and since 2016 with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools" based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.
PRANAS: A New Platform for Retinal Analysis and Simulation.
Cessac, Bruno; Kornprobst, Pierre; Kraria, Selim; Nasser, Hassan; Pamplona, Daniela; Portelli, Geoffrey; Viéville, Thierry
2017-01-01
The retina encodes visual scenes by trains of action potentials that are sent to the brain via the optic nerve. In this paper, we describe new, freely accessible end-user software that helps to better understand this coding. It is called PRANAS (https://pranas.inria.fr), standing for Platform for Retinal ANalysis And Simulation. PRANAS targets neuroscientists and modelers by providing a unique set of retina-related tools. PRANAS integrates a retina simulator that allows large-scale simulations while keeping strong biological plausibility, and a toolbox for the analysis of spike train population statistics. The statistical method (entropy maximization under constraints) takes into account both spatial and temporal correlations as constraints, allowing analysis of the effects of memory on statistics. PRANAS also integrates a tool for computing and representing receptive fields in 3D (time-space). All these tools are accessible through a friendly graphical user interface. The most CPU-costly of them have been implemented to run in parallel.
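The entropy-maximization principle mentioned above can be stated in its standard textbook form as follows; PRANAS's spatio-temporal constraints are richer than this generic formulation.

```latex
% Among all distributions P over spike patterns \omega that reproduce the measured
% averages of chosen observables f_k (firing rates, spatial and delayed pairwise
% correlations, ...), select the one of maximal entropy:
\max_{P}\; S[P] = -\sum_{\omega} P(\omega)\,\log P(\omega)
\quad\text{s.t.}\quad
\sum_{\omega} P(\omega)\, f_k(\omega) = \langle f_k \rangle_{\mathrm{data}} ,
\qquad\text{whose solution is the Gibbs form}\quad
P(\omega) = \frac{1}{Z}\exp\!\Big(\sum_k \lambda_k\, f_k(\omega)\Big).
```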
Evaluation of interaction dynamics of concurrent processes
NASA Astrophysics Data System (ADS)
Sobecki, Piotr; Białasiewicz, Jan T.; Gross, Nicholas
2017-03-01
The purpose of this paper is to present wavelet tools that enable the detection of temporal interactions of concurrent processes. In particular, the determination of the interaction coherence of time-varying signals is achieved using a complex continuous wavelet transform. The paper uses an electrocardiogram (ECG) and seismocardiogram (SCG) data set to demonstrate multiple continuous wavelet analysis techniques based on the Morlet wavelet transform. A MATLAB Graphical User Interface (GUI), developed in the reported research to assist in quick and simple data analysis, is presented. These software tools can discover the interaction dynamics of time-varying signals, hence they can reveal their correlation in phase and amplitude, as well as their non-linear interconnections. The user-friendly MATLAB GUI enables effective use of the developed software: the user can load the two processes under investigation, choose the required processing parameters, and then perform the analysis. The software is a useful tool for researchers who need to investigate the interaction dynamics of concurrent processes.
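A minimal sketch of this type of analysis (complex Morlet CWT of two concurrent signals and their time-frequency phase relation) is shown below using PyWavelets; the paper's MATLAB GUI, data set, and coherence estimator are not reproduced, and the toy signals are stand-ins for ECG/SCG.

```python
import numpy as np
import pywt

fs = 250.0                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)                                         # toy "ECG"
scg = np.sin(2 * np.pi * 1.2 * t + 0.8) + 0.1 * np.random.randn(t.size)   # toy "SCG"

wavelet = "cmor1.5-1.0"                      # complex Morlet wavelet
freqs = np.linspace(0.5, 10, 40)             # Hz
scales = pywt.central_frequency(wavelet) * fs / freqs

Wx, _ = pywt.cwt(ecg, scales, wavelet, sampling_period=1 / fs)
Wy, _ = pywt.cwt(scg, scales, wavelet, sampling_period=1 / fs)

cross = Wx * np.conj(Wy)                     # cross-wavelet spectrum
phase_diff = np.angle(cross)                 # time- and frequency-resolved phase lag
print(phase_diff.shape)                      # (n_scales, n_samples)
```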
HyCAW: Hydrological Climate change Adaptation Wizard
NASA Astrophysics Data System (ADS)
Bagli, Stefano; Mazzoli, Paolo; Broccoli, Davide; Luzzi, Valerio
2016-04-01
Changes in temporal and total water availability due to hydrologic and climate change require an efficient use of resources through the selection of the best adaptation options. HyCAW provides a novel service to users willing or needing to adapt to hydrological change by turning available scientific information into a user-friendly online wizard that allows users to: • evaluate the monthly reduction of water availability induced by climate change; • select the best adaptation options and visualize the benefits in terms of water balance and cost reduction; • quantify the potential for water saving by improving water use efficiency. The tool entails knowledge of the intra-annual distribution of available surface and groundwater flows at a site under present and future (climate change) scenarios. This information is extracted from long-term scenario simulations of the E-HYPE (European hydrological predictions for the environment) model from the Swedish Meteorological and Hydrological Institute, to quantify the expected evolution in water availability (e.g. percent reduction of soil infiltration and aquifer recharge; relative seasonal shift of runoff from summer to winter in mountain areas; etc.). Users are asked to input their actual water supply on a monthly basis, from both surface and groundwater sources. Appropriate decision trees and an embedded precompiled database of water-saving technologies for different sectors (household, agriculture, industry, tourism) lead them to interactively identify good practices for water saving/recycling/harvesting that they may implement in their specific context. Thanks to this service, users are not required to have a detailed understanding of either the data or the hydrological processes, but may benefit from scientific analysis directly for practical adaptation in a simple and user-friendly way, effectively improving their adaptation capacity. The tool is being developed under a collaborative FP7-funded project called SWITCH-ON (EU FP7 project No 603587) coordinated by SMHI (http://water-switch-on.eu/), and an online demo is available at www.gecosistema.com/switchon
Computer programing for geosciences: Teach your students how to make tools
NASA Astrophysics Data System (ADS)
Grapenthin, Ronni
2011-12-01
When I announced my intention to pursue a Ph.D. in geophysics, some people gave me confused looks, because I was working on a master's degree in computer science at the time. My friends, like many incoming geoscience graduate students, have trouble linking these two fields. From my perspective, it is pretty straightforward: Much of geoscience evolves around novel analyses of large data sets that require custom tools—computer programs—to minimize the drudgery of manual data handling; other disciplines share this characteristic. While most faculty adapted to the need for tool development quite naturally, as they grew up around computer terminal interfaces, incoming graduate students lack intuitive understanding of programing concepts such as generalization and automation. I believe the major cause is the intuitive graphical user interfaces of modern operating systems and applications, which isolate the user from all technical details. Generally, current curricula do not recognize this gap between user and machine. For students to operate effectively, they require specialized courses teaching them the skills they need to make tools that operate on particular data sets and solve their specific problems. Courses in computer science departments are aimed at a different audience and are of limited help.
Zhang, Bing; Schmoyer, Denise; Kirov, Stefan; Snoddy, Jay
2004-01-01
Background Microarray and other high-throughput technologies are producing large sets of interesting genes that are difficult to analyze directly. Bioinformatics tools are needed to interpret the functional information in the gene sets. Results We have created a web-based tool for data analysis and data visualization for sets of genes called GOTree Machine (GOTM). This tool was originally intended to analyze sets of co-regulated genes identified from microarray analysis but is adaptable for use with other gene sets from other high-throughput analyses. GOTree Machine generates a GOTree, a tree-like structure to navigate the Gene Ontology Directed Acyclic Graph for input gene sets. This system provides user-friendly data navigation and visualization. Statistical analysis helps users to identify the most important Gene Ontology categories for the input gene sets and suggests biological areas that warrant further study. GOTree Machine is available online at . Conclusion GOTree Machine has broad application in functional genomics, proteomics and other high-throughput methods that generate large sets of interesting genes; its primary purpose is to help users sort for interesting patterns in gene sets. PMID:14975175
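The category-level statistic behind this kind of enrichment analysis is typically a hypergeometric (or Fisher's exact) test, sketched below with hypothetical counts; whether this matches GOTM's exact implementation is an assumption.

```python
from scipy.stats import hypergeom

N = 20000  # genes in the reference set
K = 150    # reference genes annotated to one GO category
n = 300    # genes in the user's input set
k = 12     # input genes annotated to that category

# Probability of observing k or more annotated genes by chance (sampling without replacement).
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```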
Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy
2011-10-01
Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support, with many commercial, open source, and proprietary systems in use. New web-based tools are available which can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. Our objective was to demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five-site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials. A major goal was to develop a database that participants could access from computers in their home communities for direct data entry. Discussed is the selection process leading to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, a template-based web portal creation application, and a web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system has successfully been used by 125 participants in 5 communities, who have completed 536 sets of assessment questionnaires, as well as by 8 community therapists and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools as each is loosely affiliated, creating some inefficiency. This system is best suited to investigations with a single data source, e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.
Development of a Software Tool to Automate ADCO Flight Controller Console Planning Tasks
NASA Technical Reports Server (NTRS)
Anderson, Mark G.
2011-01-01
This independent study project covers the development of the International Space Station (ISS) Attitude Determination and Control Officer (ADCO) Planning Exchange (APEX) Tool. The primary goal of the tool is to streamline existing manual and time-intensive planning tools into a more automated, user-friendly application that interfaces with existing products and allows the ADCO to produce accurate products and timelines more effectively. This paper will survey the current ISS attitude planning process and its associated requirements, goals, documentation, and software tools, and how a software tool could simplify and automate many of the planning actions that occur at the ADCO console. The project will be covered from inception through the initial prototype delivery in November 2011 and will include development of design requirements and software as well as design verification and testing.
SNPConvert: SNP Array Standardization and Integration in Livestock Species.
Nicolazzi, Ezequiel Luis; Marras, Gabriele; Stella, Alessandra
2016-06-09
One of the main advantages of single nucleotide polymorphism (SNP) array technology is providing genotype calls for a specific number of SNP markers at a relatively low cost. Since its first application in animal genetics, the number of available SNP arrays for each species has been constantly increasing. However, conversely to what is observed in whole genome sequence data analysis, SNP array data does not have a common set of file formats or coding conventions for allele calling. Therefore, the standardization and integration of SNP array data from multiple sources have become an obstacle, especially for users with basic or no programming skills. Here, we describe the difficulties related to handling SNP array data, focusing on file formats, SNP allele coding, and mapping. We also present the SNPConvert suite, a multi-platform, open-source, and user-friendly set of tools to overcome these issues. This tool, which can be integrated with open-source and open-access tools already available, is a first step towards an integrated system to standardize and integrate any type of raw SNP array data. The tool is available at: https://github.com/nicolazzie/SNPConvert.git.
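As a rough illustration of the allele-coding problem the suite addresses, the sketch below converts Illumina-style A/B calls to nucleotides via a hypothetical per-SNP manifest; the manifest contents, codes, and layout are assumptions, not SNPConvert's actual formats.

```python
# Minimal sketch of allele-code standardization, assuming a hypothetical
# per-SNP manifest that maps Illumina-style A/B calls to nucleotides.
# The manifest and missing-call codes are illustrative, not SNPConvert's.
manifest = {
    "SNP1": {"A": "G", "B": "T"},
    "SNP2": {"A": "C", "B": "A"},
}

def ab_to_nucleotides(snp_id, call):
    """Convert an 'AB' genotype call such as 'AB' or 'BB' to nucleotides."""
    if call in ("--", "NC"):          # missing / no-call codes (assumed)
        return "00"
    alleles = manifest[snp_id]
    return "".join(alleles[a] for a in call)

print(ab_to_nucleotides("SNP1", "AB"))   # -> "GT"
print(ab_to_nucleotides("SNP2", "BB"))   # -> "AA"
```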
Weidner, Christopher; Fischer, Cornelius; Sauer, Sascha
2014-12-01
We introduce PHOXTRACK (PHOsphosite-X-TRacing Analysis of Causal Kinases), a user-friendly freely available software tool for analyzing large datasets of post-translational modifications of proteins, such as phosphorylation, which are commonly gained by mass spectrometry detection. In contrast to other currently applied data analysis approaches, PHOXTRACK uses full sets of quantitative proteomics data and applies non-parametric statistics to calculate whether defined kinase-specific sets of phosphosite sequences indicate statistically significant concordant differences between various biological conditions. PHOXTRACK is an efficient tool for extracting post-translational information of comprehensive proteomics datasets to decipher key regulatory proteins and to infer biologically relevant molecular pathways. PHOXTRACK will be maintained over the next years and is freely available as an online tool for non-commercial use at http://phoxtrack.molgen.mpg.de. Users will also find a tutorial at this Web site and can additionally give feedback at https://groups.google.com/d/forum/phoxtrack-discuss.
Lopez-Doriga, Adriana; Feliubadaló, Lídia; Menéndez, Mireia; Lopez-Doriga, Sergio; Morón-Duran, Francisco D; del Valle, Jesús; Tornero, Eva; Montes, Eva; Cuesta, Raquel; Campos, Olga; Gómez, Carolina; Pineda, Marta; González, Sara; Moreno, Victor; Capellá, Gabriel; Lázaro, Conxi
2014-03-01
Next-generation sequencing (NGS) has revolutionized genomic research and is set to have a major impact on genetic diagnostics thanks to the advent of benchtop sequencers and flexible kits for targeted libraries. Among the main hurdles in NGS are the difficulty of performing bioinformatic analysis of the huge volume of data generated and the high number of false positive calls that could be obtained, depending on the NGS technology and the analysis pipeline. Here, we present the development of a free and user-friendly Web data analysis tool that detects and filters sequence variants, provides coverage information, and allows the user to customize some basic parameters. The tool has been developed to provide accurate genetic analysis of targeted sequencing of common high-risk hereditary cancer genes using amplicon libraries run in a GS Junior System. The Web resource is linked to our own mutation database, to assist in the clinical classification of identified variants. We believe that this tool will greatly facilitate the use of the NGS approach in routine laboratories.
Computerized Design Synthesis (CDS), A database-driven multidisciplinary design tool
NASA Technical Reports Server (NTRS)
Anderson, D. M.; Bolukbasi, A. O.
1989-01-01
The Computerized Design Synthesis (CDS) system under development at McDonnell Douglas Helicopter Company (MDHC) is targeted to make revolutionary improvements in both response time and resource efficiency in the conceptual and preliminary design of rotorcraft systems. It makes the accumulated design database and supporting technology analysis results readily available to designers and analysts of technology, systems, and production, and makes powerful design synthesis software available in a user-friendly format.
SC3 - consensus clustering of single-cell RNA-Seq data
Kiselev, Vladimir Yu.; Kirschner, Kristina; Schaub, Michael T.; Andrews, Tallulah; Yiu, Andrew; Chandra, Tamir; Natarajan, Kedar N; Reik, Wolf; Barahona, Mauricio; Green, Anthony R; Hemberg, Martin
2017-01-01
Single-cell RNA-seq (scRNA-seq) enables a quantitative cell-type characterisation based on global transcriptome profiles. We present Single-Cell Consensus Clustering (SC3), a user-friendly tool for unsupervised clustering which achieves high accuracy and robustness by combining multiple clustering solutions through a consensus approach. We demonstrate that SC3 is capable of identifying subclones based on the transcriptomes from neoplastic cells collected from patients. PMID:28346451
2012-02-17
...tool should be combined with a user-friendly Windows-based software interface that utilizes the best practices for process planning developed by us and ... best practices developed through this project, resulting in the commercial availability of machines for the Navy and others. These machines will ... research 2011 Outstanding Paper Award, VRAP 2011, for the paper "Some Studies on Dislocation Density based Finite Element Modeling of Ultrasonic ..."
Hazardous Waste Generator Regulations: A User-Friendly Reference Document
User-friendly reference to assist EPA and state staff, industrial facilities generating and managing hazardous wastes as well as the general public, in locating and understanding RCRA hazardous waste generator regulations.
Inferring tie strength from online directed behavior.
Jones, Jason J; Settle, Jaime E; Bond, Robert M; Fariss, Christopher J; Marlow, Cameron; Fowler, James H
2013-01-01
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user's friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions).
gPKPDSim: a SimBiology®-based GUI application for PKPD modeling in drug development.
Hosseini, Iraj; Gajjala, Anita; Bumbaca Yadav, Daniela; Sukumaran, Siddharth; Ramanujan, Saroja; Paxson, Ricardo; Gadkar, Kapil
2018-04-01
Modeling and simulation (M&S) is increasingly used in drug development to characterize pharmacokinetic-pharmacodynamic (PKPD) relationships and support various efforts such as target feasibility assessment, molecule selection, human PK projection, and preclinical and clinical dose and schedule determination. While model development typically requires mathematical modeling expertise, model exploration and simulations could in many cases be performed by scientists in various disciplines to support the design, analysis and interpretation of experimental studies. To this end, we have developed a versatile graphical user interface (GUI) application to enable easy use of any model constructed in SimBiology® to execute various common PKPD analyses. The MATLAB®-based GUI application, called gPKPDSim, has a single-screen interface and provides functionalities including simulation, data fitting (parameter estimation), population simulation (exploring the impact of parameter variability on the outputs of interest), and non-compartmental PK analysis. Further, gPKPDSim is a user-friendly tool with capabilities including interactive visualization, exporting of results and generation of presentation-ready figures. gPKPDSim was designed primarily for use in preclinical and translational drug development, although broader applications exist. gPKPDSim is a MATLAB®-based open-source application and is publicly available to download from MATLAB® Central™. We illustrate the use and features of gPKPDSim using multiple PKPD models to demonstrate the wide applications of this tool in pharmaceutical sciences. Overall, gPKPDSim provides an integrated, multi-purpose, user-friendly GUI application to enable efficient use of PKPD models by scientists from various disciplines, regardless of their modeling expertise.
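gPKPDSim itself is a MATLAB/SimBiology application; as a language-agnostic illustration of the kind of calculation such a GUI wraps, the sketch below simulates a one-compartment oral-absorption model and derives basic non-compartmental metrics. All parameter values are invented.

```python
# Sketch of a simulation plus non-compartmental analysis step: simulate a
# one-compartment model with first-order absorption, then compute Cmax,
# Tmax, and a trapezoidal AUC. Parameters are illustrative, not gPKPDSim's.
import numpy as np

def one_compartment_oral(t, dose, ka, ke, V):
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 481)                      # hours
conc = one_compartment_oral(t, dose=100.0, ka=1.2, ke=0.15, V=30.0)

cmax = conc.max()
tmax = t[conc.argmax()]
auc = np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t))   # AUC(0-48 h), trapezoid
print(f"Cmax={cmax:.2f}, Tmax={tmax:.1f} h, AUC={auc:.1f}")
```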
Carpeggiani, Clara; Paterni, Marco; Caramella, Davide; Vano, Eliseo; Semelka, Richard C; Picano, Eugenio
2012-11-01
Awareness of radiological risk is low among doctors and patients. An educational/decision tool that considers each patient's cumulative lifetime radiation exposure would facilitate provider-patient communication. The purpose of this work was to develop user-friendly software for simple estimation and communication of radiological risk to patients and doctors as a part of the SUIT-Heart (Stop Useless Imaging Testing in Heart disease) Project of the Tuscany Region. We developed a novel software program (PC platform, Windows OS, fully downloadable at http://suit-heart.ifc.cnr.it) considering reference dose estimates from American Heart Association Radiological Imaging 2009 guidelines and UK Royal College of Radiology 2007 guidelines. Age- and gender-weighted cancer risks were derived from the Biological Effects of Ionising Radiation (BEIR) VII Committee, 2006. With simple input functions (demographics, age, gender), the user selects from a predetermined menu variables relating to natural (e.g., airplane flights and geo-tracked background exposure), professional (e.g., cath lab workers) and medical (e.g., CT, cardiac scintigraphy, coronary stenting) sources. The program provides a simple numeric display (cumulative effective dose in millisievert, mSv, and equivalent number of chest X-rays) and a graphic display (cumulative temporal trends of exposure, cancer cases out of 100 exposed persons). This simple software program allows straightforward estimation of cumulative dose (in multiples of chest X-rays) and risk (as extra lifetime cancer risk, in %). A pictorial display of radiation risk may be valuable for increasing radiological awareness in cardiologists.
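A back-of-the-envelope version of the cumulative-exposure report described above might look like the sketch below; the reference doses are approximate textbook values, not the exact tables used by the SUIT-Heart software.

```python
# Toy version of the cumulative-dose report: sum reference doses for a
# patient's exposure history and express the total as chest X-ray
# equivalents. Dose values (mSv) are approximate and only illustrative.
REFERENCE_DOSE_MSV = {
    "chest_xray": 0.02,
    "chest_ct": 7.0,
    "cardiac_scintigraphy": 9.0,
    "coronary_stenting": 15.0,
}

history = ["chest_xray", "chest_ct", "coronary_stenting"]
cumulative_msv = sum(REFERENCE_DOSE_MSV[exam] for exam in history)
chest_xray_equivalents = cumulative_msv / REFERENCE_DOSE_MSV["chest_xray"]

print(f"cumulative dose: {cumulative_msv:.1f} mSv "
      f"(~{chest_xray_equivalents:.0f} chest X-rays)")
```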
Maher, Molly; Kaziunas, Elizabeth; Ackerman, Mark; Derry, Holly; Forringer, Rachel; Miller, Kristen; O'Reilly, Dennis; An, Larry C; Tewari, Muneesh; Hanauer, David A; Choi, Sung Won
2016-02-01
Health information technology (IT) has opened exciting avenues for capturing, delivering and sharing data, and offers the potential to develop cost-effective, patient-focused applications. In recent years, there has been a proliferation of health IT applications such as outpatient portals. Rigorous evaluation is fundamental to ensure effectiveness and sustainability, as resistance to more widespread adoption of outpatient portals may be due to a lack of user-friendliness. Health IT applications that integrate with the existing electronic health record and present information in a condensed, user-friendly format could improve coordination of care and communication. Importantly, these applications should be developed systematically with appropriate methodological design and testing to ensure usefulness, adoption, and sustainability. Based on our prior work that identified numerous information needs and challenges of hematopoietic cell transplantation (HCT), we developed an experimental prototype of a health IT tool, the BMT Roadmap. Our goal was to develop a tool that could be used in the real-world, daily practice of HCT patients and caregivers (users) in the inpatient setting. Herein, we examined the views, needs, and wants of users in the design and development process of the BMT Roadmap through user-centered Design Groups. Three important themes emerged: 1) perception of core features as beneficial (views); 2) alerting the design team to potential issues with the user interface (needs); and 3) providing a deeper understanding of the user experience in terms of wider psychosocial requirements (wants). These findings resulted in changes that led to an improved, functional BMT Roadmap product, which will be tested as an intervention in the pediatric HCT population in the fall of 2015 (ClinicalTrials.gov NCT02409121).
Bridging the Gap - Networking Educators using Real-Time Seismic Data
NASA Astrophysics Data System (ADS)
Ortiz, A. M.; Renwald, M. D.; Baldwin, T. K.; Hall, M. K.
2004-12-01
After nearly a decade, the seismology community has made critical advances in identifying what is effective and what is needed for success in incorporating real-time seismic data in the classroom. Today's K-16 classroom teachers have many options and opportunities for incorporating short- and long-term inquiry activities for monitoring earthquakes and analyzing seismic data in their daily instruction. Through the SpiNet program, we are providing web-based tools that support educators working with real-time seismic data (http://www.scieds.com/spinet/). Our site includes a Recent Seismicity section, which allows users to share seismic data in real-time, and provides near real-time information about global seismicity. Our Activities section provides data and lessons to assist educators who wish to integrate seismology into their classroom. The Research section, currently under development, will allow educators to share general information about how they teach seismology in their classroom through a discussion board and by posting lesson plans. In addition, we are developing a user-friendly tool for students to post results of their research projects. Designing a website which targets a range of users requires a working knowledge of both user needs and website programming and design. User needs include providing a logical navigational structure and accounting for differences in browser functionality, internet access, and users' abilities. Using website development tools, such as PHP, MySQL, RDF feeds, and specialized geoscience applications, we are automating site maintenance; incorporating databases for information storage and retrieval; and providing accessibility for users with a range of skills and physical limitations. By incorporating these features, we have built a dynamic interface for a broad range of users interested in educational seismology.
NASA Astrophysics Data System (ADS)
Peng, G.; Austin, M.
2017-12-01
Identification and prioritization of targeted user community needs are not always considered until after data have been created and archived. Gaps in data curation and documentation in the data production and delivery phases limit data's broad utility, specifically for decision makers. Expert understanding and knowledge of a particular dataset are often required as a part of the data and metadata curation process to establish the credibility of the data and support informed decision-making. To enhance curation practices, content from NOAA's Observing System Integrated Assessment (NOSIA) Value Tree and NOAA's Data Catalog/Digital Object Identifier (DOI) projects (collection-level metadata) has been integrated with Data/Stewardship Maturity Matrices (data and stewardship quality information) focused on assessment of user community needs. This results in user-focused, evidence-based decision-making tools created by NOAA's National Environmental Satellite, Data, and Information Service (NESDIS) through identification and assessment of data content gaps related to scientific knowledge and application to key areas of societal benefit. Enabling user-need feedback from the beginning of data creation through archiving allows users to determine the quality and value of data that are fit for purpose. Data gap assessment and prioritization are presented in a user-friendly way using the data stewardship maturity matrices as a measure of data management quality. These decision-maker tools encourage data producers and data providers/stewards to consider users' needs prior to data creation and dissemination, resulting in user-driven data requirements and an increased return on investment. A use case focused on the need for NOAA observations linked to societal benefit will be used to demonstrate the value of these tools.
Friend suggestion in social network based on user log
NASA Astrophysics Data System (ADS)
Kaviya, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
Simple friend recommendation algorithms based on similarity, popularity, and social aspects are the basic building blocks of high-performance social friend recommendation. In the proposed system, we use an algorithm for network correlation-based social friend recommendation (NC-based SFR) that incorporates user activities such as where one lives and works. This friend recommendation method is based on network correlation and considers the effect of different social roles. To model the correlation between different networks, we develop a method that aligns these networks through important feature selection. We also preserve the network structure, which significantly improves the accuracy of friend recommendation.
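For contrast with the NC-based method, the sketch below implements the simplest of the baseline recommenders mentioned at the start of the abstract, a common-neighbours score over a toy friendship graph; it is not the NC-based SFR algorithm itself.

```python
# Common-neighbours baseline for friend suggestion: rank non-friends by the
# number of friends shared with the target user. Toy graph for illustration.
from collections import Counter

friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave": {"bob", "carol"},
    "erin": {"carol"},
}

def suggest(user, k=3):
    scores = Counter()
    for friend in friends[user]:
        for candidate in friends[friend]:
            if candidate != user and candidate not in friends[user]:
                scores[candidate] += 1          # one point per shared friend
    return scores.most_common(k)

print(suggest("alice"))   # -> [('dave', 2), ('erin', 1)]
```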
Polytobacco, marijuana, and alcohol use patterns in college students: A latent class analysis.
Haardörfer, Regine; Berg, Carla J; Lewis, Michael; Payne, Jackelyn; Pillai, Drishti; McDonald, Bennett; Windle, Michael
2016-08-01
Limited research has examined polysubstance use profiles among young adults focusing on the various tobacco products currently available. We examined use patterns of various tobacco products, marijuana, and alcohol using data from the baseline survey of a multiwave longitudinal study of 3418 students aged 18-25 recruited from seven U.S. college campuses. We assessed sociodemographics, individual-level factors (depression; perceptions of harm and addictiveness), and sociocontextual factors (parental/friend use). We conducted a latent class analysis and multivariable logistic regression to examine correlates of class membership (Abstainers were the referent group). Results indicated five classes: Abstainers (26.1%, per past-4-month use), Alcohol only users (38.9%), Heavy polytobacco users (7.3%), Light polytobacco users (17.3%), and little cigar and cigarillo (LCC)/hookah/marijuana co-users (10.4%). The most stable class was LCC/hookah/marijuana co-users (77.3% classified as such in both past 30-day and 4-month timeframes), followed by Heavy polytobacco users (53.2% classified consistently). Relative to Abstainers, Heavy polytobacco users were less likely to be Black, less likely to have no friends using alcohol, and perceived the harm of tobacco and marijuana use to be lower. Light polytobacco users were older, more likely to have parents using tobacco, and less likely to have friends using tobacco. LCC/hookah/marijuana co-users were older and more likely to have parents using tobacco. Alcohol only users perceived tobacco and marijuana use to be less socially acceptable, were more likely to have parents using alcohol and friends using marijuana, but less likely to have friends using tobacco. These findings may inform substance use prevention and recovery programs by better characterizing polysubstance use patterns. PMID:27074202
An interactive program for computer-aided map design, display, and query: EMAPKGS2
Pouch, G.W.
1997-01-01
EMAPKGS2 is a user-friendly, PC-based electronic mapping tool for use in hydrogeologic exploration and appraisal. EMAPKGS2 allows the analyst to construct maps interactively from data stored in a relational database, perform point-oriented spatial queries such as locating all wells within a specified radius, perform geographic overlays, and export the data to other programs for further analysis. EMAPKGS2 runs under Microsoft Windows 3.1 and compatible operating systems. EMAPKGS2 is a public domain program available from the Kansas Geological Survey. EMAPKGS2 is the centerpiece of WHEAT, the Windows-based Hydrogeologic Exploration and Appraisal Toolkit, a suite of user-friendly Microsoft Windows programs for natural resource exploration and management. The principal goals in development of WHEAT have been ease of use, hardware independence, low cost, and end-user extensibility. WHEAT's native data format is a Microsoft Access database. WHEAT stores a feature's geographic coordinates as attributes so they can be accessed easily by the user. The WHEAT programs are designed to be used in conjunction with other Microsoft Windows software to allow the natural resource scientist to perform work easily and effectively. WHEAT and EMAPKGS have been used at several of Kansas' Groundwater Management Districts and the Kansas Geological Survey on groundwater management operations, groundwater modeling projects, and geologic exploration projects.
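The point-oriented radius query mentioned above can be sketched as follows, assuming well coordinates are stored as latitude/longitude attributes; field names and data are made up, and EMAPKGS2's own implementation is not shown.

```python
# Sketch of a "wells within radius" query of the kind EMAPKGS2 performs,
# using a haversine great-circle distance. Records are illustrative.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

wells = [
    {"id": "W-101", "lat": 38.95, "lon": -98.55},
    {"id": "W-102", "lat": 38.97, "lon": -98.40},
    {"id": "W-103", "lat": 39.40, "lon": -98.90},
]

def wells_within(lat, lon, radius_km):
    return [w["id"] for w in wells
            if haversine_km(lat, lon, w["lat"], w["lon"]) <= radius_km]

print(wells_within(38.96, -98.50, radius_km=15))   # -> ['W-101', 'W-102']
```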
Transmission Line Jobs and Economic Development Impact (JEDI) Model User Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, M.; Keyser, D.
The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are freely available, user-friendly tools that estimate the potential economic impacts of constructing and operating power generation projects for a range of conventional and renewable energy technologies. The Transmission Line JEDI model can be used to field questions about the economic impacts of transmission lines in a given state, region, or local community. This Transmission Line JEDI User Reference Guide was developed to provide basic instruction on operating the model and understanding the results. This guide also provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data contained in the model.
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square (χ²) criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
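The same fitting strategy (an exponential model, Levenberg-Marquardt least squares, rough starting values standing in for the exponential-stripping step) can be sketched with SciPy on synthetic data; this illustrates the approach, not PharmK's code.

```python
# Least-squares fit of a two-exponential PK model using a Levenberg-Marquardt
# optimizer (SciPy's curve_fit default for unconstrained problems).
# Data are synthetic; parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

rng = np.random.default_rng(0)
t = np.linspace(0.25, 24, 30)
true_params = (8.0, 1.1, 2.0, 0.12)
conc = biexponential(t, *true_params) * (1 + 0.05 * rng.standard_normal(t.size))

# Rough starting values stand in for PharmK's exponential-stripping step.
p0 = (5.0, 1.0, 1.0, 0.1)
params, cov = curve_fit(biexponential, t, conc, p0=p0)
print("estimates:", np.round(params, 3))
```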
Reuter, Katja; Ukpolo, Francis; Ward, Edward; Wilson, Melissa L; Angyan, Praveen
2016-06-29
Scarce information about clinical research, in particular clinical trials, is among the top reasons why potential participants do not take part in clinical studies. Without volunteers, on the other hand, clinical research and the development of novel approaches to preventing, diagnosing, and treating disease are impossible. Promising digital options such as social media have the potential to work alongside traditional methods to boost the promotion of clinical research. However, investigators and research institutions are challenged to leverage these innovations while saving time and resources. To develop and test the efficiency of a Web-based tool that automates the generation and distribution of user-friendly social media messages about clinical trials. Trial Promoter is developed in Ruby on Rails, HTML, cascading style sheet (CSS), and JavaScript. In order to test the tool and the correctness of the generated messages, clinical trials (n=46) were randomized into social media messages and distributed via the microblogging social media platform Twitter and the social network Facebook. The percent correct was calculated to determine the probability with which Trial Promoter generates accurate messages. During a 10-week testing phase, Trial Promoter automatically generated and published 525 user-friendly social media messages on Twitter and Facebook. On average, Trial Promoter correctly used the message templates and substituted the message parameters (text, URLs, and disease hashtags) 97.7% of the time (1563/1600). Trial Promoter may serve as a promising tool to render clinical trial promotion more efficient while requiring limited resources. It supports the distribution of any research or other types of content. The Trial Promoter code and installation instructions are freely available online.
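The template-substitution and percent-correct bookkeeping described above can be illustrated as follows; the template text, fields, and trial records are invented, not Trial Promoter's actual content.

```python
# Sketch of template-based message generation and the accuracy check the
# abstract describes. Template, fields, and records are illustrative only.
TEMPLATE = "Now recruiting: {title}. Learn more at {url} {hashtag}"

trials = [
    {"title": "A Phase II study of drug X", "url": "https://example.org/t/1",
     "hashtag": "#lungcancer"},
    {"title": "Exercise intervention for survivors", "url": "https://example.org/t/2",
     "hashtag": "#survivorship"},
]

def build_messages(trials):
    return [TEMPLATE.format(**t) for t in trials]

def percent_correct(messages, trials):
    """Share of messages whose parameters all appear in the rendered text."""
    ok = sum(all(v in m for v in t.values()) for m, t in zip(messages, trials))
    return 100.0 * ok / len(messages)

msgs = build_messages(trials)
print(msgs[0])
print(f"{percent_correct(msgs, trials):.1f}% correct")
```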
Evaluation of a novel Serious Game based assessment tool for patients with Alzheimer's disease.
Vallejo, Vanessa; Wyss, Patric; Rampa, Luca; Mitache, Andrei V; Müri, René M; Mosimann, Urs P; Nef, Tobias
2017-01-01
Despite growing interest in developing ecological assessments of difficulties in patients with Alzheimer's disease, new methods assessing the cognitive difficulties related to functional activities are missing. To complement current evaluation, the use of Serious Games is a promising approach, as it offers the possibility to recreate a virtual environment with daily living activities and a precise and complete cognitive evaluation. The aim of the present study was to evaluate the usability and the screening potential of a new ecological tool for the assessment of cognitive functions in patients with Alzheimer's disease. Eighteen patients with Alzheimer's disease and twenty healthy controls participated in the study. They were asked to complete six daily living virtual tasks assessing several cognitive functions: three navigation tasks, one shopping task, one cooking task and one table preparation task, following a one-day scenario. Usability of the game was evaluated through a questionnaire and through the analysis of the computer interactions for the two groups. Furthermore, the performances in terms of time to achieve each task and percentage of completion of the several tasks were recorded. Results indicate that both groups subjectively found the game user-friendly and were objectively able to play the game without computer interaction difficulties. Comparison of the performances between the two groups indicated a significant difference in the percentage of achievement of the several tasks and in the time needed to achieve them. This study suggests that this new Serious Game based assessment tool is a user-friendly and ecological method to evaluate the cognitive abilities related to the difficulties patients can encounter in daily living activities, and that it can be used as a screening tool, as it distinguished the performance of patients with Alzheimer's disease from that of healthy controls.
Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu
2012-06-08
Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issue of microarray data analysis due to the well known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential microarray expression data analysis without the need to learn R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting edge Bioconductor packages for researchers with no knowledge in R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.
The GenABEL Project for statistical genomics
Karssen, Lennart C.; van Duijn, Cornelia M.; Aulchenko, Yurii S.
2016-01-01
Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from formulation of methodological ideas to application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices including use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination. PMID:27347381
NASA Astrophysics Data System (ADS)
Bohon, W.; Frus, R.; Arrowsmith, R.; Fouch, M. J.; Garnero, E. J.; Semken, S. C.; Taylor, W. L.
2011-12-01
Social media has emerged as a popular and effective form of communication among all age groups, with nearly half of Internet users belonging to a social network or using another form of social media on a regular basis. This phenomenon creates an excellent opportunity for earth science organizations to use the wide reach, functionality and informal environment of social media platforms to disseminate important scientific information, create brand recognition, and establish trust with users. Further, social media systems can be utilized for missions of education, outreach, and communicating important timely information (e.g., news agencies are common users). They are eminently scalable (thus serving from a few to millions of users with no cost and no performance problem), searchable (people are turning to them more frequently as conduits for information), and user-friendly (thanks to the massive resources poured into the underlying technology and design, these systems are easy to use and have been widely adopted). They can be used, therefore, to engage the public interactively with the EarthScope facilities, experiments, and discoveries, and continue the cycle of discussions, experiments, analysis and conclusions that typify scientific advancement. The EarthScope National Office (ESNO) is launching an effort to utilize social media to broaden its impact as a conduit between scientists, facilities, educators, and the public. The ESNO will use the opportunities that social media affords to offer high quality science content in a variety of formats that appeal to social media users of various age groups, including blogs (popular with users 18-29), Facebook and Twitter updates (popular with users ages 18-50), email updates (popular with older adults), and video clips (popular with all age groups). We will monitor the number of "fans" and "friends" on social media and networking pages in order to gauge the increase in the percentage of the user population visiting the site. We will also use existing tools available on social media sites to track the relationships between users who visit or "friend" the site to determine how knowledge of the site is transferred amongst various social, educational or geographic groups. Finally, we will use this information to iteratively improve the variety of content and media on the site to increase our user pool, improve EarthScope recognition, and provide appropriate and user-specific Earth science information, especially for time sensitive events of wide interest such as natural disasters.
Integrated tools for control-system analysis
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.; Proffitt, Melissa S.; Clark, David R.
1989-01-01
The basic functions embedded within a user-friendly software package (MATRIXx) are used to provide a high-level systems approach to the analysis of linear control systems. Various control system analysis configurations are assembled automatically to minimize the amount of work by the user. Interactive decision making is incorporated via menu options and at selected points, such as in the plotting section, by inputting data. There are five evaluations: the singular value robustness test, singular value loop transfer frequency response, Bode frequency response, steady-state covariance analysis, and closed-loop eigenvalues. Another section describes time response simulations. A time response for random white noise disturbance is available. The configurations and key equations used for each type of analysis, the restrictions that apply, the type of data required, and an example problem are described. One approach for integrating the design and analysis tools is also presented.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel.
Grapov, Dmitry; Newman, John W
2012-09-01
Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010).
A Python-based interface to examine motions in time series of solar images
NASA Astrophysics Data System (ADS)
Campos-Rozo, J. I.; Vargas Domínguez, S.
2017-10-01
Python is considered to be a mature programming language, besides being widely accepted as an engaging option for scientific analysis in multiple areas, as will be presented in this work for the particular case of solar physics research. SunPy is an open-source library based on Python that has been recently developed to furnish software tools for solar data analysis and visualization. In this work we present a graphical user interface (GUI) based on Python and Qt to effectively compute proper motions for the analysis of time series of solar data. This user-friendly computing interface, which is intended to be incorporated into the SunPy library, uses a local correlation tracking technique and some extra tools that allow the selection of different parameters to calculate, visualize and analyze vector velocity fields of solar data, i.e. time series of solar filtergrams and magnetograms.
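The core of a local correlation tracking step, finding the shift that maximizes the cross-correlation between corresponding patches of two frames, can be sketched in plain NumPy; this illustrates the technique only, not the GUI's or SunPy's implementation.

```python
# Local correlation tracking in miniature: for a small window in frame A,
# find the integer shift that best matches frame B by maximizing the
# zero-mean cross-correlation. Synthetic, artificially shifted image.
import numpy as np

def best_shift(frame_a, frame_b, y, x, half=8, search=4):
    ref = frame_a[y - half:y + half, x - half:x + half]
    ref = ref - ref.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
            cand = cand - cand.mean()
            score = np.sum(ref * cand)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(2, -1), axis=(0, 1))       # frame B = frame A moved by (2, -1)
print(best_shift(a, b, y=32, x=32))              # expected (2, -1)
```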
The LSST metrics analysis framework (MAF)
NASA Astrophysics Data System (ADS)
Jones, R. L.; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Željko; Krughoff, K. S.; Petry, Catherine; Ridgway, Stephen T.
2014-07-01
We describe the Metrics Analysis Framework (MAF), an open-source python framework developed to provide a user-friendly, customizable, easily-extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance; however, MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (subdividing the overall data set into smaller data slices as relevant for each Metric), and Database classes (to access the dataset and read data into memory). We describe how these building blocks work together, and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
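The Metric/Slicer division of labor can be mimicked with minimal, hypothetical classes (the names and methods below are not MAF's real API) to show how a custom metric is evaluated over data slices.

```python
# Hypothetical, stripped-down illustration of the Metric/Slicer split; class
# and method names are made up for this sketch and are not MAF's actual API.
import numpy as np

class MedianSeeingMetric:
    """A 'Metric': reduces one data slice to a single number."""
    def run(self, data_slice):
        return np.median(data_slice["seeing"])

class FieldSlicer:
    """A 'Slicer': subdivides the full dataset into per-field slices."""
    def slices(self, data):
        for field_id in np.unique(data["field"]):
            mask = data["field"] == field_id
            yield field_id, {k: v[mask] for k, v in data.items()}

# Toy stand-in for the "Database" layer: simulated observations in memory.
data = {
    "field": np.array([1, 1, 2, 2, 2]),
    "seeing": np.array([0.8, 0.9, 1.2, 1.0, 1.1]),
}

metric, slicer = MedianSeeingMetric(), FieldSlicer()
results = {fid: metric.run(s) for fid, s in slicer.slices(data)}
print(results)   # field 1 -> 0.85, field 2 -> 1.1
```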
CellTracker (not only) for dummies.
Piccinini, Filippo; Kiss, Alexa; Horvath, Peter
2016-03-15
Time-lapse experiments play a key role in studying the dynamic behavior of cells. Single-cell tracking is one of the fundamental tools for such analyses. The vast majority of the recently introduced cell tracking methods are limited to fluorescently labeled cells. An equally important limitation is that most software cannot be effectively used by biologists without reasonable expertise in image processing. Here we present CellTracker, a user-friendly open-source software tool for tracking cells imaged with various imaging modalities, including fluorescent, phase contrast and differential interference contrast (DIC) techniques. CellTracker is written in MATLAB (The MathWorks, Inc., USA). It works with Windows, Macintosh and UNIX-based systems. Source code and graphical user interface (GUI) are freely available at http://celltracker.website/. Contact: horvath.peter@brc.mta.hu. Supplementary data are available at Bioinformatics online.
Evidence of absence (v2.0) software user guide
Dalthorp, Daniel; Huso, Manuela; Dail, David
2017-07-06
Evidence of Absence software (EoA) is a user-friendly software application for estimating bird and bat fatalities at wind farms and for designing search protocols. The software is particularly useful in addressing whether the number of fatalities is below a given threshold and what search parameters are needed to give assurance that thresholds were not exceeded. The software also includes tools (1) for estimating carcass persistence distributions and searcher efficiency parameters from field trials, (2) for projecting future mortality based on past monitoring data, and (3) for exploring the potential consequences of various choices in the design of long-term incidental take permits for protected species. The software was designed specifically for cases where tolerance for mortality is low and carcass counts are small or even 0, but the tools also may be used for mortality estimates when carcass counts are large.
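The underlying question, how plausible different true mortality totals are when few or no carcasses are found, can be illustrated with a simple binomial detection model and a uniform prior; this is a simplification of, not identical to, the EoA estimator, and all numbers are toy values.

```python
# "Evidence of absence" in miniature: with X carcasses found and an overall
# detection probability g, how plausible are different true totals M?
# Binomial detection model, uniform prior over M; toy values only.
import numpy as np
from scipy.stats import binom

g = 0.4            # overall probability that a fatality is detected (assumed)
x = 0              # carcasses actually found
m_values = np.arange(0, 31)

likelihood = binom.pmf(x, m_values, g)       # P(X = x | M = m, g)
posterior = likelihood / likelihood.sum()    # uniform prior over m_values

threshold = 5
p_exceed = posterior[m_values > threshold].sum()
print(f"P(M > {threshold} | X = {x}) ~= {p_exceed:.3f}")
```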
A novel graphical user interface for ultrasound-guided shoulder arthroscopic surgery
NASA Astrophysics Data System (ADS)
Tyryshkin, K.; Mousavi, P.; Beek, M.; Pichora, D.; Abolmaesumi, P.
2007-03-01
This paper presents a novel graphical user interface developed for a navigation system for ultrasound-guided computer-assisted shoulder arthroscopic surgery. The envisioned purpose of the interface is to assist the surgeon in determining the position and orientation of the arthroscopic camera and other surgical tools within the anatomy of the patient. The user interface features real time position tracking of the arthroscopic instruments with an optical tracking system, and visualization of their graphical representations relative to a three-dimensional shoulder surface model of the patient, created from computed tomography images. In addition, the developed graphical interface facilitates fast and user-friendly intra-operative calibration of the arthroscope and the arthroscopic burr, capture and segmentation of ultrasound images, and intra-operative registration. A pilot study simulating the computer-aided shoulder arthroscopic procedure on a shoulder phantom demonstrated the speed, efficiency and ease-of-use of the system.
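A standard building block for an intra-operative registration step such as the one mentioned above is least-squares rigid alignment of corresponding point sets (the Kabsch/SVD solution), sketched below on synthetic points; this is a generic illustration, not the navigation system's actual pipeline.

```python
# Least-squares rigid alignment (Kabsch/SVD) of corresponding 3-D point sets,
# the kind of building block a registration step can use. Synthetic points.
import numpy as np

def rigid_register(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(2)
src = rng.random((6, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_register(src, dst)
print("max alignment error:", np.abs(src @ R.T + t - dst).max())
```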
Grim, Katarina; Rosenberg, David; Svedberg, Petra; Schön, Ulla-Karin
2017-09-01
Shared decision making (SDM) related to treatment and rehabilitation is considered a central component in recovery-oriented practice. Although decision aids are regarded as an essential component for successfully implementing SDM, these aids are often lacking within psychiatric services. The aim of this study was to use a participatory design to facilitate the development of a user-generated, web-based decision aid for individuals receiving psychiatric services. The results of this effort as well as the lessons learned during the development and usability processes are reported. The participatory design included 4 iterative cycles of development. Various qualitative methods for data collection were used, with potential end users participating as informants in focus group and individual interviews and as usability and pilot testers. Interviewing and testing identified usability problems that then led to refinements, making the subsequent prototypes increasingly user-friendly and relevant. In each phase of the process, feedback from potential end users provided guidance in developing the formation of the web-based decision aid, which strengthens the position of users by integrating access to information regarding alternative supports, interactivity between staff and users, and user preferences as a continual focus in the tool. This web-based decision aid has the potential to strengthen service users' experience of self-efficacy and control as well as provide staff access to user knowledge and preferences. Studies employing participatory models focusing on usability have the potential to significantly contribute to the development and implementation of tools that reflect user perspectives.
GREAT: a web portal for Genome Regulatory Architecture Tools
Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François
2016-01-01
GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout, defined as the respective positioning of co-functional genes, and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. PMID:27151196
MethLAB: a graphical user interface package for the analysis of array-based DNA methylation data.
Kilaru, Varun; Barfield, Richard T; Schroeder, James W; Smith, Alicia K; Conneely, Karen N
2012-03-01
Recent evidence suggests that DNA methylation changes may underlie numerous complex traits and diseases. The advent of commercial, array-based methods to interrogate DNA methylation has led to a profusion of epigenetic studies in the literature. Array-based methods, such as the popular Illumina GoldenGate and Infinium platforms, estimate the proportion of DNA methylated at single-base resolution for thousands of CpG sites across the genome. These arrays generate enormous amounts of data, but few software resources exist for efficient and flexible analysis of these data. We developed a software package called MethLAB (http://genetics.emory.edu/conneely/MethLAB) using R, an open source statistical language that can be edited to suit the needs of the user. MethLAB features a graphical user interface (GUI) with a menu-driven format designed to efficiently read in and manipulate array-based methylation data in a user-friendly manner. MethLAB tests for association between methylation and relevant phenotypes by fitting a separate linear model for each CpG site. These models can incorporate both continuous and categorical phenotypes and covariates, as well as fixed or random batch or chip effects. MethLAB accounts for multiple testing by controlling the false discovery rate (FDR) at a user-specified level. Standard output includes a spreadsheet-ready text file and an array of publication-quality figures. Considering the growing interest in and availability of DNA methylation data, there is a great need for user-friendly open source analytical tools. With MethLAB, we present a timely resource that will allow users with no programming experience to implement flexible and powerful analyses of DNA methylation data.
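The per-site modeling pattern described above (a separate linear model per CpG site, followed by FDR control) looks roughly like the sketch below; MethLAB itself is an R package, so this Python version on synthetic data is only an illustration of the approach.

```python
# Per-CpG modelling pattern: regress methylation at each site on a phenotype,
# then control the FDR across sites with Benjamini-Hochberg. Synthetic data.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)
n_samples, n_sites = 60, 500
phenotype = rng.normal(size=n_samples)
methylation = rng.normal(size=(n_samples, n_sites))
methylation[:, 0] += 0.8 * phenotype            # plant one truly associated site

pvals = np.array([linregress(phenotype, methylation[:, j]).pvalue
                  for j in range(n_sites)])

def benjamini_hochberg(p):
    order = np.argsort(p)
    ranked = p[order] * len(p) / np.arange(1, len(p) + 1)
    q = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out

qvals = benjamini_hochberg(pvals)
print("sites with q < 0.05:", np.where(qvals < 0.05)[0])
```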
NASA Astrophysics Data System (ADS)
Sushko, Iurii; Novotarskyi, Sergii; Körner, Robert; Pandey, Anil Kumar; Rupp, Matthias; Teetz, Wolfram; Brandmaier, Stefan; Abdelaziz, Ahmed; Prokopenko, Volodymyr V.; Tanchuk, Vsevolod Y.; Todeschini, Roberto; Varnek, Alexandre; Marcou, Gilles; Ertl, Peter; Potemkin, Vladimir; Grishina, Maria; Gasteiger, Johann; Schwab, Christof; Baskin, Igor I.; Palyulin, Vladimir A.; Radchenko, Eugene V.; Welsh, William J.; Kholodovych, Vladyslav; Chekmarev, Dmitriy; Cherkasov, Artem; Aires-de-Sousa, Joao; Zhang, Qing-You; Bender, Andreas; Nigsch, Florian; Patiny, Luc; Williams, Antony; Tkachenko, Valery; Tetko, Igor V.
2011-06-01
The Online Chemical Modeling Environment (OCHEM) is a web-based platform that aims to automate and simplify the typical steps required for QSAR modeling. The platform consists of two major subsystems: the database of experimental measurements and the modeling framework. A user-contributed database contains a set of tools for easy input, search and modification of thousands of records. The OCHEM database is based on the wiki principle and focuses primarily on the quality and verifiability of the data. The database is tightly integrated with the modeling framework, which supports all the steps required to create a predictive model: data search, calculation and selection of a vast variety of molecular descriptors, application of machine learning methods, validation, analysis of the model and assessment of the applicability domain. As compared to other similar systems, OCHEM is not intended to re-implement the existing tools or models but rather to invite the original authors to contribute their results, make them publicly available, share them with other users and to become members of the growing research community. Our intention is to make OCHEM a widely used platform to perform the QSPR/QSAR studies online and share it with other users on the Web. The ultimate goal of OCHEM is collecting all possible chemoinformatics tools within one simple, reliable and user-friendly resource. OCHEM is free for web users and is available online at http://www.ochem.eu.
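Stripped of the database, descriptor calculators, and web interface, the modeling step such a platform automates reduces to fitting and validating a regressor on a descriptor table, as in the sketch below; the descriptor values are synthetic stand-ins rather than real molecular descriptors.

```python
# Bare-bones QSAR workflow: a descriptor table, a machine-learning regressor,
# and validation on held-out compounds. Descriptors and the target property
# are synthetic stand-ins; real use would compute descriptors from structures.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))                      # 8 descriptors, 200 compounds
y = 1.5 * X[:, 0] - 0.7 * X[:, 3] + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out compounds:", round(r2_score(y_test, model.predict(X_test)), 3))
```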
TiConverter: A training image converting tool for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael
2016-11-01
TiConverter is a tool developed to ease the application of multiple-point geostatistics whether by the open source Stanford Geostatistical Modeling Software (SGeMS) or other available commercial software. TiConverter has a user-friendly interface and it allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce the TiConverter, and to demonstrate its application and advantages with several examples from the literature.
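The conversion idea, mapping each unique RGB colour of a 2D training image to an integer code and writing it out as a grid file, can be sketched as follows; the header layout follows a generic GSLIB convention and is not TiConverter's exact output.

```python
# Sketch of the conversion step: assign an integer facies code to each unique
# RGB colour in a 2D training image and write a simple GSLIB-style grid file.
# Uses Pillow + NumPy; header layout is a generic GSLIB convention.
import numpy as np
from PIL import Image

def image_to_gslib(png_path, out_path, var_name="facies"):
    rgb = np.asarray(Image.open(png_path).convert("RGB"))
    ny, nx, _ = rgb.shape
    colours, codes = np.unique(rgb.reshape(-1, 3), axis=0, return_inverse=True)
    with open(out_path, "w") as f:
        f.write(f"{nx} {ny} 1\n1\n{var_name}\n")      # grid size, one variable
        f.write("\n".join(str(int(c)) for c in codes))
    return {tuple(c): i for i, c in enumerate(colours)}

# Example usage (assumes a 'ti.png' training image exists alongside the script):
# mapping = image_to_gslib("ti.png", "ti.gslib")
# print(mapping)
```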
Developing Healthcare Data Analytics APPs with Open Data Science Tools.
Hao, Bibo; Sun, Wen; Yu, Yiqin; Xie, Guotong
2017-01-01
Recent advances in big data analytics provide more flexible, efficient, and open tools for researchers to gain insight from healthcare data. However, many of these tools require researchers to develop programs in languages such as Python or R, a skill set that many researchers in the healthcare data analytics area have not acquired. To make data science more approachable, we explored existing tools and developed a practice that can help data scientists convert existing analytics pipelines to user-friendly analytics APPs with rich interactions and features of real-time analysis. With this practice, data scientists can develop customized analytics pipelines as APPs in Jupyter Notebook and disseminate them to other researchers easily, and researchers can benefit from the shared notebook to perform analysis tasks or reproduce research results much more easily.
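One common way to move from a plain notebook pipeline to an interactive APP of the kind described above is to wrap an analysis step in ipywidgets controls; the sketch below uses hypothetical data and is not necessarily the authors' exact stack, but it shows how a slider can re-run an analysis in real time inside Jupyter.

# One common way (not necessarily the authors' exact stack) to turn a notebook
# analysis step into an interactive mini-app: wrap it with ipywidgets.interact
# so a slider re-runs the analysis in real time inside Jupyter.
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact

rng = np.random.default_rng(1)
heart_rate = rng.normal(75, 12, 1000)        # hypothetical patient measurements

def flag_outliers(threshold=2.0):
    """Plot the distribution and report values beyond `threshold` std devs."""
    z = (heart_rate - heart_rate.mean()) / heart_rate.std()
    outliers = np.abs(z) > threshold
    plt.hist(heart_rate, bins=40)
    plt.title(f"{outliers.sum()} readings flagged at |z| > {threshold:.1f}")
    plt.show()

interact(flag_outliers, threshold=(1.0, 4.0, 0.1))   # slider drives re-analysis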
ERIC Educational Resources Information Center
Metz, Ray E.; Junion-Metz, Gail
This book provides basic information about the World Wide Web and serves as a guide to the tools and techniques needed to browse the Web, integrate it into library services, or build an attractive, user-friendly home page for the library. Chapter 1 provides an overview of Web basics and chapter 2 discusses some of the big issues related to…
Wong, Kim; Navarro, José Fernández; Bergenstråhle, Ludvig; Ståhl, Patrik L; Lundeberg, Joakim
2018-06-01
Spatial Transcriptomics (ST) is a method which combines high-resolution tissue imaging with high-throughput transcriptome sequencing data. This data must be aligned with the images for correct visualization, a process that involves several manual steps. Here we present ST Spot Detector, a web tool that automates and facilitates this alignment through a user-friendly interface. jose.fernandez.navarro@scilifelab.se. Supplementary data are available at Bioinformatics online.
Dendroscope: An interactive viewer for large phylogenetic trees
Huson, Daniel H; Richter, Daniel C; Rausch, Christian; Dezulian, Tobias; Franz, Markus; Rupp, Regula
2007-01-01
Background Research in evolution requires software for visualizing and editing phylogenetic trees, for increasingly large datasets, such as arise in expression analysis or metagenomics, for example. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as a part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. PMID:18034891
NASA Astrophysics Data System (ADS)
Krajewski, W. F.; Della Libera Zanchetta, A.; Mantilla, R.; Demir, I.
2017-12-01
This work explores the use of hydroinformatics tools to provide a user-friendly and accessible interface for executing and assessing the output of real-time flood forecasts using distributed hydrological models. The main result is the implementation of a web system that uses an Iowa Flood Information System (IFIS)-based environment for graphical displays of rainfall-runoff simulation results for both real-time and past storm events. It communicates with the ASYNCH ODE solver to perform large-scale distributed hydrological modeling based on segmentation of the terrain into hillslope-link hydrologic units. The cyber-platform also allows hindcast of model performance by testing multiple model configurations and assumptions of vertical flows in the soils. The scope of the currently implemented system is the entire set of contributing watersheds for the territory of the state of Iowa. The interface provides resources for visualization of animated maps for different water-related modeled states of the environment, including flood-wave propagation with classification of flood magnitude, runoff generation, surface soil moisture and total water column in the soil. Additional tools for comparing different model configurations and performing model evaluation by comparing to observed variables at monitored sites are also available. The user-friendly interface has been published to the web under the URL http://ifis.iowafloodcenter.org/ifis/sc/modelplus/.
Drory Retwitzer, Matan; Polishchuk, Maya; Churkin, Elena; Kifer, Ilona; Yakhini, Zohar; Barash, Danny
2015-01-01
Searching for RNA sequence-structure patterns is becoming an essential tool for RNA practitioners. Novel discoveries of regulatory non-coding RNAs in targeted organisms and the motivation to find them across a wide range of organisms have prompted the use of computational RNA pattern matching as an enhancement to sequence similarity. State-of-the-art programs differ by the flexibility of patterns allowed as queries and by their simplicity of use. In particular—no existing method is available as a user-friendly web server. A general program that searches for RNA sequence-structure patterns is RNA Structator. However, it is not available as a web server and does not provide the option to allow flexible gap pattern representation with an upper bound of the gap length being specified at any position in the sequence. Here, we introduce RNAPattMatch, a web-based application that is user friendly and makes sequence/structure RNA queries accessible to practitioners of various background and proficiency. It also extends RNA Structator and allows a more flexible variable gaps representation, in addition to analysis of results using energy minimization methods. RNAPattMatch service is available at http://www.cs.bgu.ac.il/rnapattmatch. A standalone version of the search tool is also available to download at the site. PMID:25940619
Multi-model-based interactive authoring environment for creating shareable medical knowledge.
Ali, Taqdir; Hussain, Maqbool; Ali Khan, Wajahat; Afzal, Muhammad; Hussain, Jamil; Ali, Rahman; Hassan, Waseem; Jamshed, Arif; Kang, Byeong Ho; Lee, Sungyoung
2017-10-01
Technologically integrated healthcare environments can be realized if physicians are encouraged to use smart systems for the creation and sharing of knowledge used in clinical decision support systems (CDSS). While CDSSs are heading toward smart environments, they lack support for abstraction of technology-oriented knowledge from physicians. Therefore, abstraction in the form of a user-friendly and flexible authoring environment is required in order for physicians to create shareable and interoperable knowledge for CDSS workflows. Our proposed system provides a user-friendly authoring environment to create Arden Syntax MLM (Medical Logic Module) as shareable knowledge rules for intelligent decision-making by CDSS. Existing systems are not physician friendly and lack interoperability and shareability of knowledge. In this paper, we proposed Intelligent-Knowledge Authoring Tool (I-KAT), a knowledge authoring environment that overcomes the above mentioned limitations. Shareability is achieved by creating a knowledge base from MLMs using Arden Syntax. Interoperability is enhanced using standard data models and terminologies. However, creation of shareable and interoperable knowledge using Arden Syntax without abstraction increases complexity, which ultimately makes it difficult for physicians to use the authoring environment. Therefore, physician friendliness is provided by abstraction at the application layer to reduce complexity. This abstraction is regulated by mappings created between legacy system concepts, which are modeled as domain clinical model (DCM) and decision support standards such as virtual medical record (vMR) and Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We represent these mappings with a semantic reconciliation model (SRM). The objective of the study is the creation of shareable and interoperable knowledge using a user-friendly and flexible I-KAT. Therefore we evaluated our system using completeness and user satisfaction criteria, which we assessed through the system- and user-centric evaluation processes. For system-centric evaluation, we compared the implementation of clinical information modelling system requirements in our proposed system and in existing systems. The results suggested that 82.05% of the requirements were fully supported, 7.69% were partially supported, and 10.25% were not supported by our system. In the existing systems, 35.89% of requirements were fully supported, 28.20% were partially supported, and 35.89% were not supported. For user-centric evaluation, the assessment criterion was 'ease of use'. Our proposed system showed 15 times better results with respect to MLM creation time than the existing systems. Moreover, on average, the participants made only one error in MLM creation using our proposed system, but 13 errors per MLM using the existing systems. We provide a user-friendly authoring environment for creation of shareable and interoperable knowledge for CDSS to overcome knowledge acquisition complexity. The authoring environment uses state-of-the-art decision support-related clinical standards with increased ease of use. Copyright © 2017 Elsevier B.V. All rights reserved.
Real-time simulator for designing electron dual scattering foil systems.
Carver, Robert L; Hogstrom, Kenneth R; Price, Michael J; LeBlanc, Justin D; Pitcher, Garrett M
2014-11-08
The purpose of this work was to develop a user friendly, accurate, real-time computer simulator to facilitate the design of dual foil scattering systems for electron beams on radiotherapy accelerators. The simulator allows for a relatively quick, initial design that can be refined and verified with subsequent Monte Carlo (MC) calculations and measurements. The simulator also is a powerful educational tool. The simulator consists of an analytical algorithm for calculating electron fluence and X-ray dose and a graphical user interface (GUI) C++ program. The algorithm predicts electron fluence using Fermi-Eyges multiple Coulomb scattering theory with the reduced Gaussian formalism for scattering powers. The simulator also estimates central-axis and off-axis X-ray dose arising from the dual foil system. Once the geometry of the accelerator is specified, the simulator allows the user to continuously vary primary scattering foil material and thickness, secondary scattering foil material and Gaussian shape (thickness and sigma), and beam energy. The off-axis electron relative fluence or total dose profile and central-axis X-ray dose contamination are computed and displayed in real time. The simulator was validated by comparison of off-axis electron relative fluence and X-ray percent dose profiles with those calculated using EGSnrc MC. Over the energy range 7-20 MeV, using present foils on an Elekta radiotherapy accelerator, the simulator was able to reproduce MC profiles to within 2% out to 20 cm from the central axis. The central-axis X-ray percent dose predictions matched measured data to within 0.5%. The calculation time was approximately 100 ms using a single Intel 2.93 GHz processor, which allows for real-time variation of foil geometrical parameters using slider bars. This work demonstrates how the user-friendly GUI and real-time nature of the simulator make it an effective educational tool for gaining a better understanding of the effects that various system parameters have on a relative dose profile. This work also demonstrates a method for using the simulator as a design tool for creating custom dual scattering foil systems in the clinical range of beam energies (6-20 MeV).
Ryan, Michael C; Zeeberg, Barry R; Caplen, Natasha J; Cleland, James A; Kahn, Ari B; Liu, Hongfang; Weinstein, John N
2008-01-01
Background Over 60% of protein-coding genes in vertebrates express mRNAs that undergo alternative splicing. The resulting collection of transcript isoforms poses significant challenges for contemporary biological assays. For example, RT-PCR validation of gene expression microarray results may be unsuccessful if the two technologies target different splice variants. Effective use of sequence-based technologies requires knowledge of the specific splice variant(s) that are targeted. In addition, the critical roles of alternative splice forms in biological function and in disease suggest that assay results may be more informative if analyzed in the context of the targeted splice variant. Results A number of contemporary technologies are used for analyzing transcripts or proteins. To enable investigation of the impact of splice variation on the interpretation of data derived from those technologies, we have developed SpliceCenter. SpliceCenter is a suite of user-friendly, web-based applications that includes programs for analysis of RT-PCR primer/probe sets, effectors of RNAi, microarrays, and protein-targeting technologies. Both interactive and high-throughput implementations of the tools are provided. The interactive versions of SpliceCenter tools provide visualizations of a gene's alternative transcripts and probe target positions, enabling the user to identify which splice variants are or are not targeted. The high-throughput batch versions accept user query files and provide results in tabular form. When, for example, we used SpliceCenter's batch siRNA-Check to process the Cancer Genome Anatomy Project's large-scale shRNA library, we found that only 59% of the 50,766 shRNAs in the library target all known splice variants of the target gene, 32% target some but not all, and 9% do not target any currently annotated transcript. Conclusion SpliceCenter provides unique, user-friendly applications for assessing the impact of transcript variation on the design and interpretation of RT-PCR, RNAi, gene expression microarrays, antibody-based detection, and mass spectrometry proteomics. The tools are intended for use by bench biologists as well as bioinformaticists. PMID:18638396
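The core check behind such tools can be sketched simply: given the exon coordinates of each transcript isoform and the genomic interval of a probe, report which isoforms the probe actually targets. The Python example below uses hypothetical coordinates and is not SpliceCenter code.

# Sketch of the core check a tool like SpliceCenter performs (hypothetical data,
# not SpliceCenter's code): given exon coordinates for each transcript isoform,
# report which isoforms a probe's genomic interval actually falls within.
isoforms = {                       # isoform -> list of (exon_start, exon_end)
    "variant_1": [(100, 200), (300, 400), (500, 600)],
    "variant_2": [(100, 200), (500, 600)],          # skips the middle exon
    "variant_3": [(100, 200), (300, 400)],
}
probe = (320, 360)                 # genomic interval targeted by an RT-PCR probe

def targets(probe, exons):
    """True if the probe interval lies entirely inside one exon."""
    return any(start <= probe[0] and probe[1] <= end for start, end in exons)

for name, exons in isoforms.items():
    hit = "targeted" if targets(probe, exons) else "NOT targeted"
    print(f"{name}: {hit}")
# variant_2 is not targeted, so an assay built on this probe would miss it.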
Pushing and pulling: an assessment tool for occupational health and safety practitioners.
Lind, Carl Mikael
2018-03-01
A tool has been developed for supporting practitioners when assessing manual pushing and pulling operations based on an initiative by two global companies in the manufacturing industry. The aim of the tool is to support occupational health and safety practitioners in risk assessment and risk management of pushing and pulling operations in the manufacturing and logistics industries. The tool is based on a nine-multiplier equation that includes a wide range of factors affecting an operator's health risk and capacity in pushing and pulling. These multipliers are based on psychophysical, physiological and biomechanical studies in combination with judgments from an expert group consisting of senior researchers and ergonomists. In order to consider usability, more than 50 occupational health and safety practitioners (e.g., ergonomists, managers, safety representatives and production personnel) participated in the development of the tool. An evaluation by 22 ergonomists supports that the push/pull tool is user friendly in general.
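The abstract does not give the nine multipliers themselves, but the structure of such multiplier equations can be illustrated: a baseline force capacity is reduced by a product of dimensionless factors, and the measured task force is compared against the result. All values in the Python sketch below are placeholders, not the tool's actual multipliers.

# Hypothetical illustration of a multiplier-style risk equation (the tool's real
# nine multipliers and baseline values are not given in the abstract): a baseline
# push-force capacity is reduced by dimensionless multipliers, each <= 1.
from math import prod

baseline_capacity_N = 200.0          # made-up baseline initial push force

multipliers = {                      # all values below are placeholders
    "handle_height": 0.90,
    "travel_distance": 0.85,
    "push_frequency": 0.80,
    "floor_conditions": 0.95,
}

recommended_limit_N = baseline_capacity_N * prod(multipliers.values())
measured_force_N = 160.0             # force measured for the actual task

risk_index = measured_force_N / recommended_limit_N
print(f"Recommended limit: {recommended_limit_N:.0f} N, risk index: {risk_index:.2f}")
# A risk index above 1 would flag the operation for redesign.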
Jules Verne Voyager, Jr: An Interactive Map Tool for Teaching Plate Tectonics
NASA Astrophysics Data System (ADS)
Hamburger, M. W.; Meertens, C. M.
2010-12-01
We present an interactive, web-based map utility that can make new geological and geophysical results accessible to a large number and variety of users. The tool provides a user-friendly interface that allows users to access a variety of maps, satellite images, and geophysical data at a range of spatial scales. The map tool, dubbed 'Jules Verne Voyager, Jr.', allows users to interactively create maps of a variety of study areas around the world. The utility was developed in collaboration with the UNAVCO Consortium for study of global-scale tectonic processes. Users can choose from a variety of base maps (including "Face of the Earth" and "Earth at Night" satellite imagery mosaics, global topography, geoid, sea-floor age, strain rate and seismic hazard maps, and others), add a number of geographic and geophysical overlays (coastlines, political boundaries, rivers and lakes, earthquake and volcano locations, stress axes, etc.), and then superimpose both observed and model velocity vectors representing a compilation of 2933 GPS geodetic measurements from around the world. A remarkable characteristic of the geodetic compilation is that users can select from some 21 plates' frames of reference, allowing a visual representation of both 'absolute' plate motion (in a no-net rotation reference frame) and relative motion along all of the world's plate boundaries. The tool allows users to zoom among at least three map scales. The map tool can be viewed at http://jules.unavco.org/VoyagerJr/Earth. A more detailed version of the map utility, developed in conjunction with the EarthScope initiative, focuses on North America geodynamics, and provides more detailed geophysical and geographic information for the United States, Canada, and Mexico. The ‘EarthScope Voyager’ can be accessed at http://jules.unavco.org/VoyagerJr/EarthScope. Because the system uses pre-constructed gif images and overlays, the system can rapidly create and display maps to a large number of users simultaneously and does not require any special software installation on users' systems. In addition, a javascript-based educational interface, dubbed "Exploring our Dynamic Planet", incorporates the map tool, explanatory material, background scientific material, and curricular activities that encourage users to explore Earth processes using the Jules Verne Voyager, Jr. tool. Exploring our Dynamic Planet can be viewed at http://www.dpc.ucar.edu/VoyagerJr/. Because of its flexibility, the map utilities can be used for hands-on exercises exploring plate interaction in a range of academic settings, from high school science classes to entry-level undergraduate to graduate-level tectonics courses.
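The reference-frame switching behind such velocity maps rests on rigid-plate kinematics: a site's velocity is the cross product of the plate's Euler rotation vector with the site's position vector. The Python sketch below uses an illustrative Euler pole, not a value from the tool's GPS compilation.

# Sketch of the rigid-plate kinematics behind such velocity maps: a plate's
# surface velocity is v = omega x r, where omega is the plate's Euler rotation
# vector and r the site's position vector. The Euler pole below is illustrative.
import numpy as np

R_EARTH = 6.371e6                      # metres

def unit_position(lat_deg, lon_deg):
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def euler_vector(pole_lat, pole_lon, rate_deg_per_myr):
    omega = np.radians(rate_deg_per_myr) / (1e6 * 365.25 * 24 * 3600)  # rad/s
    return omega * unit_position(pole_lat, pole_lon)

site = R_EARTH * unit_position(19.4, -155.3)           # a site in Hawaii
omega = euler_vector(60.0, -90.0, 0.8)                 # hypothetical Euler pole

v = np.cross(omega, site)                              # metres per second
print("speed: %.1f mm/yr" % (np.linalg.norm(v) * 1000 * 365.25 * 24 * 3600))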
Final Technical Report Power through Policy: "Best Practices" for Cost-Effective Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rhoads-Weaver, Heather; Gagne, Matthew; Sahl, Kurt
2012-02-28
Power through Policy: 'Best Practices' for Cost-Effective Distributed Wind is a U.S. Department of Energy (DOE)-funded project to identify distributed wind technology policy best practices and to help policymakers, utilities, advocates, and consumers examine their effectiveness using a pro forma model. Incorporating a customized feed from the Database of State Incentives for Renewables and Efficiency (DSIRE), the Web-based Distributed Wind Policy Comparison Tool (Policy Tool) is designed to assist state, local, and utility officials in understanding the financial impacts of different policy options to help reduce the cost of distributed wind technologies. The project's final products include the Distributed Wind Policy Comparison Tool, found at www.windpolicytool.org, and its accompanying documentation: Distributed Wind Policy Comparison Tool Guidebook: User Instructions, Assumptions, and Case Studies. With only two initial user inputs required, the Policy Tool allows users to adjust and test a wide range of policy-related variables through a user-friendly dashboard interface with slider bars. The Policy Tool is populated with a variety of financial variables, including turbine costs, electricity rates, policies, and financial incentives; economic variables including discount and escalation rates; as well as technical variables that impact electricity production, such as turbine power curves and wind speed. The Policy Tool allows users to change many of the variables, including the policies, to gauge the expected impacts that various policy combinations could have on the cost of energy (COE), net present value (NPV), internal rate of return (IRR), and the simple payback of distributed wind projects ranging in size from 2.4 kilowatts (kW) to 100 kW. The project conducted case studies to demonstrate how the Policy Tool can provide insights into 'what if' scenarios and also allow the current status of incentives to be examined or defended when necessary. The ranking of distributed wind state policy and economic environments summarized in the attached report, based on the Policy Tool's default COE results, highlights favorable market opportunities for distributed wind growth as well as market conditions ripe for improvement. Best practices for distributed wind state policies are identified through an evaluation of their effect on improving the bottom line of project investments. The case studies and state rankings were based on incentives, power curves, and turbine pricing as of 2010, and may not match the current results from the Policy Tool. The Policy Tool can be used to evaluate the ways that a variety of federal and state policies and incentives impact the economics of distributed wind (and subsequently its expected market growth). It also allows policymakers to determine the impact of policy options, addressing market challenges identified in the U.S. DOE's '20% Wind Energy by 2030' report and helping to meet COE targets. In providing a simple and easy-to-use policy comparison tool that estimates financial performance, the Policy Tool and guidebook are expected to enhance market expansion by the small wind industry by increasing and refining the understanding of distributed wind costs, policy best practices, and key market opportunities in all 50 states.
This comprehensive overview and customized software to quickly calculate and compare policy scenarios represent a fundamental step in allowing policymakers to see how their decisions impact the bottom line for distributed wind consumers, while estimating the relative advantages of different options available in their policy toolboxes. Interested stakeholders have suggested numerous ways to enhance and expand the initial effort to develop an even more user-friendly Policy Tool and guidebook, including the enhancement and expansion of the current tool, and conducting further analysis. The report and the project's Guidebook include further details on possible next steps. NREL Report No. BK-5500-53127; DOE/GO-102011-3453.
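The financial metrics the Policy Tool reports can be sketched with basic cash-flow arithmetic. The Python example below uses placeholder inputs, not the Policy Tool's model or default values; varying the incentive input mimics the slider-bar comparison the tool provides.

# Back-of-the-envelope sketch of the metrics such a policy tool reports (all
# inputs below are placeholders, not the Policy Tool's defaults or model).
installed_cost = 60000.0        # $ for a hypothetical 10 kW turbine
incentive = 15000.0             # $ up-front policy incentive being tested
annual_energy_kwh = 14000.0
retail_rate = 0.12              # $/kWh offset by the turbine
o_and_m = 300.0                 # $/yr operations and maintenance
discount_rate = 0.06
lifetime = 20                   # years

net_cost = installed_cost - incentive
annual_cash_flow = annual_energy_kwh * retail_rate - o_and_m

npv = -net_cost + sum(annual_cash_flow / (1 + discount_rate) ** t
                      for t in range(1, lifetime + 1))
simple_payback = net_cost / annual_cash_flow
crf = discount_rate * (1 + discount_rate) ** lifetime / \
      ((1 + discount_rate) ** lifetime - 1)               # capital recovery factor
coe = (net_cost * crf + o_and_m) / annual_energy_kwh       # levelized $/kWh

print(f"NPV: ${npv:,.0f}  payback: {simple_payback:.1f} yr  COE: ${coe:.3f}/kWh")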
[Is there life beyond SPSS? Discover R].
Elosua Oliden, Paula
2009-11-01
R is a GNU statistical and programming environment with very strong graphical capabilities. It is very powerful for research purposes, but it is also an exceptional tool for teaching. R comprises more than 1400 packages, which make it suitable both for simple statistics and for applying the most complex and most recent formal models. Graphical interfaces such as the Rcommander package permit working in user-friendly environments similar to the graphical environment used by SPSS. This last characteristic allows non-statisticians to overcome the obstacle of accessibility, and it makes R the best tool for teaching. Is there anything better? Open, free, affordable, accessible and always on the cutting edge.
Gemi: PCR Primers Prediction from Multiple Alignments
Sobhy, Haitham; Colson, Philippe
2012-01-01
Designing primers and probes for polymerase chain reaction (PCR) is a preliminary and critical step that requires the identification of highly conserved regions in a given set of sequences. This task can be challenging if the targeted sequences display a high level of diversity, as frequently encountered in microbiologic studies. We developed Gemi, an automated, fast, and easy-to-use bioinformatics tool with a user-friendly interface to design primers and probes based on multiple aligned sequences. This tool can be used for the purpose of real-time and conventional PCR and can deal efficiently with large sets of sequences of a large size. PMID:23316117
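The first step such a tool automates, finding conserved regions in an alignment, can be sketched as a sliding-window scan over column identities. The Python example below uses a toy alignment and a simplified conservation score, not Gemi's algorithm.

# Sketch of the first step a tool like Gemi automates (hypothetical alignment,
# not Gemi's algorithm): slide a window along a multiple alignment and keep
# windows conserved enough to serve as primer candidates.
alignment = [
    "ATGGCGTACGTTAGCAT-ACGT",
    "ATGGCGTACGATAGCATTACGT",
    "ATGGCGTACGTTAGCATTACGT",
]
window, min_identity = 10, 0.9

def column_identity(col):
    """Fraction of sequences sharing the most common non-gap base."""
    bases = [b for b in col if b != "-"]
    return max(bases.count(b) for b in set(bases)) / len(col) if bases else 0.0

length = len(alignment[0])
scores = [column_identity([seq[i] for seq in alignment]) for i in range(length)]

for start in range(length - window + 1):
    mean_score = sum(scores[start:start + window]) / window
    if mean_score >= min_identity:
        print(f"candidate primer region {start}-{start + window}: "
              f"{alignment[0][start:start + window]}")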
S3D: An interactive surface grid generation tool
NASA Technical Reports Server (NTRS)
Luh, Raymond Ching-Chung; Pierce, Lawrence E.; Yip, David
1992-01-01
S3D, an interactive software tool for surface grid generation, is described. S3D provides the means with which a geometry definition based either on a discretized curve set or a rectangular set can be quickly processed towards the generation of a surface grid for computational fluid dynamics (CFD) applications. This is made possible as a result of implementing commonly encountered surface gridding tasks in an environment with a highly efficient and user friendly graphical interface. Some of the more advanced features of S3D include surface-surface intersections, optimized surface domain decomposition and recomposition, and automated propagation of edge distributions to surrounding grids.
Soybean Knowledge Base (SoyKB): a Web Resource for Soybean Translational Genomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Trupti; Patil, Kapil; Fitzpatrick, Michael R.
2012-01-17
Background: Soybean Knowledge Base (SoyKB) is a comprehensive all-inclusive web resource for soybean translational genomics. SoyKB is designed to handle the management and integration of soybean genomics, transcriptomics, proteomics and metabolomics data along with annotation of gene function and biological pathway. It contains information on four entities, namely genes, microRNAs, metabolites and single nucleotide polymorphisms (SNPs). Methods: SoyKB has many useful tools such as Affymetrix probe ID search, gene family search, multiple gene/metabolite search supporting co-expression analysis, and protein 3D structure viewer as well as download and upload capacity for experimental data and annotations. It has four tiers of registration, which control different levels of access to public and private data. It allows users of certain levels to share their expertise by adding comments to the data. It has a user-friendly web interface together with genome browser and pathway viewer, which display data in an intuitive manner to the soybean researchers, producers and consumers. Conclusions: SoyKB addresses the increasing need of the soybean research community to have a one-stop-shop functional and translational omics web resource for information retrieval and analysis in a user-friendly way. SoyKB can be publicly accessed at http://soykb.org/.
Development of an imaging method for quantifying a large digital PCR droplet
NASA Astrophysics Data System (ADS)
Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang
2017-02-01
Portable devices have been recognized as the future linkage between end users and lab-on-a-chip devices. They have user-friendly interfaces and provide apps to interface with headphones, cameras, communication functions, and so on. In particular, the cameras installed in smartphones or tablets already offer high imaging resolution with a large number of pixels. This unique feature has prompted research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images of a digital PCR biochip taken with a smartphone in a black box. Each reaction droplet is recorded in pixels and is analyzed in the sRGB (red, green, and blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise contributed by the CCD camera and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app and can provide a user-friendly interface that requires no professional training.
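Once positive droplets are counted, digital PCR quantification conventionally applies a Poisson correction, lambda = -ln(1 - p), where p is the fraction of positive droplets. The Python sketch below uses made-up counts and droplet volume, not values from this study.

# Sketch of the standard digital PCR quantification step that follows droplet
# counting (counts and droplet volume below are made up, not from the paper):
# the Poisson estimate corrects for droplets that received more than one copy.
import math

positive_droplets = 2300
total_droplets = 20000
droplet_volume_nl = 0.85            # hypothetical droplet volume

p = positive_droplets / total_droplets
copies_per_droplet = -math.log(1.0 - p)          # Poisson mean, lambda
copies_per_ul = copies_per_droplet / (droplet_volume_nl * 1e-3)

print(f"lambda = {copies_per_droplet:.4f} copies/droplet")
print(f"concentration = {copies_per_ul:.0f} copies/uL of reaction mix")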
Rachinger, Jens; Bumm, Klaus; Wurm, Jochen; Bohr, Christopher; Nissen, Urs; Dannenmann, Tim; Buchfelder, Michael; Iro, Heinrich; Nimsky, Christopher
2007-01-01
To introduce a new robotic system to the field of neurosurgery and report on a preliminary assessment of accuracy as well as on envisioned application concepts. Based on experience with another system (Evolution 1, URS Inc., Schwerin, Germany), technical advancements are discussed. The basic module is an industrial 6 degrees of freedom robotic arm with a modified control element. The system combines frameless stereotaxy, robotics, and endoscopy. The robotic reproducibility error and the overall error were evaluated. For accuracy testing CT markers were placed on a cadaveric head and pinpointed with the robot's tool tip, both fully automated and telemanipulatory. Applicability in a clinical setting, user friendliness, safety and flexibility were assessed. The new system is suitable for use in the neurosurgical operating theatre. Hard- and software are user-friendly and flexible. The mean reproducibility error was 0.052-0.062 mm, the mean overall error was 0.816 mm. The system is less cumbersome and much easier to use than the Evolution 1. With its user-friendly interface and reliable safety features, its high application accuracy and flexibility, the new system is a versatile robotic platform for various neurosurgical applications. Adaptations for different applications are currently being realized. Copyright (c) 2007 S. Karger AG, Basel.
Volumetric neuroimage analysis extensions for the MIPAV software package.
Bazin, Pierre-Louis; Cuzzocreo, Jennifer L; Yassa, Michael A; Gandler, William; McAuliffe, Matthew J; Bassett, Susan S; Pham, Dzung L
2007-09-15
We describe a new collection of publicly available software tools for performing quantitative neuroimage analysis. The tools perform semi-automatic brain extraction, tissue classification, Talairach alignment, and atlas-based measurements within a user-friendly graphical environment. They are implemented as plug-ins for MIPAV, a freely available medical image processing software package from the National Institutes of Health. Because the plug-ins and MIPAV are implemented in Java, both can be utilized on nearly any operating system platform. In addition to the software plug-ins, we have also released a digital version of the Talairach atlas that can be used to perform regional volumetric analyses. Several studies are conducted applying the new tools to simulated and real neuroimaging data sets.
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy
2016-05-01
Cyberspace is increasingly becoming a battlefield between friendly and adversary forces, with normal users caught in the middle. Accordingly, planners of enterprise defensive policies and offensive cyber missions alike share an essential goal: to minimize the impact of their own actions and of adversaries' attacks on the normal operations of commercial and government networks. To do this, cyber analysts need accurate "cyber battle maps", in which the functions, roles, and activities of individual devices and users, and of groups of them, are accurately identified. Most of the research in cyber exploitation has focused on the identification of attacks, attackers, and their devices. Many tools exist for device profiling, malware identification, user attribution, and attack analysis. However, most of these tools are intrusive, sensitive to data obfuscation, or limited to anomaly flagging, and they are not able to correctly classify the semantics and causes of network activities. In this paper, we review existing solutions that can identify functional and social roles of entities in cyberspace, discuss their weaknesses, and propose an approach for developing functional and social layers of cyber battle maps.
CheD: chemical database compilation tool, Internet server, and client for SQL servers.
Trepalin, S V; Yarkov, A V
2001-01-01
An efficient program, which runs on a personal computer, for the storage, retrieval, and processing of chemical information is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.
Gopalaswamy, Arjun M.; Royle, J. Andrew; Hines, James E.; Singh, Pallavi; Jathanna, Devcharan; Kumar, N. Samba; Karanth, K. Ullas
2012-01-01
1. The advent of spatially explicit capture-recapture models is changing the way ecologists analyse capture-recapture data. However, the advantages offered by these new models are not fully exploited because they can be difficult to implement. 2. To address this need, we developed a user-friendly software package, created within the R programming environment, called SPACECAP. This package implements Bayesian spatially explicit hierarchical models to analyse spatial capture-recapture data. 3. Given that a large number of field biologists prefer software with graphical user interfaces for analysing their data, SPACECAP is particularly useful as a tool to increase the adoption of Bayesian spatially explicit capture-recapture methods in practice.
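At the heart of spatially explicit capture-recapture is a detection probability that declines with distance from an animal's activity centre, commonly modelled as half-normal. The Python sketch below evaluates that detection function on a hypothetical trap grid; it illustrates the model component only, not SPACECAP's Bayesian implementation.

# Illustrative piece of the spatially explicit capture-recapture likelihood
# (not SPACECAP's Bayesian implementation): detection probability at a trap
# falls off with distance from an animal's activity centre, here half-normal.
import numpy as np

p0, sigma = 0.3, 1.5                         # hypothetical detection parameters
traps = np.array([(x, y) for x in range(5) for y in range(5)], float)  # 5x5 grid
activity_centre = np.array([2.3, 1.7])       # one animal's latent home-range centre

d = np.linalg.norm(traps - activity_centre, axis=1)
p_detect = p0 * np.exp(-d**2 / (2 * sigma**2))

expected_captures = p_detect.sum()           # expected detections in one occasion
print(f"expected captures per occasion: {expected_captures:.2f}")
print("most likely trap:", traps[np.argmax(p_detect)])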
A Data-Driven Solution for Performance Improvement
NASA Technical Reports Server (NTRS)
2002-01-01
Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user- friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.
Eisen, Lars; Lozano-Fuentes, Saul
2009-01-01
The aims of this review paper are to 1) provide an overview of how mapping and spatial and space-time modeling approaches have been used to date to visualize and analyze mosquito vector and epidemiologic data for dengue; and 2) discuss the potential for these approaches to be included as routine activities in operational vector and dengue control programs. Geographical information system (GIS) software are becoming more user-friendly and now are complemented by free mapping software that provide access to satellite imagery and basic feature-making tools and have the capacity to generate static maps as well as dynamic time-series maps. Our challenge is now to move beyond the research arena by transferring mapping and GIS technologies and spatial statistical analysis techniques in user-friendly packages to operational vector and dengue control programs. This will enable control programs to, for example, generate risk maps for exposure to dengue virus, develop Priority Area Classifications for vector control, and explore socioeconomic associations with dengue risk. PMID:19399163
PHYLOViZ: phylogenetic inference and data visualization for sequence based typing methods
2012-01-01
Background With the decrease of DNA sequencing costs, sequence-based typing methods are rapidly becoming the gold standard for epidemiological surveillance. These methods provide reproducible and comparable results needed for a global scale bacterial population analysis, while retaining their usefulness for local epidemiological surveys. Online databases that collect the generated allelic profiles and associated epidemiological data are available but this wealth of data remains underused and are frequently poorly annotated since no user-friendly tool exists to analyze and explore it. Results PHYLOViZ is platform independent Java software that allows the integrated analysis of sequence-based typing methods, including SNP data generated from whole genome sequence approaches, and associated epidemiological data. goeBURST and its Minimum Spanning Tree expansion are used for visualizing the possible evolutionary relationships between isolates. The results can be displayed as an annotated graph overlaying the query results of any other epidemiological data available. Conclusions PHYLOViZ is a user-friendly software that allows the combined analysis of multiple data sources for microbial epidemiological and population studies. It is freely available at http://www.phyloviz.net. PMID:22568821
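The goeBURST and minimum-spanning-tree view links isolates by the number of loci at which their allelic profiles differ. The Python sketch below builds such a tree with Prim's algorithm over hypothetical seven-locus profiles; goeBURST's tie-breaking rules are not reproduced.

# Sketch of the core idea behind goeBURST-style analysis (hypothetical 7-locus
# MLST profiles; goeBURST's tie-breaking rules are not reproduced here):
# link isolates by the number of differing loci using a minimum spanning tree.
profiles = {
    "ST1": (1, 3, 1, 1, 1, 1, 3),
    "ST2": (1, 3, 1, 1, 1, 1, 1),
    "ST3": (1, 3, 4, 1, 1, 1, 1),
    "ST4": (2, 3, 4, 1, 7, 1, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Prim's algorithm over the complete graph of pairwise locus differences.
nodes = list(profiles)
in_tree, edges = {nodes[0]}, []
while len(in_tree) < len(nodes):
    u, v = min(((u, v) for u in in_tree for v in nodes if v not in in_tree),
               key=lambda e: hamming(profiles[e[0]], profiles[e[1]]))
    edges.append((u, v, hamming(profiles[u], profiles[v])))
    in_tree.add(v)

for u, v, d in edges:
    print(f"{u} -- {v}  ({d} locus difference{'s' if d > 1 else ''})")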
NASA Astrophysics Data System (ADS)
Sakimoto, S. E. H.
2016-12-01
Planetary volcanism has redefined what is considered volcanism. "Magma" now may be considered to be anything from the molten rock familiar at terrestrial volcanoes to cryovolcanic ammonia-water mixes erupted on an outer solar system moon. However, even with unfamiliar compositions and source mechanisms, we find familiar landforms such as volcanic channels, lakes, flows, and domes and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption and/or flow rates. This has potential pitfalls, as extension of the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind the state of the art; the easiest methods to use are neither realistic nor accurate, but the more realistic and accurate computational methods are not simple to use. Since the latter computational tools tend to be expensive and to require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is to use the computational package to generate a server-based tool that allows less computationally inclined users to get accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package for the generation of a polynomial empirical solution for each class of flow geometry that can be fairly easily solved by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow with a yield strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars. This approach also shows promise with model applications to lava lakes, magma flow through cracks, and volcanic dome formation.
iPat: intelligent prediction and association tool for genomic research.
Chen, Chunpeng James; Zhang, Zhiwu
2018-06-01
The ultimate goal of genomic research is to effectively predict phenotypes from genotypes so that medical management can improve human health and molecular breeding can increase agricultural production. Genomic prediction or selection (GS) plays a complementary role to genome-wide association studies (GWAS), which is the primary method to identify genes underlying phenotypes. Unfortunately, most computing tools cannot perform data analyses for both GWAS and GS. Furthermore, the majority of these tools are executed through a command-line interface (CLI), which requires programming skills. Non-programmers struggle to use them efficiently because of the steep learning curves and zero tolerance for data formats and mistakes when inputting keywords and parameters. To address these problems, this study developed a software package, named the Intelligent Prediction and Association Tool (iPat), with a user-friendly graphical user interface. With iPat, GWAS or GS can be performed using a pointing device to simply drag and/or click on graphical elements to specify input data files, choose input parameters and select analytical models. Models available to users include those implemented in third party CLI packages such as GAPIT, PLINK, FarmCPU, BLINK, rrBLUP and BGLR. Users can choose any data format and conduct analyses with any of these packages. File conversions are automatically conducted for specified input data and selected packages. A GWAS-assisted genomic prediction method was implemented to perform genomic prediction using any GWAS method such as FarmCPU. iPat was written in Java for adaptation to multiple operating systems including Windows, Mac and Linux. The iPat executable file, user manual, tutorials and example datasets are freely available at http://zzlab.net/iPat. zhiwu.zhang@wsu.edu.
iMeteo: A web-based weather visualization tool
NASA Astrophysics Data System (ADS)
Tuni San-Martín, Max; San-Martín, Daniel; Cofiño, Antonio S.
2010-05-01
iMeteo is a web-based weather visualization tool. Designed with an extensible J2EE architecture, it is capable of displaying information from heterogeneous data sources such as gridded data from numerical models (in NetCDF format) or databases of local predictions. All this information is presented in a user-friendly way, allowing users to choose the specific tool to display data (maps, graphs, information tables) and to customize it to desired locations. *Modular Display System* Visualization of the data is achieved through a set of mini tools called widgets. A user can add them at will and arrange them around the screen easily with a drag and drop movement. They can be of various types and each can be configured separately, forming a really powerful and configurable system. The "Map" is the most complex widget, since it can show several variables simultaneously (either gridded or point-based) through a layered display. Other useful widgets are the "Histogram", which generates a graph with the frequency characteristics of a variable, and the "Timeline", which shows the time evolution of a variable at a given location in an interactive way. *Customization and security* Following the trends in web development, the user can easily customize the way data is displayed. Because the client side is programmed with technologies like AJAX, interaction with the application is similar to that of desktop software, with rapid response times. Registered users can also save their settings in the database, allowing them to access their particular setup from any system with an Internet connection. There is particular emphasis on application security. The administrator can define a set of user profiles, which may have associated restrictions on access to certain data sources, geographic areas or time intervals.
Hussain-Alkhateeb, Laith; Kroeger, Axel; Olliaro, Piero; Rocklöv, Joacim; Sewe, Maquins Odhiambo; Tejeda, Gustavo; Benitez, David; Gill, Balvinder; Hakim, S Lokman; Gomes Carvalho, Roberta; Bowman, Leigh; Petzold, Max
2018-01-01
Dengue outbreaks are increasing in frequency over space and time, affecting people's health and burdening resource-constrained health systems. The ability to detect early emerging outbreaks is key to mounting an effective response. The early warning and response system (EWARS) is a toolkit that provides countries with early-warning systems for efficient and cost-effective local responses. EWARS uses outbreak and alarm indicators to derive prediction models that can be used prospectively to predict a forthcoming dengue outbreak at district level. We report on the development of the EWARS tool, based on users' recommendations, into convenient, user-friendly and reliable software supported by a user's workbook, and on its field testing in 30 health districts in Brazil, Malaysia and Mexico. Thirty-four health officers from the 30 study districts who had used the original EWARS for 7 to 10 months responded to a questionnaire with mainly open-ended questions. Qualitative content analysis showed that participants were generally satisfied with the tool but preferred open-access vs. commercial software. EWARS users also stated that the geographical unit should be the district, while access to meteorological information should be improved. These recommendations were incorporated into the second-generation EWARS-R, built on the free R software, which, combined with recent surveillance data, resulted in higher sensitivities and positive predictive values of alarm signals than the first-generation EWARS. Currently, the use of satellite data for meteorological information is being tested and a dashboard is being developed to increase the user-friendliness of the tool. The inclusion of other Aedes-borne viral diseases is under discussion. EWARS is a pragmatic and useful tool for detecting imminent dengue outbreaks to trigger early response activities.
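Scoring alarm signals against observed outbreaks reduces to sensitivity and positive predictive value. The Python sketch below uses made-up weekly alarm and outbreak flags, not EWARS output.

# Sketch of how alarm-signal quality is typically scored (weekly flags below are
# made up, not EWARS output): sensitivity = detected outbreaks / all outbreaks,
# PPV = alarms followed by an outbreak / all alarms.
alarms    = [0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]   # 1 = alarm raised that week
outbreaks = [0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # 1 = outbreak observed

true_pos  = sum(a and o for a, o in zip(alarms, outbreaks))
false_pos = sum(a and not o for a, o in zip(alarms, outbreaks))
false_neg = sum(o and not a for a, o in zip(alarms, outbreaks))

sensitivity = true_pos / (true_pos + false_neg)
ppv = true_pos / (true_pos + false_pos)
print(f"sensitivity = {sensitivity:.2f}, positive predictive value = {ppv:.2f}")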
Living Lab as an Agile Approach in Developing User-Friendly Welfare Technology.
Holappa, Niina; Sirkka, Andrew
2017-01-01
This paper discusses living lab as a method of developing user-friendly welfare technology, and presents a qualitative evaluation research of how living lab tested technologies impacted on the life of healthcare customers and professionals over test periods.
Let's get real about virtual: online health is here to stay.
Prainsack, Barbara
2013-08-01
A lot has been written about the opportunities of the Internet for medicine, and lately, also for disease research specifically. Although it remains to be seen how significant and sustainable a change this will result in, some recent developments are highly relevant for the area of genetic research. User-friendly, low-threshold web-based tools do not only provide information to patients and other users, but they also supply user-generated data that can be utilized by both medical practice and medical research. Many of these developments have been below the radar of mainstream academic research so far. Issues related to data quality and standardization, as well as data protection and privacy, still need to be addressed. Dismissing these platforms as fads of a tiny privileged minority risks missing the opportunity to have our say in these debates.
PRISE2: software for designing sequence-selective PCR primers and probes.
Huang, Yu-Ting; Yang, Jiue-in; Chrobak, Marek; Borneman, James
2014-09-25
PRISE2 is a new software tool for designing sequence-selective PCR primers and probes. To achieve high level of selectivity, PRISE2 allows the user to specify a collection of target sequences that the primers are supposed to amplify, as well as non-target sequences that should not be amplified. The program emphasizes primer selectivity on the 3' end, which is crucial for selective amplification of conserved sequences such as rRNA genes. In PRISE2, users can specify desired properties of primers, including length, GC content, and others. They can interactively manipulate the list of candidate primers, to choose primer pairs that are best suited for their needs. A similar process is used to add probes to selected primer pairs. More advanced features include, for example, the capability to define a custom mismatch penalty function. PRISE2 is equipped with a graphical, user-friendly interface, and it runs on Windows, Macintosh or Linux machines. PRISE2 has been tested on two very similar strains of the fungus Dactylella oviparasitica, and it was able to create highly selective primers and probes for each of them, demonstrating the ability to create useful sequence-selective assays. PRISE2 is a user-friendly, interactive software package that can be used to design high-quality selective primers for PCR experiments. In addition to choosing primers, users have an option to add a probe to any selected primer pair, enabling design of Taqman and other primer-probe based assays. PRISE2 can also be used to design probes for FISH and other hybridization-based assays.
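PRISE2's emphasis on the 3' end can be illustrated with a simplified check: a candidate primer should match its binding site in every target but carry at least one mismatch in its last few 3' bases against every non-target. The Python sketch below uses hypothetical sequences and a simplified rule, not PRISE2's penalty function.

# Sketch of a 3'-end selectivity check in the spirit of PRISE2 (hypothetical
# sequences and a simplified rule, not PRISE2's actual scoring).
primer = "GGATCCGTAACTTCGG"
targets     = ["AAGGATCCGTAACTTCGGTTT", "TTGGATCCGTAACTTCGGAAA"]
non_targets = ["AAGGATCCGTAACTTCAGTTT"]   # differs near the primer's 3' end

def three_prime_mismatches(primer, template, tail=4):
    """Minimum mismatches between the primer's 3' tail and any binding window."""
    best = None
    for i in range(len(template) - len(primer) + 1):
        site = template[i:i + len(primer)]
        mm = sum(p != s for p, s in zip(primer[-tail:], site[-tail:]))
        best = mm if best is None else min(best, mm)
    return best

ok_targets = all(three_prime_mismatches(primer, t) == 0 for t in targets)
ok_nontargets = all(three_prime_mismatches(primer, n) >= 1 for n in non_targets)
print("selective primer" if ok_targets and ok_nontargets else "rejected")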
An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.
Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V
2014-07-01
We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
TIME Impact - a new user-friendly tuberculosis (TB) model to inform TB policy decisions.
Houben, R M G J; Lalli, M; Sumner, T; Hamilton, M; Pedrazzoli, D; Bonsu, F; Hippner, P; Pillay, Y; Kimerling, M; Ahmedov, S; Pretorius, C; White, R G
2016-03-24
Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, predominantly affecting low- and middle-income countries (LMICs), where resources are limited. As such, countries need to be able to choose the most efficient interventions for their respective setting. Mathematical models can be valuable tools to inform rational policy decisions and improve resource allocation, but are often unavailable or inaccessible for LMICs, particularly in TB. We developed TIME Impact, a user-friendly TB model that enables local capacity building and strengthens country-specific policy discussions to inform support funding applications at the (sub-)national level (e.g. Ministry of Finance) or to international donors (e.g. the Global Fund to Fight AIDS, Tuberculosis and Malaria). TIME Impact is an epidemiological transmission model nested in TIME, a set of TB modelling tools available for free download within the widely-used Spectrum software. The TIME Impact model reflects key aspects of the natural history of TB, with additional structure for HIV/ART, drug resistance, treatment history and age. TIME Impact enables national TB programmes (NTPs) and other TB policymakers to better understand their own TB epidemic, plan their response, apply for funding and evaluate the implementation of the response. The explicit aim of TIME Impact's user-friendly interface is to enable training of local and international TB experts towards independent use. During application of TIME Impact, close involvement of the NTPs and other local partners also builds critical understanding of the modelling methods, assumptions and limitations inherent to modelling. This is essential to generate broad country-level ownership of the modelling data inputs and results. In turn, it stimulates discussions and a review of the current evidence and assumptions, strengthening the decision-making process in general. TIME Impact has been effectively applied in a variety of settings. In South Africa, it informed the first South African HIV and TB Investment Cases and successfully leveraged additional resources from the National Treasury at a time of austerity. In Ghana, a long-term TIME model-centred interaction with the NTP provided new insights into the local epidemiology and guided resource allocation decisions to improve impact.
TRMM Precipitation Application Examples Using Data Services at NASA GES DISC
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Teng, W.; Kempler, S.; Greene, M.
2012-01-01
Data services to support precipitation applications are important for maximizing the NASA TRMM (Tropical Rainfall Measuring Mission) and the future GPM (Global Precipitation Mission) mission's societal benefits. TRMM Application examples using data services at the NASA GES DISC, including samples from users around the world will be presented in this poster. Precipitation applications often require near-real-time support. The GES DISC provides such support through: 1) Providing near-real-time precipitation products through TOVAS; 2) Maps of current conditions for monitoring precipitation and its anomaly around the world; 3) A user friendly tool (TOVAS) to analyze and visualize near-real-time and historical precipitation products; and 4) The GES DISC Hurricane Portal that provides near-real-time monitoring services for the Atlantic basin. Since the launch of TRMM, the GES DISC has developed data services to support precipitation applications around the world. In addition to the near-real-time services, other services include: 1) User friendly TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/); 2) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at GES DISC. Mirador is designed to be fast and easy to learn; 3) Data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/). The OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; and 4) The Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml). The WMS is an interface that allows the use of data and enables clients to build customized maps with data coming from a different network.
Baig, Hasan; Madsen, Jan
2017-01-15
Simulation and behavioral analysis of genetic circuits is a standard approach of functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. The runtime interaction gives the user a feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation, and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
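Since D-VASim consumes SBML models produced elsewhere, it may help to see how such a model can be inspected programmatically. The sketch below uses the python-libsbml bindings; the file name is a placeholder, and D-VASim's own parsing and logic-extraction steps are not reproduced.

```python
# A minimal sketch of inspecting an SBML genetic-circuit model with the
# python-libsbml bindings (pip install python-libsbml). "circuit.xml" is a
# hypothetical file name.
import libsbml

doc = libsbml.readSBML("circuit.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()                     # report any parsing problems

model = doc.getModel()
print("species:", model.getNumSpecies(), "reactions:", model.getNumReactions())

for i in range(model.getNumSpecies()):
    sp = model.getSpecies(i)
    print("species:", sp.getId(), "initial amount:", sp.getInitialAmount())

for i in range(model.getNumReactions()):
    print("reaction:", model.getReaction(i).getId())
```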
AGORA: Organellar genome annotation from the amino acid and nucleotide references.
Jung, Jaehee; Kim, Jong Im; Jeong, Young-Sik; Yi, Gangman
2018-03-29
Next-generation sequencing (NGS) technologies have led to the accumulation of high-throughput sequence data from various organisms in biology. To apply gene annotation to the organellar genomes of these various organisms, more optimized tools for functional gene annotation are required. Almost all gene annotation tools focus mainly on the chloroplast genome of land plants or the mitochondrial genome of animals. We have developed a web application, AGORA, for the fast, user-friendly, and improved annotation of organellar genomes. AGORA annotates genes based on a BLAST-based homology search and clustering with selected reference sequences from the NCBI database or user-defined uploaded data. AGORA can annotate the functional genes in almost all mitochondrial and plastid genomes of eukaryotes. Gene annotation of a genome with an exon-intron structure within a gene or an inverted repeat region is also available. It provides the start and end positions of each gene, BLAST results compared with the reference sequence, and a visualization of the gene map by OGDRAW. Users can freely use the software, and the accessible URL is https://bigdata.dongguk.edu/gene_project/AGORA/. The main module of the tool is implemented in Python and PHP, and the web page is built with HTML and CSS to support all browsers. gangman@dongguk.edu.
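The BLAST-based homology step described above can be illustrated with a small, generic sketch: assign each query gene the product of its best hit in a reference set, parsed from standard tabular BLAST output (-outfmt 6). File names and the reference product map are hypothetical; this is not AGORA's actual code.

```python
# Assign each query the product of its best BLAST hit, using tabular output
# (-outfmt 6: qseqid sseqid pident length mismatch gapopen qstart qend
#  sstart send evalue bitscore). Inputs below are hypothetical.
import csv

reference_products = {"NC_012920_ND1": "NADH dehydrogenase subunit 1"}  # hypothetical map

best_hits = {}
with open("blast_outfmt6.tsv") as fh:
    for row in csv.reader(fh, delimiter="\t"):
        qseqid, sseqid = row[0], row[1]
        pident, bitscore = float(row[2]), float(row[11])
        if qseqid not in best_hits or bitscore > best_hits[qseqid][1]:
            best_hits[qseqid] = (sseqid, bitscore, pident)

for query, (subject, score, pident) in best_hits.items():
    product = reference_products.get(subject, "hypothetical protein")
    print(f"{query}\t{subject}\t{pident:.1f}%\t{product}")
```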
Modeling biochemical transformation processes and information processing with Narrator.
Mandel, Johannes J; Fuss, Hendrik; Palfreyman, Niall M; Dubitzky, Werner
2007-03-27
Software tools that model and simulate the dynamics of biological processes and systems are becoming increasingly important. Some of these tools offer sophisticated graphical user interfaces (GUIs), which greatly enhance their acceptance by users. Such GUIs are based on symbolic or graphical notations used to describe, interact and communicate the developed models. Typically, these graphical notations are geared towards conventional biochemical pathway diagrams. They permit the user to represent the transport and transformation of chemical species and to define inhibitory and stimulatory dependencies. A critical weakness of existing tools is their lack of support for an integrative representation of transport, transformation and biological information processing. Narrator is a software tool facilitating the development and simulation of biological systems as Co-dependence models. The Co-dependence Methodology complements the representation of species transport and transformation together with an explicit mechanism to express biological information processing. Thus, Co-dependence models explicitly capture, for instance, signal processing structures and the influence of exogenous factors or events affecting certain parts of a biological system or process. This combined set of features provides the system biologist with a powerful tool to describe and explore the dynamics of life phenomena. Narrator's GUI is based on an expressive graphical notation which forms an integral part of the Co-dependence Methodology. Behind the user-friendly GUI, Narrator hides a flexible feature which makes it relatively easy to map models defined via the graphical notation to mathematical formalisms and languages such as ordinary differential equations, the Systems Biology Markup Language or Gillespie's direct method. This powerful feature facilitates reuse, interoperability and conceptual model development. Narrator is a flexible and intuitive systems biology tool. It is specifically intended for users aiming to construct and simulate dynamic models of biology without recourse to extensive mathematical detail. Its design facilitates mappings to different formal languages and frameworks. The combined set of features makes Narrator unique among tools of its kind. Narrator is implemented as a Java software program and is available as open source from http://www.narrator-tool.org.
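One of the simulation targets mentioned above, Gillespie's direct method, can be written compactly. The sketch below simulates a toy birth-death process rather than a Co-dependence model; species and rate constants are illustrative only.

```python
# Gillespie's direct method for a toy birth-death process:
# production at constant rate k_birth, degradation at rate k_death * x.
import math
import random

def gillespie_birth_death(k_birth=2.0, k_death=0.1, x0=10, t_end=50.0, seed=1):
    random.seed(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a1 = k_birth             # propensity of the birth reaction
        a2 = k_death * x         # propensity of the death reaction
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(random.random()) / a0   # exponentially distributed waiting time
        x += 1 if random.random() * a0 < a1 else -1
        trajectory.append((t, x))
    return trajectory

print(gillespie_birth_death()[-1])   # final (time, copy number)
```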
NASA Astrophysics Data System (ADS)
Herdiansyah, Herdis; Satriya Utama, Andre; Safruddin; Hidayat, Heri; Gema Zuliana Irawan, Angga; Immanuel Tjandra Muliawan, R.; Mutia Pratiwi, Diana
2017-10-01
One of the factors that influence the development of science is the existence of libraries, in this case college libraries. A library located in a college environment aims to supply collections of literature to support the research and educational activities of the college's students. Conceptually, every library now starts to apply environmental principles. For example, the "X" library, a central library, claims to be an environmentally friendly library because it practices environmentally friendly management, but it has not yet considered user satisfaction and service, including whether the environmentally friendly processes are actually perceived by library users. Satisfaction can be seen from the comparison between the expectations and the reality experienced by library users. This paper analyzes the level of user satisfaction with library services in the campus area and the gap between the expectations and the reality felt by the library users. The results show that there is a disparity between the aspiration of sustainable and environmentally friendly library management and the reality of how the library is managed, so it has not yet satisfied its users. The largest satisfaction gap is in the library collection, with a value of 1.57, while the smallest gap is in providing the same service to all students, with a value of 0.67.
A study of the age attribute in a query tool for a clinical data warehouse.
Scheufele, Elisabeth Lee; Dubey, Anil Kumar; Murphy, Shawn N
2008-11-06
The RPDR, a clinical data warehouse with a user-friendly Querytool, allows researchers to perform studies on patient data. Currently, the RPDR represents age as the patient's age at the present time, which is problematic in situations where age at the time of the event is more appropriate. We will modify the Querytool to consider this by assessing the perception of age via survey, testing backend query solutions, and developing modifications based on these results.
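The distinction raised in this abstract, age at the time of a clinical event versus age at query time, reduces to a small date calculation. The sketch below illustrates it with invented dates and is not part of the RPDR Querytool.

```python
# Age at an event date versus age today, in whole completed years.
from datetime import date

def age_on(birth_date: date, reference_date: date) -> int:
    """Whole years completed at reference_date."""
    years = reference_date.year - birth_date.year
    if (reference_date.month, reference_date.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

birth = date(1950, 6, 15)
event = date(2003, 2, 1)                  # e.g., date of a diagnosis
print(age_on(birth, event))               # age at the event: 52
print(age_on(birth, date.today()))        # age at query time, as currently reported
```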
Comprehensive Analysis of DNA Methylation Data with RnBeads
Walter, Jörn; Lengauer, Thomas; Bock, Christoph
2014-01-01
RnBeads is a software tool for large-scale analysis and interpretation of DNA methylation data, providing a user-friendly analysis workflow that yields detailed hypertext reports (http://rnbeads.mpi-inf.mpg.de). Supported assays include whole genome bisulfite sequencing, reduced representation bisulfite sequencing, Infinium microarrays, and any other protocol that produces high-resolution DNA methylation data. Important applications of RnBeads include the analysis of epigenome-wide association studies and epigenetic biomarker discovery in cancer cohorts. PMID:25262207
Program Aids Design Of Fluid-Circulating Systems
NASA Technical Reports Server (NTRS)
Bacskay, Allen; Dalee, Robert
1992-01-01
Computer Aided Systems Engineering and Analysis (CASE/A) program is interactive software tool for trade study and analysis, designed to increase productivity during all phases of systems engineering. Graphics-based command-driven software package provides user-friendly computing environment in which engineer analyzes performance and interface characteristics of ECLS/ATC system. Useful during all phases of spacecraft-design program, from initial conceptual design trade studies to actual flight, including pre-flight prediction and in-flight analysis of anomalies. Written in FORTRAN 77.
Using bio.tools to generate and annotate workbench tool descriptions
Hillion, Kenzo-Hugo; Kuzmin, Ivan; Khodak, Anton; Rasche, Eric; Crusoe, Michael; Peterson, Hedi; Ison, Jon; Ménager, Hervé
2017-01-01
Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is the incomplete or outdated description of tools that are often missing important information, including parameters and metadata such as publications or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools - which have been registered in the ELIXIR tools registry (https://bio.tools) - into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins, and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata. PMID:29333231
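The bio.tools metadata that ToolDog consumes can also be retrieved directly. The hedged sketch below queries the public registry with the requests library; the endpoint path and field names are assumptions about the bio.tools API and should be checked against its current documentation.

```python
# Fetch one registry entry from bio.tools and print a few metadata fields.
# The endpoint layout and JSON field names are assumptions, not a verified spec.
import requests

tool_id = "signalp"                                          # example registry identifier
url = f"https://bio.tools/api/tool/{tool_id}/?format=json"   # assumed endpoint layout

resp = requests.get(url, timeout=30)
resp.raise_for_status()
entry = resp.json()

print(entry.get("name"))
print(entry.get("description", "")[:120])
for pub in entry.get("publication", []):                     # field name is an assumption
    print("publication:", pub.get("doi") or pub.get("pmid"))
```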
RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application
2015-01-01
Background The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS applications, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). Moreover, the huge volume of data generated by NGS platforms introduces unprecedented computational and technological challenges to efficiently analyze and store sequence data and results. Methods In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Results Through a user-friendly web interface, the RAP workflow can be suitably customized by the user and is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export analyzed data according to the user's needs. PMID:26046471
GOSSIP, a New VO Compliant Tool for SED Fitting
NASA Astrophysics Data System (ADS)
Franzetti, P.; Scodeggio, M.; Garilli, B.; Fumana, M.; Paioro, L.
2008-08-01
We present GOSSIP (Galaxy Observed-Simulated SED Interactive Program), a new tool developed to perform SED fitting in a simple, user-friendly and efficient way. GOSSIP automatically builds up the observed SED of an object (or a large sample of objects) combining magnitudes in different bands and possibly a spectrum; then it performs a χ² minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions. User-defined models can be used, but GOSSIP is also able to load models produced by the most commonly used population synthesis codes. GOSSIP can be used interactively with other visualization tools using the PLASTIC protocol for communications. Moreover, since it has been developed with large data set applications in mind, it will be extended to operate within the Virtual Observatory framework. GOSSIP is distributed to the astronomical community from the PANDORA group web site (http://cosmos.iasf-milano.inaf.it/pandora/gossip.html).
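The core of the fitting procedure described above is a chi-square comparison of one observed SED against a grid of synthetic models, with a free normalisation per model. The numpy sketch below illustrates that step with invented fluxes; it is not GOSSIP's implementation.

```python
# Chi-square template fit of an observed SED against a grid of model SEDs,
# with an analytic least-squares scaling factor per model. Fluxes are made up.
import numpy as np

obs_flux = np.array([1.2, 2.3, 3.1, 2.8])         # observed fluxes in N bands
obs_err = np.array([0.1, 0.2, 0.2, 0.3])
models = np.array([                                # M synthetic SEDs in the same bands
    [0.5, 1.0, 1.4, 1.3],
    [1.0, 1.1, 1.2, 1.1],
    [0.2, 0.9, 1.8, 1.9],
])

w = 1.0 / obs_err**2                               # inverse-variance weights
scale = (models * obs_flux * w).sum(axis=1) / (models**2 * w).sum(axis=1)
chi2 = (((obs_flux - scale[:, None] * models) / obs_err) ** 2).sum(axis=1)

best = int(np.argmin(chi2))
print("best model:", best, "chi2:", chi2[best], "scale:", scale[best])
```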
Han, Seong Kyu; Lee, Dongyeop; Lee, Heetak; Kim, Donghyo; Son, Heehwa G; Yang, Jae-Seong; Lee, Seung-Jae V; Kim, Sanguk
2016-08-30
Online application for survival analysis (OASIS) has served as a popular and convenient platform for the statistical analysis of various survival data, particularly in the field of aging research. With the recent advances in the fields of aging research that deal with complex survival data, we noticed a need for updates to the current version of OASIS. Here, we report OASIS 2 (http://sbi.postech.ac.kr/oasis2), which provides extended statistical tools for survival data and an enhanced user interface. In particular, OASIS 2 enables the statistical comparison of maximal lifespans, which is potentially useful for determining key factors that limit the lifespan of a population. Furthermore, OASIS 2 provides statistical and graphical tools that compare values in different conditions and times. That feature is useful for comparing age-associated changes in physiological activities, which can be used as indicators of "healthspan." We believe that OASIS 2 will serve as a standard platform for survival analysis with advanced and user-friendly statistical tools for experimental biologists in the field of aging research.
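The kind of group comparison OASIS 2 offers through the browser can also be reproduced locally. The sketch below uses the lifelines package for a Kaplan-Meier fit and a log-rank test; the lifespan data are invented, and OASIS 2's maximal-lifespan statistics are not reproduced here.

```python
# Kaplan-Meier estimate and log-rank comparison of two invented survival groups,
# using the lifelines package (pip install lifelines).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

control = [12, 15, 18, 20, 21, 22, 25, 26, 28, 30]   # lifespans in days
treated = [14, 19, 22, 24, 27, 29, 31, 33, 35, 36]
events_c = [1] * len(control)                          # 1 = death observed (no censoring)
events_t = [1] * len(treated)

kmf = KaplanMeierFitter()
kmf.fit(control, event_observed=events_c, label="control")
print("median survival (control):", kmf.median_survival_time_)

result = logrank_test(control, treated,
                      event_observed_A=events_c, event_observed_B=events_t)
print("log-rank p-value:", result.p_value)
```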
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
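ReSeqTools' use of compressed files as input can be illustrated with a minimal example that streams a gzipped FASTQ and collects basic read statistics without decompressing to disk; the file name is a placeholder.

```python
# Stream a gzipped FASTQ and count reads and bases without writing a
# decompressed copy to disk. "sample_R1.fastq.gz" is a placeholder.
import gzip

reads = 0
bases = 0
with gzip.open("sample_R1.fastq.gz", "rt") as fh:
    for i, line in enumerate(fh):
        if i % 4 == 1:                # the sequence line of each 4-line FASTQ record
            reads += 1
            bases += len(line.strip())

print(f"{reads} reads, {bases} bases, mean length {bases / max(reads, 1):.1f}")
```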
An Update on Design Tools for Optimization of CMC 3D Fiber Architectures
NASA Technical Reports Server (NTRS)
Lang, J.; DiCarlo, J.
2012-01-01
Objective: Describe and update progress for NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: Understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature. Developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures. Developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures. Validating tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture. Applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, and combining them into a user-friendly software program.
SU-F-T-94: Plan2pdf - a Software Tool for Automatic Plan Report for Philips Pinnacle TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C
Purpose: To implement an automatic electronic PDF plan reporting tool for the Philips Pinnacle treatment planning system (TPS). Methods: An electronic treatment plan reporting software tool, named "plan2pdf", was developed by us to enable fully automatic PDF reporting from Pinnacle TPS to external EMR programs such as MOSAIQ. plan2pdf is implemented using Pinnacle scripts, Java and UNIX shell scripts, without any external program needed. plan2pdf supports a full auto-mode and a manual reporting mode. In full auto-mode, with a single mouse click, plan2pdf will generate a detailed Pinnacle plan report in PDF format, which includes a customizable cover page, Pinnacle plan summary, orthogonal views through each plan POI and maximum dose point, DRRs for each beam, serial transverse views captured throughout the dose grid at a user-specified interval, and DVH and scorecard windows. The final PDF report is also automatically bookmarked for each section above for convenient plan review. The final PDF report can either be saved in a user-specified folder on Pinnacle, or it can be automatically exported to an EMR import folder via a user-configured FTP service. In manual capture mode, plan2pdf allows users to capture any Pinnacle plan by full screen, individual window or rectangular ROI drawn on screen. Furthermore, to avoid possible mix-up of patients' plans during auto-mode reporting, a user conflict check feature is included in plan2pdf: it prompts the user to wait if another patient is being exported by plan2pdf by another user. Results: plan2pdf was tested extensively and successfully at our institution, which consists of 5 centers, 15 dosimetrists and 10 physicists, running Pinnacle version 9.10 on Enterprise servers. Conclusion: plan2pdf provides a highly efficient, user-friendly and clinically proven platform for all Philips Pinnacle users to generate a detailed plan report in PDF format for external EMR systems.
Looking to the Future: Communicating with an expanding cryospheric user community
NASA Astrophysics Data System (ADS)
Gergely, K.; Scott, D.; Booker, L.
2009-12-01
The National Snow and Ice Data Center (NSIDC) at the University of Colorado is known for its customer service. Through the User Services Office (USO) NSIDC provides end-to-end data support with timely, friendly, and professional assistance. This service includes expertise in selecting, obtaining, and handling of data, as well as the dissemination of information related to NSIDC’s cryospheric data and information. This dissemination happens across many mediums, such as email, newsletters, and Web-published data documentation. With surveys like the American Customer Service Index, we are learning more and more about what the user’s informational needs are, and beginning to anticipate what the user's needs might be in the future. In this presentation, we will examine the current USO processes for communicating with our user community, and explore how social networking tools, such as Twitter, Blogging, or Facebook may enhance the overall user experience. We will assess a communication approach that combines mainstream and emerging technologies in order to maintain a high standard of customer service with an expanding cryospheric user community.
Cryogenic Propellant Feed System Analytical Tool Development
NASA Technical Reports Server (NTRS)
Lusby, Brian S.; Miranda, Bruno M.; Collins, Jacob A.
2011-01-01
The Propulsion Systems Branch at NASA's Lyndon B. Johnson Space Center (JSC) has developed a parametric analytical tool to address the need to rapidly predict heat leak into propellant distribution lines based on insulation type, installation technique, line supports, penetrations, and instrumentation. The Propellant Feed System Analytical Tool (PFSAT) will also determine the optimum orifice diameter for an optional thermodynamic vent system (TVS) to counteract heat leak into the feed line and ensure temperature constraints at the end of the feed line are met. PFSAT was developed primarily using Fortran 90 code because of its number-crunching power and the capability to directly access real fluid property subroutines in the Reference Fluid Thermodynamic and Transport Properties (REFPROP) Database developed by NIST. A Microsoft Excel front-end user interface was implemented to provide convenient portability of PFSAT among a wide variety of potential users and to offer a user-friendly graphical user interface (GUI) developed in Visual Basic for Applications (VBA). The focus of PFSAT is on-orbit reaction control systems and orbital maneuvering systems, but it may be used to predict heat leak into ground-based transfer lines as well. PFSAT is expected to be used for rapid initial design of cryogenic propellant distribution lines and thermodynamic vent systems. Once validated, PFSAT will support concept trades for a variety of cryogenic fluid transfer systems on spacecraft, including planetary landers, transfer vehicles, and propellant depots, as well as surface-based transfer systems. The details of the development of PFSAT, its user interface, and the program structure will be presented.
Sensor metadata blueprints and computer-aided editing for disciplined SensorML
NASA Astrophysics Data System (ADS)
Tagliolato, Paolo; Oggioni, Alessandro; Fugazza, Cristiano; Pepe, Monica; Carrara, Paola
2016-04-01
The need for continuous, accurate, and comprehensive environmental knowledge has led to an increase in sensor observation systems and networks. The Sensor Web Enablement (SWE) initiative has been promoted by the Open Geospatial Consortium (OGC) to foster interoperability among sensor systems. The provision of metadata according to the prescribed SensorML schema is a key component for achieving this; nevertheless, the availability of correct and exhaustive metadata cannot be taken for granted. On the one hand, it is awkward for users to provide sensor metadata because of the lack of user-oriented, dedicated tools. On the other, the specification of invariant information for a given sensor category or model (e.g., observed properties and units of measurement, manufacturer information, etc.) can be labor- and time-consuming. Moreover, the provision of these details is error-prone and subjective, i.e., it may differ greatly across distinct descriptions for the same system. We provide a user-friendly, template-driven metadata authoring tool composed of a backend web service and an HTML5/JavaScript client. This results in a form-based user interface that conceals the high complexity of the underlying format. This tool also allows for plugging in external data sources providing authoritative definitions for the aforementioned invariant information. Leveraging these functionalities, we compiled a set of SensorML profiles, that is, sensor metadata blueprints allowing end users to focus only on the metadata items that are related to their specific deployment. The natural extension of this scenario is the involvement of end users and sensor manufacturers in the crowd-sourced evolution of this collection of prototypes. We describe the components and workflow of our framework for computer-aided management of sensor metadata.
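The template-driven idea described above can be sketched with the standard library alone: invariant fields for a sensor model live in a blueprint, and only deployment-specific items are filled in per instance. The element names below are simplified placeholders, not a complete or valid SensorML document.

```python
# Template-driven metadata authoring: a blueprint carries fields that are
# invariant for a sensor model, and each deployment fills in the rest.
from string import Template

BLUEPRINT = Template("""<SensorDescription>
  <identifier>$identifier</identifier>
  <manufacturer>$manufacturer</manufacturer>
  <observedProperty>$observed_property</observedProperty>
  <uom>$uom</uom>
  <deployment>
    <position>$lat $lon</position>
    <owner>$owner</owner>
  </deployment>
</SensorDescription>""")

invariant = {   # fixed for a given sensor model (would come from an authoritative source)
    "manufacturer": "ExampleCorp",
    "observed_property": "air_temperature",
    "uom": "Cel",
}
per_deployment = {"identifier": "station-42-T1", "lat": "45.80", "lon": "9.08",
                  "owner": "Example Institute"}

print(BLUEPRINT.substitute(**invariant, **per_deployment))
```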
Castaño-Díez, Daniel; Kudryashev, Mikhail; Arheit, Marcel; Stahlberg, Henning
2012-05-01
Dynamo is a new software package for subtomogram averaging of cryo-electron tomography (cryo-ET) data with three main goals: first, Dynamo allows user-transparent adaptation to a variety of high-performance computing platforms such as GPUs or CPU clusters. Second, Dynamo implements user-friendliness through GUI interfaces and scripting resources. Third, Dynamo offers user-flexibility through a plugin API. Besides the alignment and averaging procedures, Dynamo includes native tools for visualization and analysis of results and data, as well as support for third-party visualization software, such as UCSF Chimera or EMAN2. As a demonstration of these functionalities, we studied bacterial flagellar motors and showed automatically detected classes with absent and present C-rings. Subtomogram averaging is a common task in current cryo-ET pipelines, which requires extensive computational resources and follows a well-established workflow. However, due to the data diversity, many existing packages offer slight variations of the same algorithm to improve results. One of the main purposes behind Dynamo is to provide explicit tools to allow the user the insertion of custom designed procedures - or plugins - to replace or complement the native algorithms in the different steps of the processing pipeline for subtomogram averaging without the burden of handling parallelization. Custom scripts that implement new approaches devised by the user are integrated into the Dynamo data management system, so that they can be controlled by the GUI or the scripting capacities. Dynamo executables do not require licenses for third-party commercial software. Sources, executables and documentation are freely distributed on http://www.dynamo-em.org. Copyright © 2012 Elsevier Inc. All rights reserved.
GREAT: a web portal for Genome Regulatory Architecture Tools.
Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François
2016-07-08
GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in interactive web graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
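A toy version of the layout-regularity detection described above: place co-functional genes at roughly periodic positions along a synthetic chromosome and recover the dominant spacing from a power spectrum. This is a conceptual sketch, not GREAT's algorithm.

```python
# Detect a regular spacing among gene positions via a periodogram.
# Positions are synthetic: genes placed roughly every 100 kb plus noise.
import numpy as np

rng = np.random.default_rng(0)
genome_length = 5_000_000
base = np.arange(50_000, genome_length, 100_000)
positions = base + rng.integers(-5_000, 5_000, size=base.size)

# Binary occupancy signal at 1 kb resolution, then its power spectrum.
bin_size = 1_000
signal = np.zeros(genome_length // bin_size)
signal[positions // bin_size] = 1.0
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(signal.size, d=bin_size)   # cycles per base pair

peak = np.argmax(power[1:]) + 1                    # skip the zero-frequency term
print(f"dominant period: {1.0 / freqs[peak]:.0f} bp")   # expected near 100000 bp
```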
The Gene Set Builder: collation, curation, and distribution of sets of genes
Yusuf, Dimas; Lim, Jonathan S; Wasserman, Wyeth W
2005-01-01
Background In bioinformatics and genomics, there are many applications designed to investigate the common properties for a set of genes. Often, these multi-gene analysis tools attempt to reveal sequential, functional, and expressional ties. However, while tremendous effort has been invested in developing tools that can analyze a set of genes, minimal effort has been invested in developing tools that can help researchers compile, store, and annotate gene sets in the first place. As a result, the process of making or accessing a set often involves tedious and time consuming steps such as finding identifiers for each individual gene. These steps are often repeated extensively to shift from one identifier type to another; or to recreate a published set. In this paper, we present a simple online tool which – with the help of the gene catalogs Ensembl and GeneLynx – can help researchers build and annotate sets of genes quickly and easily. Description The Gene Set Builder is a database-driven, web-based tool designed to help researchers compile, store, export, and share sets of genes. This application supports the 17 eukaryotic genomes found in version 32 of the Ensembl database, which includes species from yeast to human. User-created information such as sets and customized annotations are stored to facilitate easy access. Gene sets stored in the system can be "exported" in a variety of output formats – as lists of identifiers, in tables, or as sequences. In addition, gene sets can be "shared" with specific users to facilitate collaborations or fully released to provide access to published results. The application also features a Perl API (Application Programming Interface) for direct connectivity to custom analysis tools. A downloadable Quick Reference guide and an online tutorial are available to help new users learn its functionalities. Conclusion The Gene Set Builder is an Ensembl-facilitated online tool designed to help researchers compile and manage sets of genes in a user-friendly environment. The application can be accessed via . PMID:16371163
Recommendations for the user-specific enhancement of flood maps
NASA Astrophysics Data System (ADS)
Meyer, V.; Kuhlicke, C.; Luther, J.; Fuchs, S.; Priest, S.; Dorner, W.; Serrhini, K.; Pardoe, J.; McCarthy, S.; Seidel, J.; Palka, G.; Unnerstall, H.; Viavattene, C.; Scheuer, S.
2012-05-01
The European Union Floods Directive requires the establishment of flood maps for high risk areas in all European member states by 2013. However, the current practice of flood mapping in Europe still shows some deficits. Firstly, flood maps are frequently seen as an information tool rather than a communication tool. This means that, for example, local stocks of knowledge are not incorporated. Secondly, the contents of flood maps often do not match the requirements of the end-users. Finally, flood maps are often designed and visualised in a way that cannot be easily understood by residents at risk and/or that is not suitable for the respective needs of public authorities in risk and event management. The RISK MAP project examined how end-user participation in the mapping process may be used to overcome these barriers and enhance the communicative power of flood maps, fundamentally increasing their effectiveness. Based on empirical findings from a participatory approach that incorporated interviews, workshops and eye-tracking tests, conducted in five European case studies, this paper outlines recommendations for user-specific enhancements of flood maps. More specifically, recommendations are given with regard to (1) appropriate stakeholder participation processes, which allow local knowledge and preferences to be incorporated, (2) the improvement of the contents of flood maps by considering user-specific needs and (3) the improvement of the visualisation of risk maps in order to produce user-friendly and understandable risk maps for the user groups concerned. Furthermore, "idealised" maps for different user groups are presented: for strategic planning, emergency management and the public.
Trial, Adoption, Usage and Diffusion of Social Media
2011-12-01
[Excerpt from the source document: survey counts of reasons for trial and adoption (e.g., ease of use, staying in contact with friends, pressure from friends, self-expression, killing boredom) tabulated across user groups such as gaming, online forum, and podcasting users; the original table layout is not recoverable.]
An Intelligent Tool for Activity Data Collection
Jehad Sarkar, A. M.
2011-01-01
Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to set up an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets. PMID:22163832
RGmatch: matching genomic regions to proximal genes in omics data integration.
Furió-Tarí, Pedro; Conesa, Ana; Tarazona, Sonia
2016-11-22
The integrative analysis of multiple genomics data often requires that genome coordinate-based signals be associated with proximal genes. The relative location of a genomic region with respect to the gene (gene area) is important for functional data interpretation; hence algorithms that match regions to genes should be able to deliver insight into this information. In this work we review the tools that are publicly available for making region-to-gene associations. We also present a novel method, RGmatch, a flexible and easy-to-use Python tool that computes associations either at the gene, transcript, or exon level, applying a set of rules to annotate each region-gene association with the region location within the gene. RGmatch can be applied to any organism as long as genome annotation is available. Furthermore, we qualitatively and quantitatively compare RGmatch to other tools. RGmatch simplifies the association of a genomic region with its closest gene. At the same time, it is a powerful tool because the rules used to annotate these associations are very easy to modify according to the researcher's specific interests. Some important differences between RGmatch and other similar tools already in existence are RGmatch's flexibility, its wide range of user options, compatibility with any annotatable organism, and its comprehensive and user-friendly output.
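The region-to-gene association problem RGmatch addresses can be illustrated with a deliberately simplified sketch: for a query region, find the closest gene and assign a coarse location label. The coordinates, the 2 kb promoter window and the labelling rules are illustrative choices, not RGmatch's rule set.

```python
# Match a genomic region to its closest gene and give it a coarse location label.
genes = [  # (name, chrom, start, end, strand); invented annotation
    ("GeneA", "chr1", 10_000, 15_000, "+"),
    ("GeneB", "chr1", 40_000, 55_000, "-"),
]

def annotate(chrom, start, end, promoter_bp=2_000):
    best = None
    for name, gchrom, gstart, gend, strand in genes:
        if gchrom != chrom:
            continue
        tss = gstart if strand == "+" else gend
        overlaps = start <= gend and end >= gstart
        distance = 0 if overlaps else min(abs(start - gend), abs(gstart - end))
        if best is None or distance < best[1]:
            if overlaps:
                label = "gene body"
            elif (strand == "+" and tss - promoter_bp <= end < tss) or \
                 (strand == "-" and tss < start <= tss + promoter_bp):
                label = "promoter"
            else:
                label = "intergenic"
            best = (name, distance, label)
    return best

print(annotate("chr1", 8_500, 9_500))    # ('GeneA', 500, 'promoter')
print(annotate("chr1", 42_000, 43_000))  # ('GeneB', 0, 'gene body')
```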
ASaiM: a Galaxy-based framework to analyze microbiota data.
Batut, Bérénice; Gravouil, Kévin; Defois, Clémence; Hiltemann, Saskia; Brugère, Jean-François; Peyretaillade, Eric; Peyret, Pierre
2018-05-22
New generations of sequencing platforms coupled to numerous bioinformatics tools have led to rapid technological progress in metagenomics and metatranscriptomics to investigate complex microorganism communities. Nevertheless, a combination of different bioinformatic tools remains necessary to draw conclusions from microbiota studies. Modular and user-friendly tools would greatly improve such studies. We therefore developed ASaiM, an open-source Galaxy-based framework dedicated to microbiota data analyses. ASaiM provides an extensive collection of tools to assemble, extract, explore and visualize microbiota information from raw metataxonomic, metagenomic or metatranscriptomic sequences. To guide the analyses, several customizable workflows are included and are supported by tutorials and Galaxy interactive tours, which guide users through the analyses step by step. ASaiM is implemented as a Galaxy Docker flavour. It is scalable to thousands of datasets, but can also be used on a normal PC. The associated source code is available under the Apache 2 license at https://github.com/ASaiM/framework and documentation can be found online (http://asaim.readthedocs.io). Based on the Galaxy framework, ASaiM offers a sophisticated environment with a variety of tools, workflows, documentation and training to scientists working on complex microorganism communities. It makes analysis and exploration of microbiota data easy, quick, transparent, reproducible and shareable.
GI-conf: A configuration tool for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.
2009-04-01
In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities, such as caching, brokering and mediation functionalities. GI-cat applies a distributed approach, being able to distribute queries to the remote service providers of interest in an asynchronous style, and notifies the caller of the status of the queries, implementing an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. However, two other interfaces are under testing: the CIM and the EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. They include international standards like the OGC Web Services, i.e. OGC CSW, WCS, WFS and WMS, as well as interoperability arrangements (i.e. community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. This is a GUI application that employs a visual and very simple approach to configure both the GI-cat publishing and distribution capabilities in a dynamic way. The tool allows setting one or more GI-cat configurations. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e. services/servers) to be accessed and mediated, i.e. federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interfaces and federated resources management: the user can set which interfaces must be published; in addition, he or she can add a new resource, and update or remove an already federated resource. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the current active GI-cat configuration, including the resources that are being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.
CTG Analyzer: A graphical user interface for cardiotocography.
Sbrollini, Agnese; Agostinelli, Angela; Burattini, Luca; Morettini, Micaela; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura
2017-07-01
Cardiotocography (CTG) is the most commonly used test for establishing the good health of the fetus during pregnancy and labor. CTG consists in the recording of fetal heart rate (FHR; bpm) and maternal uterine contractions (UC; mmHg). FHR is characterized by baseline, baseline variability, tachycardia, bradycardia, accelerations and decelerations. The UC signal, instead, is characterized by the presence of contractions and the contraction period. Such parameters are usually evaluated by visual inspection. However, visual analysis of CTG recordings has a well-demonstrated poor reproducibility, due to the complexity of the physiological phenomena affecting fetal heart rhythm and to its dependence on the clinician's experience. Computerized tools in support of clinicians represent a possible solution for improving correctness in CTG interpretation. This paper proposes CTG Analyzer as a graphical tool for automatic and objective analysis of CTG tracings. CTG Analyzer was developed under MATLAB®; it is a very intuitive and user-friendly graphical user interface. The FHR time series and the UC signal are represented one under the other, on a grid with reference lines, as usually done for CTG reports printed on paper. Colors help identification of FHR and UC features. Automatic analysis is based on some unchangeable feature definitions provided by the FIGO guidelines, and other arbitrary settings whose default values can be changed by the user. Eventually, CTG Analyzer provides a report file listing all the quantitative results of the analysis. Thus, CTG Analyzer represents a potentially useful graphical tool for automatic and objective analysis of CTG tracings.
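Two of the FHR features listed above, the baseline and its classification as normal, tachycardic or bradycardic, can be sketched numerically. The thresholds follow the commonly cited 110-160 bpm normal baseline range; the trace is synthetic and the logic is illustrative, not CTG Analyzer's validated implementation.

```python
# Crude baseline estimate and classification of a synthetic FHR trace.
import numpy as np

fs = 4                                             # samples per second (typical CTG rate)
t = np.arange(0, 600, 1 / fs)                      # a 10-minute window
fhr = 140 + 5 * np.sin(2 * np.pi * t / 60) + np.random.default_rng(3).normal(0, 2, t.size)

baseline = float(np.median(fhr))                   # crude baseline over the window
variability = float(fhr.max() - fhr.min())         # crude bandwidth of the trace

if baseline > 160:
    state = "tachycardia"
elif baseline < 110:
    state = "bradycardia"
else:
    state = "normal baseline"

print(f"baseline {baseline:.1f} bpm, variability {variability:.1f} bpm -> {state}")
```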
Balatsoukas, Panos; Williams, Richard; Davies, Colin; Ainsworth, John; Buchan, Iain
2015-11-01
Integrated care pathways (ICPs) define a chronological sequence of steps, most commonly diagnostic or treatment, to be followed in providing care for patients. Care pathways help to ensure quality standards are met and to reduce variation in practice. Although research on the computerisation of ICPs progresses, there is still little knowledge of the requirements for designing user-friendly and usable electronic care pathways, or of how users (normally health care professionals) interact with interfaces that support the design, analysis and visualisation of ICPs. The purpose of the study reported in this paper was to address this gap by evaluating the usability of a novel web-based tool called COCPIT (Collaborative Online Care Pathway Investigation Tool). COCPIT supports the design, analysis and visualisation of ICPs at the population level. In order to address the aim of this study, an evaluation methodology was designed based on heuristic evaluations and a mixed-method usability test. The results showed that modular visualisation and direct manipulation of information related to the design and analysis of ICPs is useful for engaging and stimulating users. However, designers should pay attention to issues related to the visibility of the system status and the match between the system and the real world, especially in relation to the display of statistical information about care pathways and the editing of clinical information within a care pathway. The paper concludes with recommendations for interface design.
Maser: one-stop platform for NGS big data from analysis to visualization
Kinjo, Sonoko; Monma, Norikazu; Misu, Sadahiko; Kitamura, Norikazu; Imoto, Junichi; Yoshitake, Kazutoshi; Gojobori, Takashi; Ikeo, Kazuho
2018-01-01
A major challenge in analyzing the data from high-throughput next-generation sequencing (NGS) is how to handle the huge amounts of data and variety of NGS tools and visualize the resultant outputs. To address these issues, we developed a cloud-based data analysis platform, Maser (Management and Analysis System for Enormous Reads), and an original genome browser, Genome Explorer (GE). Maser enables users to manage up to 2 terabytes of data to conduct analyses with easy graphical user interface operations and offers analysis pipelines in which several individual tools are combined as a single pipeline for very common and standard analyses. GE automatically visualizes genome assembly and mapping results output from Maser pipelines, without requiring additional data upload. With this function, the Maser pipelines can graphically display the results output from all the embedded tools and the mapping results in a web browser. Therefore, Maser provides a more user-friendly analysis platform, especially for beginners, by improving the graphical display and providing selected standard pipelines that work with the built-in genome browser. In addition, all the analyses executed on Maser are recorded in the analysis history, helping users to trace and repeat the analyses. The entire process of analysis and its histories can be shared with collaborators or opened to the public. In conclusion, our system is useful for managing, analyzing, and visualizing NGS data and achieves traceability, reproducibility, and transparency of NGS analysis. Database URL: http://cell-innovation.nig.ac.jp/maser/ PMID:29688385
Gruber, Andreas R; Bernhart, Stephan H; Lorenz, Ronny
2015-01-01
The ViennaRNA package is a widely used collection of programs for thermodynamic RNA secondary structure prediction. Over the years, many additional tools have been developed building on the core programs of the package to also address issues related to noncoding RNA detection, RNA folding kinetics, or efficient sequence design considering RNA-RNA hybridizations. The ViennaRNA web services provide easy and user-friendly web access to these tools. This chapter describes how to use this online platform to perform tasks such as prediction of minimum free energy structures, prediction of RNA-RNA hybrids, or noncoding RNA detection. The ViennaRNA web services can be used free of charge and can be accessed via http://rna.tbi.univie.ac.at.
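The same core computation offered through the web services can be run locally if the ViennaRNA Python bindings (the RNA module shipped with the package) are installed; the sequence below is an arbitrary example.

```python
# Minimum free energy secondary structure prediction with the ViennaRNA
# Python bindings; the sequence is an arbitrary example.
import RNA

sequence = "GCGCUUCGCCGCGCGCC"
structure, mfe = RNA.fold(sequence)        # returns dot-bracket structure and its MFE
print(structure)
print(f"MFE: {mfe:.2f} kcal/mol")
```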
GAPIT: genome association and prediction integrated tool.
Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu
2012-09-15
Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.
PathScore: a web tool for identifying altered pathways in cancer data.
Gaffney, Stephen G; Townsend, Jeffrey P
2016-12-01
PathScore quantifies the level of enrichment of somatic mutations within curated pathways, applying a novel approach that identifies pathways enriched across patients. The application provides several user-friendly, interactive graphic interfaces for data exploration, including tools for comparing pathway effect sizes, significance, gene-set overlap and enrichment differences between projects. Web application available at pathscore.publichealth.yale.edu. Site implemented in Python and MySQL, with all major browsers supported. Source code available at: github.com/sggaffney/pathscore with a GPLv3 license. stephen.gaffney@yale.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
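The basic statistic behind pathway enrichment analyses of this kind is a hypergeometric test on the overlap between mutated genes and a pathway gene set. The sketch below shows that test with invented counts; PathScore's patient-level weighting and effect sizes are not reproduced.

```python
# Hypergeometric enrichment test for the overlap between a mutated-gene list
# and a pathway gene set. Counts are invented for illustration.
from scipy.stats import hypergeom

N = 20000   # genes in the background
K = 150     # genes in the pathway
n = 400     # genes mutated in the cohort
k = 12      # mutated genes that fall in the pathway

# P(X >= k) under the hypergeometric null of random overlap
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"overlap {k}/{K}, p = {p_value:.3g}")
```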
Algodoo: A Tool for Encouraging Creativity in Physics Teaching and Learning
NASA Astrophysics Data System (ADS)
Gregorcic, Bor; Bodin, Madelen
2017-01-01
Algodoo (http://www.algodoo.com) is a digital sandbox for physics 2D simulations. It allows students and teachers to easily create simulated "scenes" and explore physics through a user-friendly and visually attractive interface. In this paper, we present different ways in which students and teachers can use Algodoo to visualize and solve physics problems, investigate phenomena and processes, and engage in out-of-school activities and projects. Algodoo, with its approachable interface, inhabits a middle ground between computer games and "serious" computer modeling. It is suitable as an entry-level modeling tool for students of all ages and can facilitate discussions about the role of computer modeling in physics.
NASA Astrophysics Data System (ADS)
Lipenbergs, E.; Bobrovs, Vj.; Ivanovs, G.
2016-10-01
To ensure that end-users and consumers have access to comprehensive, comparable and user-friendly information regarding Internet access service quality, it is necessary to implement and regularly renew a set of legislative regulatory acts and to provide monitoring of the quality of Internet access services in line with the current European Regulatory Framework. The actual situation regarding quality-of-service monitoring solutions in different European countries depends on national regulatory initiatives and public awareness. The service monitoring solutions are implemented using different measurement methodologies and tools. The paper investigates practical implementations for developing a harmonised approach to quality monitoring in order to obtain objective information on the quality of Internet access services on mobile networks.
Lynx: a database and knowledge extraction engine for integrative medicine.
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.
[Development of a predictive program for microbial growth under various temperature conditions].
Fujikawa, Hiroshi; Yano, Kazuyoshi; Morozumi, Satoshi; Kimura, Bon; Fujii, Tateo
2006-12-01
A predictive program for microbial growth under various temperature conditions was developed with a mathematical model. The model was a new logistic model recently developed by us. The program predicts Escherichia coli growth in broth, Staphylococcus aureus growth and enterotoxin production in milk, and Vibrio parahaemolyticus growth in broth under various temperature patterns. The program, which was built with Microsoft Excel (Visual Basic for Applications), is user-friendly; users can easily input the temperature history of a test food and obtain the prediction instantly on the computer screen. The predicted growth and toxin production can be important indices to determine whether a food is microbiologically safe or not. This program should be a useful tool to confirm the microbial safety of commercial foods.
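A generic version of this kind of prediction can be sketched as a logistic growth model integrated over a piecewise temperature history. The square-root form of the temperature-dependent rate and every parameter value below are illustrative assumptions, not the authors' published model.

```python
# Logistic growth driven by a piecewise temperature history, integrated with
# simple Euler steps. All parameters and the rate model are illustrative.
import math

def growth_rate(temp_c, b=0.03, t_min=5.0):
    """Ratkowsky-style square-root rate model (assumed form)."""
    return 0.0 if temp_c <= t_min else (b * (temp_c - t_min)) ** 2

def predict(temperature_history, n0=3.0, n_max=9.0, dt_h=0.1):
    """temperature_history: list of (duration_h, temp_c) segments; counts in log10 CFU/g."""
    n = n0
    for duration, temp in temperature_history:
        mu = growth_rate(temp)
        for _ in range(int(duration / dt_h)):
            n += dt_h * mu * n * (1.0 - n / n_max)   # logistic form applied to log-counts
        # (using log10 counts directly in a logistic form is a simplification)
    return n

history = [(6, 10.0), (2, 25.0), (12, 8.0)]   # storage at 10 C, abuse at 25 C, back to 8 C
print(f"predicted level: {predict(history):.2f} log10 CFU/g")
```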
Giovanni - The Bridge Between Data and Science
NASA Technical Reports Server (NTRS)
Liu, Zhong; Acker, James
2017-01-01
This article describes new features in the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni), a user-friendly online tool that enables visualization, analysis, and assessment of NASA Earth science data sets without downloading data and software. Since the satellite era began, data collected from Earth-observing satellites have been widely used in research and applications; however, using satellite-based data sets can still be a challenge to many. To facilitate data access and evaluation, as well as scientific exploration and discovery, the NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) has developed Giovanni for a wide range of users around the world. This article describes the latest capabilities of Giovanni with examples, and discusses future plans for this innovative system.
Cyber-Attack Methods, Why They Work on Us, and What to Do
NASA Technical Reports Server (NTRS)
Byrne, DJ
2015-01-01
Basic cyber-attack methods are well documented, and even automated with user-friendly GUIs (Graphical User Interfaces). Entire suites of attack tools are legal, conveniently packaged, and freely downloadable by anyone; more polished versions are sold with vendor support. Our team ran some of these against a selected set of projects within our organization to understand what the attacks do so that we can design and validate defenses against them. Some existing defenses were effective against the attacks, some less so. On average, every machine had twelve easily identifiable vulnerabilities, two of them "critical". Roughly 5% of passwords in use were easily crackable. We identified a clear set of recommendations for each project, and some common patterns that emerged across them all.
Spooner, Amy J; Aitken, Leanne M; Chaboyer, Wendy
2017-11-15
There is widespread use of clinical information systems in intensive care units; however, the evidence to support electronic handover is limited. The study aim was to assess the barriers and facilitators to use of an electronic minimum dataset for nursing team leader shift-to-shift handover in the intensive care unit prior to its implementation. The study was conducted in a 21-bed medical/surgical intensive care unit, specialising in cardiothoracic surgery, at a tertiary referral hospital in Queensland, Australia. An established tool was adapted to the intensive care nursing handover context and a survey of all 63 nursing team leaders was undertaken. The survey comprised statements rated on a 6-point Likert scale ranging from 'strongly disagree' to 'strongly agree', together with open-ended questions. Descriptive statistics were used to summarise results. A total of 39 team leaders responded to the survey (62%). Team leaders used general intensive care work unit guidelines to inform practice; however, they were less familiar with the intensive care handover work unit guideline. Barriers to minimum dataset uptake included a tool that was not user friendly, was time consuming and contained too much information. Facilitators to minimum dataset adoption included a tool that was user friendly, saved time and contained relevant information. Identifying the complexities of a healthcare setting prior to the implementation of an intervention assists researchers and clinicians in integrating new knowledge into healthcare settings. Barriers and facilitators to knowledge use focused on usability, content and efficiency of the electronic minimum dataset and can be used to inform tailored strategies to optimise team leaders' adoption of a minimum dataset for handover. Copyright © 2017 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved.
Biblio-MetReS for user-friendly mining of genes and biological processes in scientific documents.
Usie, Anabel; Karathia, Hiren; Teixidó, Ivan; Alves, Rui; Solsona, Francesc
2014-01-01
One way to initiate the reconstruction of molecular circuits is by using automated text-mining techniques. Developing more efficient methods for such reconstruction is a topic of active research, and those methods are typically included by bioinformaticians in pipelines used to mine and curate large literature datasets. Nevertheless, experimental biologists have a limited number of available user-friendly tools that use text-mining for network reconstruction and require no programming skills to use. One of these tools is Biblio-MetReS. Originally, this tool permitted an on-the-fly analysis of documents contained in a number of web-based literature databases to identify co-occurrence of proteins/genes. This approach ensured results that were always up-to-date with the latest live version of the databases. However, this 'up-to-dateness' came at the cost of large execution times. Here we report an evolution of the application Biblio-MetReS that permits constructing co-occurrence networks for genes, GO processes, Pathways, or any combination of the three types of entities, and representing those networks graphically. We show that the performance of Biblio-MetReS in identifying gene co-occurrence is at least as good as that of other comparable applications (STRING and iHOP). In addition, we also show that the identification of GO processes is on par with that reported in the latest BioCreAtIvE challenge. Finally, we also report the implementation of a new strategy that combines on-the-fly analysis of new documents with preprocessed information from documents that were encountered in previous analyses. This combination simultaneously decreases program run time and maintains 'up-to-dateness' of the results. http://metres.udl.cat/index.php/downloads, metres.cmb@gmail.com.
Network Analytical Tool for Monitoring Global Food Safety Highlights China
Nepusz, Tamás; Petróczi, Andrea; Naughton, Declan P.
2009-01-01
Background The Beijing Declaration on food safety and security was signed by over fifty countries with the aim of developing comprehensive programs for monitoring food safety and security on behalf of their citizens. Currently, comprehensive systems for food safety and security are absent in many countries, and the systems that are in place have been developed on different principles, allowing poor opportunities for integration. Methodology/Principal Findings We have developed a user-friendly analytical tool based on network approaches for instant customized analysis of food alert patterns in the European dataset from the Rapid Alert System for Food and Feed. Data taken from alert logs between January 2003 and August 2008 were processed using network analysis to i) capture complexity, ii) analyze trends, and iii) predict possible effects of interventions by identifying patterns of reporting activities between countries. The detector and transgressor relationships are readily identifiable between countries, which are ranked using i) Google's PageRank algorithm and ii) the HITS algorithm of Kleinberg. The program identifies Iran, China and Turkey as the transgressors with the largest number of alerts. However, when characterized by impact, taking into account the transgressor index and the number of countries involved, China predominates as a transgressor country. Conclusions/Significance This study reports the first development of a network analysis approach to inform countries on their transgressor and detector profiles as a user-friendly aid for the adoption of the Beijing Declaration. The ability to instantly access the country-specific components of the several thousand annual reports will enable each country to identify the major transgressors and detectors within its trading network. Moreover, the tool can be used to monitor trading countries for improved detector/transgressor ratios. PMID:19688088
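For readers unfamiliar with the two ranking schemes mentioned above, the sketch below applies PageRank and HITS to a toy detector-to-transgressor alert graph using networkx. The country codes and alert counts are invented for illustration; real input would come from RASFF alert logs.

```python
# Hedged sketch: ranking countries in a food-alert network with PageRank and HITS.
# Edges run from the detecting country to the transgressing country; weights are
# invented alert counts used only for illustration.
import networkx as nx

alerts = [  # (detecting country, transgressing country, number of alerts)
    ("DE", "CN", 40), ("IT", "CN", 25), ("UK", "IR", 30),
    ("DE", "TR", 18), ("FR", "CN", 12), ("IT", "TR", 9),
]

G = nx.DiGraph()
for detector, transgressor, n in alerts:
    G.add_edge(detector, transgressor, weight=n)

pagerank = nx.pagerank(G, weight="weight")
hubs, authorities = nx.hits(G)   # hubs resemble detectors, authorities resemble transgressors

for country in sorted(G, key=pagerank.get, reverse=True):
    print(f"{country}: PageRank={pagerank[country]:.3f}, authority={authorities[country]:.3f}")
```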
Lai, Jin-Shei; Bregman, Corey; Zelko, Frank; Nowinski, Cindy; Cella, David; Beaumont, Jennifer J; Goldman, Stewart
2017-09-01
Cognitive dysfunction is a major concern for children with brain tumors. A valid, user-friendly screening tool could facilitate prompt referral for comprehensive neuropsychological assessments and therefore early intervention. Applications of the pediatric perceived cognitive function item bank (pedsPCF) such as computerized adaptive testing can potentially serve as such a tool given its brevity and user-friendly nature. This study aimed to evaluate whether pedsPCF was a valid indicator of cerebral compromise using the criterion of structural brain changes indicated by leukoencephalopathy grades. Data from 99 children (mean age = 12.6 years) with brain tumors and their parents were analyzed. Average time since diagnosis was 5.8 years; time since last treatment was 4.3 years. Leukoencephalopathy grade (range 0-4) was based on white matter damage and degree of deep white matter volume loss shown on MRI. Parents of patients completed the pedsPCF. Scores were based on the US general population-based T-score metric (mean = 50; SD = 10). Higher scores reflect better function. Leukoencephalopathy grade distributions were as follows: 36 grade 0, 27 grade 1, 22 grade 2, 13 grade 3, and 1 grade 4. The mean pedsPCF T-score was 48.3 (SD = 8.3; range 30.5-63.7). The pedsPCF scores significantly discriminated patients with different leukoencephalopathy grades, F = 4.14, p = 0.0084. Effect sizes ranged from 0.09 (grade 0 vs. 1) to 1.22 (grade 0 vs. 3/4). This study demonstrates that the pedsPCF is a valid indicator of leukoencephalopathy and provides support for its use as a screening tool for more comprehensive neurocognitive testing.
Using ICESat/GLAS Data Produced in a Self-Describing Format
NASA Astrophysics Data System (ADS)
Fowler, D. K.; Webster, D.; Fowler, C.; McAllister, M.; Haran, T. M.
2015-12-01
For the life of the ICESat mission and beyond, GLAS data have been distributed in binary format by NASA's National Snow and Ice Data Center Distributed Active Archive Center (NSIDC DAAC) at the University of Colorado in Boulder. These data have been extremely useful but, depending on the users, not always the easiest to use. Recently, with releases 33 and 34, GLAS data have been produced in HDF5 format. The NSIDC User Services Office has found that most users find this HDF5 format more user friendly than the original binary format. Among the advantages is the ability to view the data with HDFView or any of a number of freely available open source tools. This format has also enabled the NSIDC DAAC to provide more selective and specific services, including spatial subsetting, file stitching, and the much-sought-after parameter subsetting through Reverb, the next-generation Earth science discovery tool. The final release of GLAS data in 2014, and ongoing user questions not just about the data but about the mission, satellite platform, and instrument, have also spurred NSIDC DAAC efforts to make all of the mission documents and information available to the public in one location: the ICESat/GLAS Long Term Archive, now available online. The data and specifics from this mission are archived and made available to the public at NASA's NSIDC DAAC.
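A minimal sketch of what working with the HDF5 releases looks like in practice, using h5py; the granule name and dataset path below are hypothetical placeholders rather than the official GLAH product layout.

```python
# Hedged sketch of inspecting a GLAS HDF5 granule with h5py.
# The file name and dataset path are hypothetical placeholders.
import h5py

with h5py.File("GLAH12_example.h5", "r") as f:
    f.visit(print)                       # list every group/dataset in the file
    path = "Data_40HZ/Elevation_Surfaces/d_elev"   # assumed path, for illustration only
    if path in f:
        elev = f[path][:]                # read the dataset into a NumPy array
        print(elev.shape, elev.dtype)
```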
Böhm, Ingrid
2011-08-01
The purpose of this article is to present a user-friendly tool for quantifying the iron content of superparamagnetic labeled cells before cell tracking by magnetic resonance imaging (MRI). Iron quantification was evaluated by using Prussian blue staining and spectrophotometry. White blood cells were labeled with superparamagnetic iron oxide (SPIO) nanoparticles. Labeling was confirmed by light microscopy. Subsequently, the cells were embedded in a phantom and scanned on a 3 T magnetic resonance tomography (MRT) whole-body system. Absorbance peaked at a mean wavelength (λpeak) of 720 nm (range 719-722 nm). Linearity was proven for the measuring range 0.5 to 10 μg Fe/mL (r = .9958; p = 2.2 × 10⁻¹²). The limit of detection was 0.01 μg Fe/mL (0.1785 mM), and the limit of quantification was 0.04 μg Fe/mL (0.714 mM). Accuracy was demonstrated by comparison with atomic absorption spectrometry. Precision and robustness were also proven. On T2-weighted images, signal intensity varied according to the iron concentration of SPIO-labeled cells. Absorption spectrophotometry is both a highly sensitive and user-friendly technique that is feasible for quantifying the iron content of magnetically labeled cells. The presented data suggest that spectrophotometry is a promising tool for promoting the implementation of magnetic resonance-based cell tracking in routine clinical applications (from bench to bedside).
NASA Astrophysics Data System (ADS)
Abdul-Aziz, O. I.; Ishtiaq, K. S.
2015-12-01
We present a user-friendly modeling tool in MS Excel to predict the greenhouse gas (GHG) fluxes and estimate potential carbon sequestration from coastal wetlands. The dominant controls of wetland GHG fluxes and their relative mechanistic linkages with various hydro-climatic, sea level, biogeochemical and ecological drivers were first determined by employing a systematic data-analytics method, including Pearson correlation matrix, principal component and factor analyses, and exploratory partial least squares regressions. The mechanistic knowledge and understanding was then utilized to develop parsimonious non-linear (power-law) models to predict wetland carbon dioxide (CO2) and methane (CH4) fluxes based on a sub-set of climatic, hydrologic and environmental drivers such as the photosynthetically active radiation, soil temperature, water depth, and soil salinity. The models were tested with field data for multiple sites and seasons (2012-13) collected from Waquoit Bay, MA. The model estimated the annual wetland carbon storage by up-scaling the instantaneous predicted fluxes to an extended growing season (e.g., May-October) and by accounting for the net annual lateral carbon fluxes between the wetlands and estuary. The Excel spreadsheet model is a simple ecological engineering tool for coastal carbon management and for incorporating coastal wetlands into a potential carbon market under a changing climate, sea level and environment. Specifically, the model can help to determine appropriate GHG offset protocols and monitoring plans for projects that focus on tidal wetland restoration and maintenance.
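To make the modelling step concrete, the sketch below fits a power-law flux model of the general form described above (flux = a · PAR^b1 · Tsoil^b2) by ordinary least squares in log space. The synthetic data, chosen drivers and exponents are illustrative assumptions, not the authors' calibrated coefficients, and Python stands in for the Excel implementation.

```python
# Hedged sketch: fitting a multiplicative power-law flux model in log space.
# Data are synthetic; drivers and exponents are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
par = rng.uniform(100, 2000, 200)       # photosynthetically active radiation
tsoil = rng.uniform(5, 30, 200)         # soil temperature (deg C)
flux = 0.02 * par**0.8 * tsoil**0.5 * rng.lognormal(0, 0.1, 200)  # "observed" CO2 flux

# Log-transforming turns the power law into a linear model:
# ln(flux) = ln(a) + b1*ln(PAR) + b2*ln(Tsoil)
X = np.column_stack([np.ones_like(par), np.log(par), np.log(tsoil)])
coef, *_ = np.linalg.lstsq(X, np.log(flux), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"a={a:.3f}, b1={b1:.2f}, b2={b2:.2f}")
```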
Damienikan, Aliaksandr U.
2016-01-01
The majority of bacterial genome annotations are currently automated and based on a ‘gene by gene’ approach. Regulatory signals and operon structures are rarely taken into account which often results in incomplete and even incorrect gene function assignments. Here we present SigmoID, a cross-platform (OS X, Linux and Windows) open-source application aiming at simplifying the identification of transcription regulatory sites (promoters, transcription factor binding sites and terminators) in bacterial genomes and providing assistance in correcting annotations in accordance with regulatory information. SigmoID combines a user-friendly graphical interface to well known command line tools with a genome browser for visualising regulatory elements in genomic context. Integrated access to online databases with regulatory information (RegPrecise and RegulonDB) and web-based search engines speeds up genome analysis and simplifies correction of genome annotation. We demonstrate some features of SigmoID by constructing a series of regulatory protein binding site profiles for two groups of bacteria: Soft Rot Enterobacteriaceae (Pectobacterium and Dickeya spp.) and Pseudomonas spp. Furthermore, we inferred over 900 transcription factor binding sites and alternative sigma factor promoters in the annotated genome of Pectobacterium atrosepticum. These regulatory signals control putative transcription units covering about 40% of the P. atrosepticum chromosome. Reviewing the annotation in cases where it didn’t fit with regulatory information allowed us to correct product and gene names for over 300 loci. PMID:27257541
NASA Astrophysics Data System (ADS)
Liu, Z.; Acker, J. G.; Kempler, S. J.
2016-12-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) is one of twelve NASA Science Mission Directorate (SMD) Data Centers that provide Earth science data, information, and services to research scientists, applications scientists, applications users, and students around the world. The GES DISC is the home (archive) of NASA Precipitation and Hydrology, as well as Atmospheric Composition and Dynamics remote sensing data and information. To facilitate Earth science data access, the GES DISC has been developing user-friendly data services for users at different levels. Among them, the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI, http://giovanni.gsfc.nasa.gov/) allows users to explore satellite-based data using sophisticated analyses and visualizations without downloading data and software, which is particularly suitable for novices to use NASA datasets in STEM activities. In this presentation, we will briefly introduce GIOVANNI and recommend datasets for STEM. Examples of using these datasets in STEM activities will be presented as well.
NASA Technical Reports Server (NTRS)
Liu, Z.; Acker, J.; Kempler, S.
2016-01-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) is one of twelve NASA Science Mission Directorate (SMD) Data Centers that provide Earth science data, information, and services to users around the world, including research and application scientists, students, and citizen scientists. The GES DISC is the home (archive) of remote sensing datasets for NASA Precipitation and Hydrology, Atmospheric Composition and Dynamics, and more. To facilitate Earth science data access, the GES DISC has been developing user-friendly data services for users at different levels in different countries. Among them, the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni, http://giovanni.gsfc.nasa.gov) allows users to explore satellite-based datasets using sophisticated analyses and visualization without downloading data and software, which is particularly suitable for novices (such as students) to use NASA datasets in STEM (science, technology, engineering and mathematics) activities. In this presentation, we will briefly introduce Giovanni along with examples for STEM activities.
An electronic registry for physiotherapists in Belgium.
Buyl, Ronald; Nyssen, Marc
2008-01-01
This paper describes the results of the KINELECTRICS project. Since more and more clinical documents are stored and transmitted electronically, the aim of this project was to design an electronic version of the registry that contains all acts of physiotherapists. The solution we present here not only meets all legal constraints but also makes it possible to verify the traceability and inalterability of the generated documents by means of SHA-256 codes. The proposed structure, based on XML technology, can also form a basis for the development of tools that can be used by the controlling authorities. By means of a certification procedure for software systems, we succeeded in developing a user-friendly system that enables end-users of a quality-labelled software package to automatically produce all the legally required documents concerning the registry. Moreover, we hope that this development will be an incentive for non-users to start working electronically.
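The integrity mechanism described above can be illustrated in a few lines of Python: hash the XML registry document with SHA-256 when it is produced, then compare the digest later to detect any alteration. The file name is a placeholder.

```python
# Illustrative sketch of detecting alteration of an XML registry document
# via a SHA-256 digest; the file name is a placeholder.
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

original_digest = sha256_of("registry_2008-01.xml")
# ... the document is stored or transmitted ...
assert sha256_of("registry_2008-01.xml") == original_digest, "registry was altered"
```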
Multiple-body simulation with emphasis on integrated Space Shuttle vehicle
NASA Technical Reports Server (NTRS)
Chiu, Ing-Tsau
1993-01-01
The program for obtaining intergrid communications, Pegasus, was enhanced to make better use of computing resources. Periodic block tridiagonal and pentadiagonal routines in OVERFLOW were modified to use a better algorithm to speed up the calculation for grids with periodic boundary conditions. Several programs were added to the collar grid tools, and a user-friendly shell script was developed to help users generate collar grids. The user interface for HYPGEN was modified to cope with changes in HYPGEN. ET/SRB attach hardware grids were added to the computational model for the Space Shuttle and are currently incorporated into the refined shuttle model jointly developed at Johnson Space Center and Ames Research Center. Flow simulation for the integrated Space Shuttle vehicle at flight Reynolds number was carried out and compared with flight data as well as with the earlier simulation at wind tunnel Reynolds number.
IDEOM: an Excel interface for analysis of LC-MS-based metabolomics data.
Creek, Darren J; Jankevics, Andris; Burgess, Karl E V; Breitling, Rainer; Barrett, Michael P
2012-04-01
The application of emerging metabolomics technologies to the comprehensive investigation of cellular biochemistry has been limited by bottlenecks in data processing, particularly noise filtering and metabolite identification. IDEOM provides a user-friendly data processing application that automates filtering and identification of metabolite peaks, paying particular attention to common sources of noise and false identifications generated by liquid chromatography-mass spectrometry (LC-MS) platforms. Building on advanced processing tools such as mzMatch and XCMS, it allows users to run a comprehensive pipeline for data analysis and visualization from a graphical user interface within Microsoft Excel, a familiar program for most biological scientists. IDEOM is provided free of charge at http://mzmatch.sourceforge.net/ideom.html, as a macro-enabled spreadsheet (.xlsb). Implementation requires Microsoft Excel (2007 or later). R is also required for full functionality. michael.barrett@glasgow.ac.uk Supplementary data are available at Bioinformatics online.
NCBI GEO: mining tens of millions of expression profiles--database and tools update.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Edgar, Ron
2007-01-01
The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely disseminates microarray and other forms of high-throughput data generated by the scientific community. The database has a minimum information about a microarray experiment (MIAME)-compliant infrastructure that captures fully annotated raw and processed data. Several data deposit options and formats are supported, including web forms, spreadsheets, XML and Simple Omnibus Format in Text (SOFT). In addition to data storage, a collection of user-friendly web-based interfaces and applications are available to help users effectively explore, visualize and download the thousands of experiments and tens of millions of gene expression patterns stored in GEO. This paper provides a summary of the GEO database structure and user facilities, and describes recent enhancements to database design, performance, submission format options, data query and retrieval utilities. GEO is accessible at http://www.ncbi.nlm.nih.gov/geo/
HYDRA Hyperspectral Data Research Application Tom Rink and Tom Whittaker
NASA Astrophysics Data System (ADS)
Rink, T.; Whittaker, T.
2005-12-01
HYDRA is a freely available, easy-to-install tool for visualization and analysis of large local or remote hyper/multi-spectral datasets. HYDRA is implemented on top of the open source VisAD Java library via Jython, the Java implementation of the user-friendly Python programming language. VisAD provides data integration through its generalized data model, user-display interaction and display rendering. Jython has an easy-to-read, concise, scripting-like syntax that eases software development. HYDRA allows sharing of large datasets through its support of the OpenDAP and OpenADDE server-client protocols. Users can explore and interrogate data, and subset it in physical and/or spectral space to isolate key areas of interest for further analysis without having to download an entire dataset. It also has an extensible data input architecture for recognizing new instruments and understanding different local file formats; currently NetCDF and HDF4 are supported.
Lim, Huat Chye; Curlin, Marcel E; Mittler, John E
2011-11-01
Computer simulation models can be useful in exploring the efficacy of HIV therapy regimens in preventing the evolution of drug-resistant viruses. Current modeling programs, however, were designed by researchers with expertise in computational biology, limiting their accessibility to those who might lack such a background. We have developed a user-friendly graphical program, HIV Therapy Simulator (HIVSIM), that is accessible to non-technical users. The program allows clinicians and researchers to explore the effectiveness of various therapeutic strategies, such as structured treatment interruptions, booster therapies and induction-maintenance therapies. We anticipate that HIVSIM will be useful for evaluating novel drug-based treatment concepts in clinical research, and as an educational tool. HIV Therapy Simulator is freely available for Mac OS and Windows at http://sites.google.com/site/hivsimulator/. jmittler@uw.edu. Supplementary data are available at Bioinformatics online.
NETL CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanguinito, Sean M.; Goodman, Angela; Levine, Jonathan
This user's manual guides the use of the National Energy Technology Laboratory's (NETL) CO2 Storage prospeCtive Resource Estimation Excel aNalysis (CO2-SCREEN) tool, which was developed to aid users screening saline formations for prospective CO2 storage resources. CO2-SCREEN applies U.S. Department of Energy (DOE) methods and equations for estimating prospective CO2 storage resources for saline formations. CO2-SCREEN was developed to be substantive and user-friendly. It also provides a consistent method for calculating prospective CO2 storage resources that allows for consistent comparison of results between different research efforts, such as the Regional Carbon Sequestration Partnerships (RCSP). CO2-SCREEN consists of an Excel spreadsheet containing geologic inputs and outputs, linked to a GoldSim Player model that calculates prospective CO2 storage resources via Monte Carlo simulation.
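The sketch below illustrates the kind of Monte Carlo estimate such a tool automates: a DOE-style volumetric equation (mass = area × thickness × porosity × CO2 density × efficiency factor) sampled over uncertain inputs. The distributions and parameter values are purely illustrative, and Python stands in for the Excel/GoldSim implementation.

```python
# Hedged sketch: Monte Carlo sampling of a volumetric CO2 storage estimate.
# Distribution choices and values are illustrative, not DOE-calibrated inputs.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
area_m2  = rng.triangular(4e8, 5e8, 6e8, n)     # formation area (m^2)
thick_m  = rng.triangular(40, 50, 70, n)        # net thickness (m)
porosity = rng.triangular(0.10, 0.15, 0.22, n)  # total porosity (fraction)
rho_co2  = rng.normal(650, 30, n)               # CO2 density at reservoir conditions (kg/m^3)
eff      = rng.triangular(0.01, 0.03, 0.06, n)  # storage efficiency factor

mass_mt = area_m2 * thick_m * porosity * rho_co2 * eff / 1e9  # kg -> million tonnes
p10, p50, p90 = np.percentile(mass_mt, [10, 50, 90])
print(f"prospective storage: P10={p10:.0f} Mt, P50={p50:.0f} Mt, P90={p90:.0f} Mt")
```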
Boscardin, Christy; Fergus, Kirkpatrick B; Hellevig, Bonnie; Hauer, Karen E
2017-11-09
Easily accessible and interpretable performance data constitute critical feedback for learners that facilitate informed self-assessment and learning planning. To provide this feedback, there has been a proliferation of educational dashboards in recent years. An educational (learner) dashboard systematically delivers timely and continuous feedback on performance and can provide easily visualized and interpreted performance data. In this paper, we provide practical tips for developing a functional, user-friendly individual learner performance dashboard and literature review of dashboard development, assessment theory, and users' perspectives. Considering key design principles and maximizing current technological advances in data visualization techniques can increase dashboard utility and enhance the user experience. By bridging current technology with assessment strategies that support learning, educators can continue to improve the field of learning analytics and design of information management tools such as dashboards in support of improved learning outcomes.
PYCHEM: a multivariate analysis package for python.
Jarvis, Roger M; Broadhurst, David; Johnson, Helen; O'Boyle, Noel M; Goodacre, Royston
2006-10-15
We have implemented a multivariate statistical analysis toolbox, with an optional standalone graphical user interface (GUI), using the Python scripting language. This is a free and open source project that addresses the need for a multivariate analysis toolbox in Python. Although the functionality provided does not cover the full range of multivariate tools that are available, it has a broad complement of methods that are widely used in the biological sciences. In contrast to tools like MATLAB, PyChem 2.0.0 is easily accessible and free, allows for rapid extension using a range of Python modules and is part of the growing amount of complementary and interoperable scientific software in Python based upon SciPy. One of the attractions of PyChem is that it is an open source project and so there is an opportunity, through collaboration, to increase the scope of the software and to continually evolve a user-friendly platform that has applicability across a wide range of analytical and post-genomic disciplines. http://sourceforge.net/projects/pychem
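As a reminder of what sits behind such a GUI, here is a minimal principal component analysis in plain NumPy; this is not PyChem's own API, just an illustration of the kind of multivariate method the toolbox wraps, applied to synthetic spectra.

```python
# Not PyChem's API -- a minimal PCA via SVD on synthetic spectral data,
# illustrating the kind of multivariate analysis such a toolbox provides.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))            # e.g. 50 spectra x 200 wavenumbers (synthetic)
Xc = X - X.mean(axis=0)                   # mean-centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                            # sample scores on the principal components
explained = s ** 2 / np.sum(s ** 2)       # proportion of variance per component
print(f"first two PCs explain {100 * explained[:2].sum():.1f}% of the variance")
```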
The GNAT: A new tool for processing NMR data.
Castañar, Laura; Poggetto, Guilherme Dal; Colbourne, Adam A; Morris, Gareth A; Nilsson, Mathias
2018-06-01
The GNAT (General NMR Analysis Toolbox) is a free and open-source software package for processing, visualising, and analysing NMR data. It supersedes the popular DOSY Toolbox, which has a narrower focus on diffusion NMR. Data import of most common formats from the major NMR platforms is supported, as well as a GNAT generic format. Key basic processing of NMR data (e.g., Fourier transformation, baseline correction, and phasing) is catered for within the program, as well as more advanced techniques (e.g., reference deconvolution and pure shift FID reconstruction). Analysis tools include DOSY and SCORE for diffusion data, ROSY T1/T2 estimation for relaxation data, and PARAFAC for multilinear analysis. The GNAT is written for the MATLAB® language and comes with a user-friendly graphical user interface. The standard version is intended to run with a MATLAB installation, but completely free-standing compiled versions for Windows, Mac, and Linux are also freely available. © 2018 The Authors Magnetic Resonance in Chemistry Published by John Wiley & Sons Ltd.
Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics
NASA Astrophysics Data System (ADS)
Hošek, Petr; Spiwok, Vojtěch
2016-01-01
Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View, a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for measuring free energies and free energy differences, and for data/image export.
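Conceptually, the client-side computation is a sum of the deposited Gaussian hills, with the free energy obtained (up to well-tempered rescaling) as the negative of that bias. The sketch below assumes a simplified HILLS column layout (time, CV value, sigma, height) and is a Python illustration, not Metadyn View's actual JavaScript code.

```python
# Hedged sketch: reconstruct a 1D metadynamics bias from deposited Gaussian hills.
# Assumes simplified HILLS columns (time, cv, sigma, height); real PLUMED files
# also carry a bias factor, and their '#!' header lines are skipped as comments.
import numpy as np

hills = np.loadtxt("HILLS")
centers, sigmas, heights = hills[:, 1], hills[:, 2], hills[:, 3]

grid = np.linspace(centers.min(), centers.max(), 500)
bias = np.zeros_like(grid)
for c, s, w in zip(centers, sigmas, heights):
    bias += w * np.exp(-((grid - c) ** 2) / (2.0 * s ** 2))

free_energy = -(bias - bias.max())        # negative bias, shifted so the minimum is 0
print(f"free-energy minimum near CV = {grid[np.argmin(free_energy)]:.3f}")
```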
Neale, Joanne; Brown, Caral
2016-09-01
Homeless drug and alcohol users are one of the most marginalised groups in society. They frequently have complex needs and limited social support. In this paper, we explore the role of friendship in the lives of homeless drug and alcohol users living in hostels, using the concepts of 'social capital' and 'recovery capital' to frame the analyses. The study was undertaken in three hostels, each in a different English city, during 2013-2014. Audio recorded semi-structured interviews were conducted with 30 residents (9 females; 21 males) who self-reported drink and/or drug problems; follow-up interviews were completed 4-6 weeks later with 22 participants (6 females; 16 males). Data were transcribed verbatim, coded using the software package MAXQDA, and analysed using Framework. Only 21 participants reported current friends at interview 1, and friendship networks were small and changeable. Despite this, participants desired friendships that were culturally normative. Eight categories of friend emerged from the data: family-like friends; using friends; homeless friends; childhood friends; online-only friends; drug treatment friends; work friends; and mutual interest friends. Routine and regular contact was highly valued, with family-like friends appearing to offer the most constant practical and emotional support. The use of information and communication technologies (ICTs) was central to many participants' friendships, keeping them connected to social support and recovery capital outside homelessness and substance-using worlds. We conclude that those working with homeless drug and alcohol users - and potentially other marginalised populations - could beneficially encourage their clients to identify and build upon their most positive and reliable relationships. Additionally, they might explore ways of promoting the use of ICTs to combat loneliness and isolation. Texting, emailing, online mutual aid meetings, chatrooms, Internet penpals, skyping and other social media all offer potentially valuable opportunities for building friendships that can bolster otherwise limited social and recovery capital. © 2015 The Authors. Health and Social Care in the Community Published by John Wiley & Sons Ltd.
Identification of MS-Cleavable and Non-Cleavable Chemically Crosslinked Peptides with MetaMorpheus.
Lu, Lei; Millikin, Robert J; Solntsev, Stefan K; Rolfs, Zach; Scalf, Mark; Shortreed, Michael R; Smith, Lloyd M
2018-05-25
Protein chemical crosslinking combined with mass spectrometry has become an important technique for the analysis of protein structure and protein-protein interactions. A variety of crosslinkers are well developed, but reliable, rapid, and user-friendly tools for large-scale analysis of crosslinked proteins are still needed. Here we report MetaMorpheusXL, a new search module within the MetaMorpheus software suite that identifies both MS-cleavable and non-cleavable crosslinked peptides in MS data. MetaMorpheusXL identifies MS-cleavable crosslinked peptides with an ion-indexing algorithm, which enables an efficient large database search. The identification does not require the presence of signature fragment ions, an advantage compared to similar programs such as XlinkX. One complication associated with the need for signature ions from cleavable crosslinkers such as DSSO (disuccinimidyl sulfoxide) is the requirement for multiple fragmentation types and energy combinations, which is not necessary for MetaMorpheusXL. The ability to perform proteome-wide analysis is another advantage of MetaMorpheusXL compared to programs such as MeroX and DXMSMS. MetaMorpheusXL is also faster than other currently available MS-cleavable crosslink search software programs. It is embedded in MetaMorpheus, an open-source and freely available software suite that provides a reliable, fast, user-friendly graphical user interface that is readily accessible to researchers.
The Value of Data and Metadata Standardization for Interoperability in Giovanni
NASA Astrophysics Data System (ADS)
Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.
2017-12-01
Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
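For readers less familiar with the CF Conventions mentioned above, the sketch below writes a small netCDF file carrying both CF-style attributes and the kind of extra machine-friendly metadata the text argues for. The attribute names beyond the CF standard (product_short_name, product_version) are illustrative, not Giovanni's actual internal schema.

```python
# Hedged sketch: a small netCDF file with CF-style attributes plus extra
# machine-friendly metadata; non-CF attribute names here are illustrative.
from netCDF4 import Dataset
import numpy as np

with Dataset("parameter.nc", "w") as nc:
    nc.createDimension("lat", 180)
    nc.createDimension("lon", 360)
    lat = nc.createVariable("lat", "f4", ("lat",))
    lat.units = "degrees_north"                 # CF convention for the latitude axis
    lon = nc.createVariable("lon", "f4", ("lon",))
    lon.units = "degrees_east"
    var = nc.createVariable("precipitation", "f4", ("lat", "lon"))
    var.units = "mm/hr"                         # CF unit attribute
    var.long_name = "Surface precipitation rate"
    var.product_short_name = "EXAMPLE_PRODUCT"  # descriptive extras (illustrative names)
    var.product_version = "7"
    lat[:] = np.arange(-89.5, 90.0, 1.0)
    lon[:] = np.arange(-179.5, 180.0, 1.0)
    var[:] = np.zeros((180, 360), dtype="f4")
```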
NASA Astrophysics Data System (ADS)
Russell, R. M.; Johnson, R. M.; Gardiner, E. S.; Bergman, J. J.; Genyuk, J.; Henderson, S.
2004-12-01
Interactive visualizations can be powerful tools for helping students, teachers, and the general public comprehend significant features in rich datasets and complex systems. Successful use of such visualizations requires viewers to have, or to acquire, adequate expertise in use of the relevant visualization tools. In many cases, the learning curve associated with competent use of such tools is too steep for casual users, such as members of the lay public browsing science outreach web sites or K-12 students and teachers trying to integrate such tools into their learning about geosciences. "Windows to the Universe" (http://www.windows.ucar.edu) is a large (roughly 6,000 web pages), well-established (first posted online in 1995), and popular (over 5 million visitor sessions and 40 million pages viewed per year) science education web site that covers a very broad range of Earth science and space science topics. The primary audience of the site consists of K-12 students and teachers and the general public. We have developed several interactive visualizations for use on the site in conjunction with text and still image reference materials. One major emphasis in the design of these interactives has been to ensure that casual users can quickly learn how to use the interactive features without becoming frustrated and departing before they were able to appreciate the visualizations displayed. We will demonstrate several of these "user-friendly" interactive visualizations and comment on the design philosophy we have employed in developing them.
BioCluster: tool for identification and clustering of Enterobacteriaceae based on biochemical data.
Abdullah, Ahmed; Sabbir Alam, S M; Sultana, Munawar; Hossain, M Anwar
2015-06-01
Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in the results of 1-47 biochemical tests within the Enterobacteriaceae family. This tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. This tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/. Copyright © 2015 The Authors. Production and hosting by Elsevier Ltd. All rights reserved.
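A toy version of the traditional hierarchical-clustering step is sketched below using SciPy on binary biochemical test profiles (1 = positive, 0 = negative); the isolates and test results are invented, and this is not BioCluster's modified IHC algorithm.

```python
# Illustrative sketch: hierarchical clustering of binary biochemical profiles.
# Isolates and test results are invented; this is not the IHC algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

profiles = np.array([
    [1, 0, 1, 1, 0, 1],   # isolate A
    [1, 0, 1, 1, 0, 0],   # isolate B
    [0, 1, 0, 0, 1, 1],   # isolate C
    [0, 1, 0, 1, 1, 1],   # isolate D
])

dist = pdist(profiles, metric="jaccard")          # dissimilarity between test patterns
tree = linkage(dist, method="average")            # UPGMA dendrogram
print(fcluster(tree, t=2, criterion="maxclust"))  # assign the isolates to two clusters
```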
Novel 3D Approach to Flare Modeling via Interactive IDL Widget Tools
NASA Astrophysics Data System (ADS)
Nita, G. M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A.; Kontar, E. P.
2011-12-01
Currently, and soon-to-be, available sophisticated 3D models of particle acceleration and transport in solar flares require a new level of user-friendly visualization and analysis tools allowing quick and easy adjustment of the model parameters and computation of realistic radiation patterns (images, spectra, polarization, etc). We report the current state of the art of these tools in development, already proved to be highly efficient for the direct flare modeling. We present an interactive IDL widget application intended to provide a flexible tool that allows the user to generate spatially resolved radio and X-ray spectra. The object-based architecture of this application provides full interaction with imported 3D magnetic field models (e.g., from an extrapolation) that may be embedded in a global coronal model. Various tools provided allow users to explore the magnetic connectivity of the model by generating magnetic field lines originating in user-specified volume positions. Such lines may serve as reference lines for creating magnetic flux tubes, which are further populated with user-defined analytical thermal/non thermal particle distribution models. By default, the application integrates IDL callable DLL and Shared libraries containing fast GS emission codes developed in FORTRAN and C++ and soft and hard X-ray codes developed in IDL. However, the interactive interface allows interchanging these default libraries with any user-defined IDL or external callable codes designed to solve the radiation transfer equation in the same or other wavelength ranges of interest. To illustrate the tool capacity and generality, we present a step-by-step real-time computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data obtained by NORH and RHESSI instruments. We discuss further anticipated developments of the tools needed to accommodate temporal evolution of the magnetic field structure and/or fast electron population implied by the electron acceleration and transport. This work was supported in part by NSF grants AGS-0961867, AST-0908344, and NASA grants NNX10AF27G and NNX11AB49G to New Jersey Institute of Technology, by a UK STFC rolling grant, STFC/PPARC Advanced Fellowship, and the Leverhulme Trust, UK. Financial support by the European Commission through the SOLAIRE and HESPE Networks is gratefully acknowledged.
Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping
2018-05-22
Compared with the numerous software tools developed for identification and quantification of -omics data, there remains a lack of suitable tools for both downstream analysis and data visualization. To help researchers better understand the biological meanings in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various kinds of analysis methods such as normalization, missing value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly-used data visualization methods including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.
RE-PLAN: An Extensible Software Architecture to Facilitate Disaster Response Planning
O’Neill, Martin; Mikler, Armin R.; Indrakanti, Saratchandra; Tiwari, Chetan; Jimenez, Tamara
2014-01-01
Computational tools are needed to make data-driven disaster mitigation planning accessible to planners and policymakers without the need for programming or GIS expertise. To address this problem, we have created modules to facilitate quantitative analyses pertinent to a variety of different disaster scenarios. These modules, which comprise the REsponse PLan ANalyzer (RE-PLAN) framework, may be used to create tools for specific disaster scenarios that allow planners to harness large amounts of disparate data and execute computational models through a point-and-click interface. Bio-E, a user-friendly tool built using this framework, was designed to develop and analyze the feasibility of ad hoc clinics for treating populations following a biological emergency event. In this article, the design and implementation of the RE-PLAN framework are described, and the functionality of the modules used in the Bio-E biological emergency mitigation tool are demonstrated. PMID:25419503
NASA Astrophysics Data System (ADS)
Wrobel, P. M.; Bogovac, M.; Sghaier, H.; Leani, J. J.; Migliori, A.; Padilla-Alvarez, R.; Czyzycki, M.; Osan, J.; Kaiser, R. B.; Karydas, A. G.
2016-10-01
A new synchrotron beamline end-station for multipurpose X-ray spectrometry applications has been recently commissioned and is currently accessible to end-users at the XRF beamline of Elettra Sincrotrone Trieste. The end-station consists of an ultra-high-vacuum chamber whose main instrument is a seven-axis motorized manipulator for sample and detector positioning, together with different kinds of X-ray detectors and optical cameras. The beamline end-station allows measurements with different X-ray spectrometry techniques such as Microscopic X-Ray Fluorescence analysis (μXRF), Total Reflection X-Ray Fluorescence analysis (TXRF), Grazing Incidence/Exit X-Ray Fluorescence analysis (GI-XRF/GE-XRF), X-Ray Reflectometry (XRR), and X-Ray Absorption Spectroscopy (XAS). A LabVIEW Graphical User Interface (GUI), bound to the Tango control system and consisting of many custom-made software modules, is utilized as a user-friendly tool for controlling all of the end-station hardware components. The present work describes this advanced Tango and LabVIEW software platform, which exploits in an optimal, synergistic manner the merits and functionality of these well-established programming and equipment-control tools.
NASA Astrophysics Data System (ADS)
Brown, H.; Ritchey, N. A.
2017-12-01
NOAA's National Centers for Environmental Information (NCEI) was once three separate data centers (NGDC, NODC, and NCDC), which merged into NCEI in 2015. NCEI has refined long-term preservation and stewardship practices throughout the life cycle of various types of data and can make the complicated world of data preservation easier to navigate. Using tools at NCEI, data providers can request that data be archived, submit data for archival, and create complete International Organization for Standardization (ISO) metadata records with ease. To ensure traceability, Digital Object Identifiers (DOIs) are minted for published data sets. The services offered at NCEI follow standards and NOAA directives, such as the Open Archival Information System (OAIS) Reference Model (ISO 14721), to ensure consistent long-term preservation of the Nation's resource of global environmental data for a broad spectrum of users. Implementing these standards keeps the data accessible, independently understandable, and reproducible in an easy-to-understand format for all types of users. Insights from more than 100 combined years of domain, data management, and preservation expertise, along with the tools supporting these functions, will be shared.
A user-friendly tool to evaluate the effectiveness of no-take marine reserves.
Villaseñor-Derbez, Juan Carlos; Faro, Caio; Wright, Melaina; Martínez, Jael; Fitzgerald, Sean; Fulton, Stuart; Mancha-Cisneros, Maria Del Mar; McDonald, Gavin; Micheli, Fiorenza; Suárez, Alvin; Torre, Jorge; Costello, Christopher
2018-01-01
Marine reserves are implemented to achieve a variety of objectives, but are seldom rigorously evaluated to determine whether those objectives are met. In the rare cases when evaluations do take place, they typically focus on ecological indicators and ignore other relevant objectives such as socioeconomics and governance. And regardless of the objectives, the diversity of locations, monitoring protocols, and analysis approaches hinder the ability to compare results across case studies. Moreover, analysis and evaluation of reserves is generally conducted by outside researchers, not the reserve managers or users, plausibly thereby hindering effective local management and rapid response to change. We present a framework and tool, called "MAREA", to overcome these challenges. Its purpose is to evaluate the extent to which any given reserve has achieved its stated objectives. MAREA provides specific guidance on data collection and formatting, and then conducts rigorous causal inference analysis based on data input by the user, providing real-time outputs about the effectiveness of the reserve. MAREA's ease of use, standardization of state-of-the-art inference methods, and ability to analyze marine reserve effectiveness across ecological, socioeconomic, and governance objectives could dramatically further our understanding and support of effective marine reserve management.
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs. PMID:22235246
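The underlying mass-balance idea can be shown with the simplest possible case: one isotope and two sources, where the consumer's signature is a linear mixture of the source signatures. The values below are illustrative, and this deterministic toy is not IsotopeR's full Bayesian model.

```python
# Minimal two-source, one-isotope mixing sketch (not IsotopeR):
# delta_mix = f * delta_1 + (1 - f) * delta_2, solved for the diet fraction f.
delta_consumer = -21.0   # d13C of the consumer (illustrative, discrimination already applied)
delta_source1 = -26.0    # e.g. terrestrial vegetation (illustrative)
delta_source2 = -18.0    # e.g. human-derived food (illustrative)

f1 = (delta_consumer - delta_source2) / (delta_source1 - delta_source2)
print(f"estimated proportion of source 1 in the diet: {f1:.2f}")
```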
Wang, Lin; Liu, Simin; Niu, Tianhua; Xu, Xin
2005-03-18
Single nucleotide polymorphisms (SNPs) provide an important tool in pinpointing susceptibility genes for complex diseases and in unveiling human molecular evolution. Selection and retrieval of an optimal SNP set from publicly available databases have emerged as the foremost bottlenecks in designing large-scale linkage disequilibrium studies, particularly in case-control settings. We describe the architectural structure and implementations of a novel software program, SNPHunter, which allows for both ad hoc-mode and batch-mode SNP search, automatic SNP filtering, and retrieval of SNP data, including physical position, function class, flanking sequences at user-defined lengths, and heterozygosity from NCBI dbSNP. The SNP data extracted from dbSNP via SNPHunter can be exported and saved in plain text format for further down-stream analyses. As an illustration, we applied SNPHunter for selecting SNPs for 10 major candidate genes for type 2 diabetes, including CAPN10, FABP4, IL6, NOS3, PPARG, TNF, UCP2, CRP, ESR1, and AR. SNPHunter constitutes an efficient and user-friendly tool for SNP screening, selection, and acquisition. The executable and user's manual are available at http://www.hsph.harvard.edu/ppg/software.htm
Lotus Base: An integrated information portal for the model legume Lotus japonicus
Mun, Terry; Bachmann, Asger; Gupta, Vikas; Stougaard, Jens; Andersen, Stig U.
2016-01-01
Lotus japonicus is a well-characterized model legume widely used in the study of plant-microbe interactions. However, datasets from various Lotus studies are poorly integrated and lack interoperability. We recognize the need for a central repository that allows comprehensive and dynamic exploration of Lotus genomic and transcriptomic data. Equally important are user-friendly in-browser tools designed for data visualization and interpretation. Here, we present Lotus Base, which opens to the research community a large, established LORE1 insertion mutant population containing in excess of 120,000 lines, and serves end-users tightly integrated data from Lotus, such as the reference genome, annotated proteins, and expression profiling data. We report the integration of expression data from the L. japonicus gene expression atlas project, and the development of tools to cluster and export such data, allowing users to construct, visualize, and annotate co-expression gene networks. Lotus Base takes advantage of modern advances in browser technology to deliver powerful data interpretation for biologists. Its modular construction and publicly available application programming interface enable developers to tap into the wealth of integrated Lotus data. Lotus Base is freely accessible at: https://lotus.au.dk. PMID:28008948
VTGRAPH - GRAPHIC SOFTWARE TOOL FOR VT TERMINALS
NASA Technical Reports Server (NTRS)
Wang, C.
1994-01-01
VTGRAPH is a graphics software tool for DEC/VT or VT compatible terminals which are widely used by government and industry. It is a FORTRAN or C-language callable library designed to allow the user to deal with many computer environments which use VT terminals for window management and graphic systems. It also provides a PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. The program is transportable to many different computers which use VT terminals. With this graphics package, the user can easily design more friendly user interface programs and design PLOT10 programs on VT terminals with different computer systems. VTGRAPH was developed using the ReGis Graphics set which provides a full range of graphics capabilities. The basic VTGRAPH capabilities are as follows: window management, PLOT10 compatible drawing, generic program routines for two and three dimensional plotting, and color graphics or shaded graphics capability. The program was developed in VAX FORTRAN in 1988. VTGRAPH requires a ReGis graphics set terminal and a FORTRAN compiler. The program has been run on a DEC MicroVAX 3600 series computer operating under VMS 5.0, and has a virtual memory requirement of 5KB.
Point Analysis in Java applied to histological images of the perforant pathway: a user's account.
Scorcioni, Ruggero; Wright, Susan N; Patrick Card, J; Ascoli, Giorgio A; Barrionuevo, Germán
2008-01-01
The freeware Java tool Point Analysis in Java (PAJ), created to perform 3D point analysis, was tested in an independent laboratory setting. The input data consisted of images of the hippocampal perforant pathway from serial immunocytochemical localizations of the rat brain in multiple views at different resolutions. The low magnification set (x2 objective) comprised the entire perforant pathway, while the high magnification set (x100 objective) allowed the identification of individual fibers. A preliminary stereological study revealed a striking linear relationship between the fiber count at high magnification and the optical density at low magnification. PAJ enabled fast analysis for down-sampled data sets and a friendly interface with automated plot drawings. Noted strengths included the multi-platform support as well as the free availability of the source code, conducive to a broad user base and maximum flexibility for ad hoc requirements. PAJ has great potential to extend its usability by (a) improving its graphical user interface, (b) increasing its input size limit, (c) improving response time for large data sets, and (d) potentially being integrated with other Java graphical tools such as ImageJ.
The thyrotropin receptor mutation database: update 2003.
Führer, Dagmar; Lachmund, Peter; Nebel, Istvan-Tibor; Paschke, Ralf
2003-12-01
In 1999 we created a TSHR mutation database compiling TSHR mutations with their basic characteristics and associated clinical conditions (www.uni-leipzig.de/innere/tshr). Since then, more than 2887 users from 36 countries have logged into the TSHR mutation database and have contributed several valuable suggestions for further improvement of the database. We now present an updated and extended version of the TSHR database to which several novel features have been introduced: 1. detailed functional characteristics of all 65 mutations (43 activating and 22 inactivating mutations) reported to date, 2. 40 pedigrees with detailed information on molecular aspects, clinical courses and treatment options in patients with gain-of-function and loss-of-function germline TSHR mutations, 3. a first compilation of site-directed mutagenesis studies, 4. references with Medline links, 5. a user-friendly search tool for specific database searches and user-specific database output, and 6. an administrator tool for the submission of novel TSHR mutations. The TSHR mutation database is installed as one of the locus-specific HUGO mutation databases. It is listed under index TSHR 603372 (http://ariel.ucs.unimelb.edu.au/~cotton/glsdbq.htm) and can be accessed via www.uni-leipzig.de/innere/tshr.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G
2018-04-04
The purpose of the ADAPT code is to generate Dynamic Event Trees (DET) using a user-specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DET, which uses explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution along with stochastic modeling. When DET are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. The DET branching occurs at user-specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user-friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.
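The dynamic-event-tree mechanics that ADAPT automates can be illustrated with a toy example. The following Python sketch is not ADAPT code: the simulator stand-in, branching times, outcomes, and probabilities are all hypothetical, and it only shows how a scheduler might advance branches deterministically and split them at user-specified branching points while tracking each branch's probability.

import itertools
from dataclasses import dataclass

# Hypothetical stand-in for an external simulator: advances a scalar "plant
# temperature" at a fixed rate between two points in time.
def advance(state, t_start, t_end, heat_rate):
    return state + heat_rate * (t_end - t_start)

@dataclass
class Branch:
    branch_id: int
    parent_id: int
    time: float
    state: float        # e.g. a plant temperature
    heat_rate: float    # outcome-dependent dynamics
    probability: float

_ids = itertools.count()

def run_det(branch_times, outcomes, t_end):
    """Grow a toy dynamic event tree: each branch is advanced deterministically
    and split at every user-specified branching time into the given outcomes."""
    root = Branch(next(_ids), -1, 0.0, 300.0, 1.0, 1.0)
    tree, frontier = [root], [root]
    for t_branch in branch_times:
        new_frontier = []
        for b in frontier:
            state = advance(b.state, b.time, t_branch, b.heat_rate)
            for rate, prob in outcomes:  # e.g. "valve opens" vs. "valve sticks"
                child = Branch(next(_ids), b.branch_id, t_branch, state, rate,
                               b.probability * prob)
                tree.append(child)
                new_frontier.append(child)
        frontier = new_frontier
    for b in frontier:  # advance the surviving branches to the end of the scenario
        b.state = advance(b.state, b.time, t_end, b.heat_rate)
        b.time = t_end
    return tree, frontier

tree, leaves = run_det(branch_times=[10.0, 20.0],
                       outcomes=[(0.5, 0.9), (2.0, 0.1)], t_end=30.0)
for leaf in leaves:
    print(f"branch {leaf.branch_id}: T = {leaf.state:.1f} K, p = {leaf.probability:.3f}")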
The crustal dynamics intelligent user interface anthology
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.
2014-12-01
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, helps users, developers and their tools retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore gain transparency and reproducibility automatically. Furthermore, when the configuration of a newly started evaluation tool matches an earlier run, the system suggests reusing the results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. Website: www-miklip.dkrz.de (visitor login: guest, password: miklip).
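As a rough illustration of the plugin-and-history idea described above (not the MiKlip system's actual API), the following Python sketch registers external analysis tools as commands, hashes each tool configuration, and reuses a cached result when an identical configuration has already been run. All class, tool, and path names are invented.

import hashlib
import json
import subprocess

class EvaluationSystem:
    """Minimal sketch of a plugin registry with configuration-based result reuse."""

    def __init__(self):
        self.tools = {}      # tool name -> command template
        self.history = {}    # configuration hash -> cached result path

    def register_tool(self, name, command_template):
        # A plugged-in tool is just an external command; the language it is
        # written in does not matter to the evaluation system.
        self.tools[name] = command_template

    def run(self, name, **config):
        key = hashlib.sha1(json.dumps({"tool": name, **config},
                                      sort_keys=True).encode()).hexdigest()
        if key in self.history:                      # identical configuration seen before
            print(f"reusing cached result for {name}")
            return self.history[key]
        cmd = self.tools[name].format(**config)
        subprocess.run(cmd, shell=True, check=True)  # launch the external verification tool
        result = f"results/{key}.nc"                 # hypothetical output location
        self.history[key] = result                   # record in the history sub-system
        return result

# Hypothetical usage (tool name, flags and data paths are invented):
# evalsys = EvaluationSystem()
# evalsys.register_tool("anomaly_correlation",
#                       "accor --model {model} --obs {obs} --outdir results")
# evalsys.run("anomaly_correlation", model="CMIP5/MPI-ESM-LR", obs="HadCRUT4")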
Graeve, Catherine; McGovern, Patricia; Nachreiner, Nancy M; Ayers, Lynn
2014-01-01
Occupational health nurses use their knowledge and skills to improve the health and safety of the working population; however, companies increasingly face budget constraints and may eliminate health and safety programs. Occupational health nurses must be prepared to document their services and outcomes, and use quantitative tools to demonstrate their value to employers. The aim of this project was to create and pilot test a quantitative tool for occupational health nurses to track their activities and potential cost savings for on-site occupational health nursing services. Tool development included a pilot test in which semi-structured interviews with occupational health and safety leaders were conducted to identify current issues and products used for estimating the value of occupational health nursing services. The outcome was the creation of a tool that estimates the economic value of occupational health nursing services. The feasibility and potential value of this tool are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monaco, M.E.; Battista, T.A.; Gill, T.A.
1997-06-01
NOAA's National Ocean Service (NOS) is developing a suite of desktop geographic information system (GIS) tools to define, assess, and solve coastal resource management issues. This paper describes one component of the emerging NOS Coastal and Ocean Assessment GIS: Environmental Sensitivity Index (ESI) data with emphasis on living marine resource information. This work is underway through a unique federal, state, and private-sector partnership. The desktop GIS is a versatile, user-friendly system designed to provide coastal managers with mapping and analysis capabilities. These functions are under development using the recently generated North Carolina ESI data, with emphasis on accessing, analyzing, and mapping estuarine species distributions. Example system features include: a user-friendly front end, generation of ESI maps and tables, and custom spatial and temporal analyses. Partners in the development of the desktop system include: NOAA's Office of Ocean Resources Conservation and Assessment (ORCA) and Coastal Services Center, the Minerals Management Service (MMS), Florida Marine Research Institute, Environmental Systems Research Institute, Inc., and Research Planning, Inc. This work complements and supports MMS's Gulf-wide Information System designed to support oil-spill contingency planning.
NASA Technical Reports Server (NTRS)
Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym
2017-01-01
Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.
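The abstract does not spell out Giovanni's internal format, so the following Python sketch only illustrates the general idea: renaming dimensions to canonical names and attaching machine-friendly provenance attributes (source product, product version) that the CF Conventions alone do not require. The canonical names, alias table, and example product are assumptions for illustration.

import numpy as np

# Canonical dimension names a (hypothetical) internal standard insists on, so that
# downstream code can always find latitude, longitude and time by name.
CANONICAL_DIMS = {"lat": ("latitude", "y", "Latitude"),
                  "lon": ("longitude", "x", "Longitude"),
                  "time": ("Time", "t")}

def standardize(values, dims, attrs, product, version):
    """Rename dimensions to canonical names and attach machine-friendly provenance
    metadata that plain CF attributes do not require."""
    renamed = [next((c for c, aliases in CANONICAL_DIMS.items()
                     if d == c or d in aliases), d) for d in dims]
    out_attrs = dict(attrs)
    out_attrs.update({"source_product": product,    # which satellite/model product
                      "product_version": version})  # not covered by the CF Conventions
    return {"data": np.asarray(values), "dims": tuple(renamed), "attrs": out_attrs}

param = standardize(values=np.zeros((180, 360)),
                    dims=("Latitude", "Longitude"),
                    attrs={"units": "mm/hr", "long_name": "precipitation rate"},
                    product="TRMM_3B42", version="7")
print(param["dims"], param["attrs"]["source_product"])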
Oncogenomic portals for the visualization and analysis of genome-wide cancer data
Klonowska, Katarzyna; Czubak, Karol; Wojciechowska, Marzena; Handschuh, Luiza; Zmienko, Agnieszka; Figlerowicz, Marek; Dams-Kozlowska, Hanna; Kozlowski, Piotr
2016-01-01
Somatically acquired genomic alterations that drive oncogenic cellular processes are of great scientific and clinical interest. Since the initiation of large-scale cancer genomic projects (e.g., the Cancer Genome Project, The Cancer Genome Atlas, and the International Cancer Genome Consortium cancer genome projects), a number of web-based portals have been created to facilitate access to multidimensional oncogenomic data and assist with the interpretation of the data. The portals provide the visualization of small-size mutations, copy number variations, methylation, and gene/protein expression data that can be correlated with the available clinical, epidemiological, and molecular features. Additionally, the portals enable analysis of the gathered data with various user-friendly statistical tools. Herein, we present a highly illustrated review of seven portals, i.e., Tumorscape, UCSC Cancer Genomics Browser, ICGC Data Portal, COSMIC, cBioPortal, IntOGen, and BioProfiling.de. All of the selected portals are user-friendly and can be exploited by scientists from different cancer-associated fields, including those without a bioinformatics background. It is expected that the use of the portals will contribute to a better understanding of cancer molecular etiology and will ultimately accelerate the translation of genomic knowledge into clinical practice. PMID:26484415
Hamilton, Matthew; Mahiane, Guy; Werst, Elric; Sanders, Rachel; Briët, Olivier; Smith, Thomas; Cibulskis, Richard; Cameron, Ewan; Bhatt, Samir; Weiss, Daniel J; Gething, Peter W; Pretorius, Carel; Korenromp, Eline L
2017-02-10
Scale-up of malaria prevention and treatment needs to continue, but national strategies and budget allocations are not always evidence-based. This article presents a new modelling tool projecting malaria infection, cases and deaths to support impact evaluation, target setting and strategic planning. Nested in the Spectrum suite of programme planning tools, the model includes historic estimates of case incidence and deaths in groups aged up to 4, 5-14, and 15+ years, and prevalence of Plasmodium falciparum infection (PfPR) among children 2-9 years, for 43 sub-Saharan African countries and their 602 provinces, from the WHO and malaria atlas project. Impacts over 2016-2030 are projected for insecticide-treated nets (ITNs), indoor residual spraying (IRS), seasonal malaria chemoprevention (SMC), and effective management of uncomplicated cases (CMU) and severe cases (CMS), using statistical functions fitted to proportional burden reductions simulated in the P. falciparum dynamic transmission model OpenMalaria. In projections for Nigeria, ITNs, IRS, CMU, and CMS scale-up reduced health burdens in all age groups, with the largest proportional and especially absolute reductions in children up to 4 years old. Impacts increased from 8 to 10 years following scale-up, reflecting dynamic effects. For scale-up of each intervention to 80% effective coverage, CMU had the largest impacts across all health outcomes, followed by ITNs and IRS; CMS and SMC conferred additional small but rapid mortality impacts. Spectrum-Malaria's user-friendly interface and intuitive display of baseline data and scenario projections hold promise to facilitate capacity building and policy dialogue in malaria programme prioritization. The module's linking to the OneHealth Tool for costing will support use of the software for strategic budget allocation. In settings with moderately low coverage levels, such as Nigeria, improving case management and achieving universal coverage with ITNs could achieve considerable burden reductions. Projections remain to be refined and validated with local expert input data and actual policy scenarios.
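The statistical impact functions fitted to OpenMalaria output are not given in the abstract; purely to show the mechanics of applying such a fitted curve, the following Python sketch projects a baseline case burden under a hypothetical coverage scale-up. The functional form, coefficients, and numbers are invented.

import numpy as np

def proportional_reduction(coverage, saturation=0.65, steepness=4.0):
    """Hypothetical fitted impact curve: fraction of baseline cases averted as a
    function of effective intervention coverage (0..1)."""
    return saturation * (1.0 - np.exp(-steepness * np.asarray(coverage, dtype=float)))

def project_cases(baseline_cases, coverage_by_year):
    """Apply the impact curve year by year to a constant baseline case burden."""
    return baseline_cases * (1.0 - proportional_reduction(coverage_by_year))

# Illustrative scale-up of ITN coverage from 30% to 80% over five years.
coverage_path = [0.30, 0.45, 0.60, 0.70, 0.80]
projected = project_cases(baseline_cases=1_000_000, coverage_by_year=coverage_path)
for year, cases in zip(range(2016, 2021), projected):
    print(year, f"{cases:,.0f} projected cases")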
Classifying and profiling Social Networking Site users: a latent segmentation approach.
Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel
2011-09-01
Social Networking Sites (SNSs) have shown exponential growth in recent years. The first step toward an efficient use of SNSs is an understanding of individuals' behaviors within these sites. In this research, we obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments were obtained. The "introvert" and "novel" users are the more occasional. They utilize SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency; these users tend to perform marketing-related activities such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands. Second, they should match SNS users' profiles with their market targets to use SNSs as marketing tools. Finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.
Dolecheck, K A; Heersche, G; Bewley, J M
2016-12-01
Assessing the economic implications of investing in automated estrus detection (AED) technologies can be overwhelming for dairy producers. The objectives of this study were to develop new regression equations for estimating the cost per day open (DO) and to apply the results to create a user-friendly, partial budget, decision support tool for investment analysis of AED technologies. In the resulting decision support tool, the end user can adjust herd-specific inputs regarding general management, current reproductive management strategies, and the proposed AED system. Outputs include expected DO, reproductive cull rate, net present value, and payback period for the proposed AED system. Utility of the decision support tool was demonstrated with an example dairy herd created using data from DairyMetrics (Dairy Records Management Systems, Raleigh, NC), Food and Agricultural Policy Research Institute (Columbia, MO), and published literature. Resulting herd size, rolling herd average milk production, milk price, and feed cost were 323 cows, 10,758kg, $0.41/kg, and $0.20/kg of dry matter, respectively. Automated estrus detection technologies with 2 levels of initial system cost (low: $5,000 vs. high: $10,000), tag price (low: $50 vs. high: $100), and estrus detection rate (low: 60% vs. high: 80%) were compared over a 7-yr investment period. Four scenarios were considered in a demonstration of the investment analysis tool: (1) a herd using 100% visual observation for estrus detection before adopting 100% AED, (2) a herd using 100% visual observation before adopting 75% AED and 25% visual observation, (3) a herd using 100% timed artificial insemination (TAI) before adopting 100% AED, and (4) a herd using 100% TAI before adopting 75% AED and 25% TAI. Net present value in scenarios 1 and 2 was always positive, indicating a positive investment situation. Net present value in scenarios 3 and 4 was always positive in combinations using a $50 tag price, and in scenario 4, the $5,000, $100, and 80% combination. Overall, the payback period ranged from 1.6 yr to greater than 10 yr. Investment analysis demonstration results were highly dependent on assumptions, especially AED system initial investment and labor costs. Dairy producers can use herd-specific inputs with the cost per day open regression equations and the decision support tool to estimate individual herd results. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
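The tool's exact partial-budget equations are not reproduced in the abstract, but the net-present-value and payback-period arithmetic it reports is standard. The following Python sketch applies those textbook formulas to a hypothetical low-cost AED scenario; the investment and benefit figures are illustrative only.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows; year 0 is the up-front investment."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until the cumulative cash flow first turns non-negative (None if never)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

# Hypothetical 7-year investment: a $5,000 system plus $50 tags for 323 cows up
# front, and an assumed $6,500/yr benefit from fewer days open and less labor.
initial_cost = -(5_000 + 50 * 323)
cash_flows = [initial_cost] + [6_500] * 7

print(f"NPV at an 8% discount rate: ${npv(0.08, cash_flows):,.0f}")
print(f"Payback period: {payback_period(cash_flows)} years")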
Plastid: nucleotide-resolution analysis of next-generation sequencing and genomics data.
Dunn, Joshua G; Weissman, Jonathan S
2016-11-22
Next-generation sequencing (NGS) informs many biological questions with unprecedented depth and nucleotide resolution. These assays have created a need for analytical tools that enable users to manipulate data nucleotide-by-nucleotide robustly and easily. Furthermore, because many NGS assays encode information jointly within multiple properties of read alignments - for example, in ribosome profiling, the locations of ribosomes are jointly encoded in alignment coordinates and length - analytical tools are often required to extract the biological meaning from the alignments before analysis. Many assay-specific pipelines exist for this purpose, but there remains a need for user-friendly, generalized, nucleotide-resolution tools that are not limited to specific experimental regimes or analytical workflows. Plastid is a Python library designed specifically for nucleotide-resolution analysis of genomics and NGS data. As such, Plastid is designed to extract assay-specific information from read alignments while retaining generality and extensibility to novel NGS assays. Plastid represents NGS and other biological data as arrays of values associated with genomic or transcriptomic positions, and contains configurable tools to convert data from a variety of sources to such arrays. Plastid also includes numerous tools to manipulate even discontinuous genomic features, such as spliced transcripts, with nucleotide precision. Plastid automatically handles conversion between genomic and feature-centric coordinates, accounting for splicing and strand, freeing users of burdensome accounting. Finally, Plastid's data models use consistent and familiar biological idioms, enabling even beginners to develop sophisticated analytical workflows with minimal effort. Plastid is a versatile toolkit that has been used to analyze data from multiple NGS assays, including RNA-seq, ribosome profiling, and DMS-seq. It forms the genomic engine of our ORF annotation tool, ORF-RATER, and is readily adapted to novel NGS assays. Examples, tutorials, and extensive documentation can be found at https://plastid.readthedocs.io .
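The following Python sketch is not Plastid's API; it only illustrates, with plain numpy, the core idea of turning read alignments into an array of counts indexed by genomic position, here using a fixed P-site offset as is common in ribosome profiling. The alignment tuples and offset are toy assumptions.

import numpy as np

def counts_per_nucleotide(alignments, chrom_length, p_site_offset=12):
    """Map each read alignment to a single nucleotide (a fixed P-site offset from
    its 5' end, as is common in ribosome profiling) and accumulate counts in an
    array indexed by genomic position."""
    counts = np.zeros(chrom_length, dtype=int)
    for start, length, strand in alignments:
        pos = start + p_site_offset if strand == "+" else start + length - 1 - p_site_offset
        if 0 <= pos < chrom_length:
            counts[pos] += 1
    return counts

# Toy alignments: (0-based start, read length, strand).
reads = [(100, 28, "+"), (100, 29, "+"), (250, 30, "-")]
cov = counts_per_nucleotide(reads, chrom_length=1000)
print("positions with signal:", np.flatnonzero(cov), "counts:", cov[np.flatnonzero(cov)])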
SNPranker 2.0: a gene-centric data mining tool for diseases associated SNP prioritization in GWAS.
Merelli, Ivan; Calabria, Andrea; Cozzi, Paolo; Viti, Federica; Mosca, Ettore; Milanesi, Luciano
2013-01-01
The capability of correlating specific genotypes with human diseases is a complex issue in spite of all the advantages arising from high-throughput technologies, such as Genome Wide Association Studies (GWAS). New tools for the interpretation of genetic variants and for Single Nucleotide Polymorphism (SNP) prioritization are needed. Given a list of the most relevant SNPs statistically associated with a specific pathology as the result of a genotyping study, a critical issue is the identification of genes that are effectively related to the disease by re-scoring the importance of the identified genetic variations. Vice versa, given a list of genes, it can be of great importance to predict which SNPs can be involved in the onset of a particular disease, in order to focus the research on their effects. We propose a new bioinformatics approach to support biological data mining in the analysis and interpretation of SNPs associated with pathologies. This system can be employed to design custom genotyping chips for disease-oriented studies and to re-score GWAS results. The proposed method relies on (1) the integration of data from public resources using a gene-centric database design, (2) the evaluation of a set of static biomolecular annotations, defined as features, and (3) a SNP scoring function, which computes SNP scores using parameters and weights set by users. We employed a machine learning classifier to set default feature weights and an ontological annotation layer to enable the enrichment of the input gene set. We implemented our method as a web tool called SNPranker 2.0 (http://www.itb.cnr.it/snpranker), improving our first published release of this system. A user-friendly interface allows the input of a list of genes, SNPs or a biological process, and customization of the feature set with relative weights. As a result, SNPranker 2.0 returns a list of SNPs, localized within input and ontologically enriched genes, combined with their prioritization scores. Different databases and resources are already available for SNP annotation, but they do not prioritize or re-score SNPs relying on a priori biomolecular knowledge. SNPranker 2.0 attempts to fill this gap through a user-friendly integrated web resource. End users, such as researchers in medical genetics and epidemiology, may find in SNPranker 2.0 a new tool for data mining and interpretation able to support SNP analysis. Possible scenarios are GWAS data re-scoring, SNP selection for custom genotyping arrays and SNP/disease association studies.
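SNPranker's actual features and trained weights are not listed in the abstract; the following Python sketch shows only the generic shape of a weighted scoring function over per-SNP annotations, with invented feature names, values, and weights.

# Hypothetical per-SNP annotation features (values normalized to 0..1).
SNP_FEATURES = {
    "rs0000001": {"in_coding_region": 1.0, "conservation": 0.9, "regulatory_region": 0.0},
    "rs0000002": {"in_coding_region": 0.0, "conservation": 0.4, "regulatory_region": 1.0},
    "rs0000003": {"in_coding_region": 0.0, "conservation": 0.1, "regulatory_region": 0.0},
}

# User-adjustable weights (a trained classifier could supply the defaults instead).
WEIGHTS = {"in_coding_region": 0.5, "conservation": 0.3, "regulatory_region": 0.2}

def score(features, weights):
    """Weighted sum of annotation features, normalized by the total weight."""
    total = sum(weights.values())
    return sum(w * features.get(name, 0.0) for name, w in weights.items()) / total

ranking = sorted(((score(f, WEIGHTS), snp) for snp, f in SNP_FEATURES.items()), reverse=True)
for s, snp in ranking:
    print(f"{snp}  score = {s:.2f}")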
The APA Style Converter: a Web-based interface for converting articles to APA style for publication.
Li, Ping; Cunningham, Krystal
2005-05-01
The APA Style Converter is a Web-based tool with which authors may prepare their articles in APA style according to the APA Publication Manual (5th ed.). The Converter provides a user-friendly interface that allows authors to copy and paste text and upload figures through the Web, and it automatically converts all texts, references, and figures to a structured article in APA style. The output is saved in PDF or RTF format, ready for either electronic submission or hardcopy printing.
Software Models Impact Stresses
NASA Technical Reports Server (NTRS)
Hanshaw, Timothy C.; Roy, Dipankar; Toyooka, Mark
1991-01-01
Generalized Impact Stress Software designed to assist engineers in predicting stresses caused by variety of impacts. Program straightforward, simple to implement on personal computers, "user friendly", and handles variety of boundary conditions applied to struck body being analyzed. Applications include mathematical modeling of motions and transient stresses of spacecraft, analysis of slamming of piston, of fast valve shutoffs, and play of rotating bearing assembly. Provides fast and inexpensive analytical tool for analysis of stresses and reduces dependency on expensive impact tests. Written in FORTRAN 77. Requires use of commercial software package PLOT88.
Can Accelerators Accelerate Learning?
NASA Astrophysics Data System (ADS)
Santos, A. C. F.; Fonseca, P.; Coelho, L. F. S.
2009-03-01
The 'Young Talented' education program developed by the Brazilian State Funding Agency (FAPERJ) [1] makes it possible for students from public high schools to perform activities in scientific laboratories. In the Atomic and Molecular Physics Laboratory at the Federal University of Rio de Janeiro (UFRJ), the students are confronted with modern research tools such as the 1.7 MV ion accelerator. Being a user-friendly machine, the accelerator is easily managed by the students, who can perform simple hands-on activities that stimulate interest in physics and bring them close to modern laboratory techniques.
JEDI: Jobs and Economic Development Impact Model; NREL (National Renewable Energy Laboratory)
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Jobs and Economic Development Impact (JEDI) models are user-friendly tools that estimate the economic impacts of constructing and operating power generation and biofuel plants at the local (usually state) level. First developed by NREL’s researchers to model wind energy jobs and impacts, JEDI has been expanded to also estimate the economic impacts of biofuels, coal, conventional hydro, concentrating solar power, geothermal, marine and hydrokinetic power, natural gas, photovoltaics, and transmission lines. This fact sheet focuses on JEDI for wind energy projects.
Tandem Mass Spectrum Sequencing: An Alternative to Database Search Engines in Shotgun Proteomics.
Muth, Thilo; Rapp, Erdmann; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
Protein identification via database searches has become the gold standard in mass spectrometry-based shotgun proteomics. However, as the quality of tandem mass spectra improves, direct mass spectrum sequencing gains interest as a database-independent alternative. In this chapter, the general principle of this so-called de novo sequencing is introduced along with the pitfalls and challenges of the technique. The main tools available are presented, with a focus on user-friendly open-source software that can be directly applied in everyday proteomic workflows.
JEDI: Jobs and Economic Development Impact Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Jobs and Economic Development Impact (JEDI) models are user-friendly tools that estimate the economic impacts of constructing and operating power generation and biofuel plants at the local (usually state) level. First developed by NREL's researchers to model wind energy jobs and impacts, JEDI has been expanded to also estimate the economic impacts of biofuels, coal, conventional hydro, concentrating solar power, geothermal, marine and hydrokinetic power, natural gas, photovoltaics, and transmission lines. This fact sheet focuses on JEDI for wind energy projects and is revised with 2017 figures.
Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports
NASA Technical Reports Server (NTRS)
Bauman, William H., III; Volmer, Matthew; Sharp, David; Spratt, Scott; Lafosse, Richard A.
2007-01-01
Objective: provide forecasters with a "first guess" climatological lightning probability tool, focused on Space Shuttle landings and NWS TAFs, for four circles around each site (5, 10, 20 and 30 n mi) and three time intervals (hourly, every 3 hr and every 6 hr). The tool is based on NLDN gridded lightning data, stratified by flow regime, for the warm-season months of May-September in the years 1989-2004. The gridded data and available code yield squares rather than circles. Over 850 spreadsheets were converted into a manageable, user-friendly, web-based GUI.
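The underlying climatological calculation implied by these items can be sketched simply: for a given flow regime, hour, and radius, the probability is the fraction of historical days with at least one flash within that radius of the site. The following Python example uses invented toy records and helper names.

from collections import defaultdict

def lightning_climatology(daily_records, radius_nm):
    """daily_records: iterable of (flow_regime, hour, [(distance_nm, flash_count), ...])
    aggregated per historical day. Returns P(lightning) per (regime, hour) for the
    chosen radius, i.e. days with at least one flash divided by days observed."""
    days_with, days_total = defaultdict(int), defaultdict(int)
    for regime, hour, day_flashes in daily_records:
        key = (regime, hour)
        days_total[key] += 1
        if any(dist <= radius_nm and n > 0 for dist, n in day_flashes):
            days_with[key] += 1
    return {k: days_with[k] / days_total[k] for k in days_total}

# Toy record: two historical days in a "SW flow" regime at 20 UTC.
records = [
    ("SW", 20, [(3.2, 5), (12.0, 1)]),   # flashes at 3.2 and 12 n mi from the site
    ("SW", 20, [(25.0, 2)]),             # nearest flash at 25 n mi
]
print(lightning_climatology(records, radius_nm=5))    # {('SW', 20): 0.5}
print(lightning_climatology(records, radius_nm=30))   # {('SW', 20): 1.0}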
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
Simulation of Needle-Type Corona Electrodes by the Finite Element Method
NASA Astrophysics Data System (ADS)
Yang, Shiyou; José Márcio, Machado; Nancy Mieko, Abe; Angelo, Passaro
2007-12-01
This paper describes a software tool, called LEVSOFT, suitable for electric field simulations of corona electrodes by the Finite Element Method (FEM). Special attention was paid to the user-friendly construction of geometries with corners and sharp points, and to the fast generation of highly refined triangular meshes and field maps. Self-adaptive meshing was also implemented. These customized features make the code attractive for the simulation of needle-type corona electrodes. Some case examples involving needle-type electrodes are presented.
OLIFE: Tight Binding Code for Transmission Coefficient Calculation
NASA Astrophysics Data System (ADS)
Mijbil, Zainelabideen Yousif
2018-05-01
A new and human-friendly transport calculation code has been developed. It requires a simple tight-binding Hamiltonian as its only input file and uses a convenient graphical user interface to control calculations. The effect of a magnetic field on the junction has also been included. Furthermore, the transmission coefficient can be calculated between any two points on the scatterer, which gives high flexibility in checking the system. OLIFE can therefore be recommended as a tool for pretesting, studying, and teaching electron transport in molecular devices, saving considerable time and effort.
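OLIFE's input format and internals are not shown in the abstract; as background on what a tight-binding transmission calculation involves, the following Python sketch implements the standard non-equilibrium Green's function recipe for a short chain coupled to two semi-infinite one-dimensional leads. The Hamiltonian, hoppings, and energies are illustrative.

import numpy as np

def lead_surface_g(E, eps0, t, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D tight-binding lead,
    satisfying g = 1 / (E - eps0 - t**2 * g); keep the root with Im(g) < 0."""
    z = E + 1j * eta - eps0
    root = np.sqrt(z * z - 4.0 * t * t + 0j)
    g1, g2 = (z - root) / (2 * t * t), (z + root) / (2 * t * t)
    return g1 if g1.imag < 0 else g2

def transmission(E, H, t_lead=1.0, t_couple=1.0, eps_lead=0.0):
    """Landauer transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a scatterer
    with tight-binding Hamiltonian H, attached to leads at its first and last sites."""
    n = H.shape[0]
    g = lead_surface_g(E, eps_lead, t_lead)
    sigma_L = np.zeros((n, n), complex); sigma_L[0, 0] = t_couple**2 * g
    sigma_R = np.zeros((n, n), complex); sigma_R[-1, -1] = t_couple**2 * g
    G = np.linalg.inv(E * np.eye(n) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G @ gamma_R @ G.conj().T)))

# Perfect 4-site chain with the same hopping as the leads: T(E) -> 1 inside the band.
H = np.diag([1.0] * 3, 1) + np.diag([1.0] * 3, -1)
for E in (-1.5, 0.0, 1.5):
    print(f"T({E:+.1f}) = {transmission(E, H):.3f}")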
NASA Technical Reports Server (NTRS)
Shih, Ming H.; Soni, Bharat K.
1993-01-01
The issue of time efficiency in grid generation is addressed by developing a user-friendly graphical interface for interactive/automatic construction of structured grids around complex turbomachinery/axisymmetric configurations. The accuracy and fidelity of geometry modeling are achieved by adopting the nonuniform rational B-spline (NURBS) representation. A customized interactive grid generation code, TIGER, has been developed to facilitate the grid generation process for complicated internal, external, and internal-external turbomachinery flow-field simulations. The FORMS Library is utilized to build the user-friendly graphical interface. The algorithm allows a user to redistribute grid points interactively on curves/surfaces using the NURBS formulation with an accurate geometric definition. TIGER's features include multiblock, multiduct/shroud, multi-blade-row, uneven blade count, and patched/overlapping block interfaces. It has been applied to generate grids for various complicated turbomachinery geometries, as well as rocket and missile configurations.
CLUSTERnGO: a user-defined modelling platform for two-stage clustering of time-series data.
Fidaner, Işık Barış; Cankorur-Cetinkaya, Ayca; Dikicioglu, Duygu; Kirdar, Betul; Cemgil, Ali Taylan; Oliver, Stephen G
2016-02-01
Simple bioinformatic tools are frequently used to analyse time-series datasets regardless of their ability to deal with transient phenomena, limiting the meaningful information that may be extracted from them. This situation requires the development and exploitation of tailor-made, easy-to-use and flexible tools designed specifically for the analysis of time-series datasets. We present a novel statistical application called CLUSTERnGO, which uses a model-based clustering algorithm that fulfils this need. This algorithm involves two components of operation. Component 1 constructs a Bayesian non-parametric model (Infinite Mixture of Piecewise Linear Sequences) and Component 2, which applies a novel clustering methodology (Two-Stage Clustering). The software can also assign biological meaning to the identified clusters using an appropriate ontology. It applies multiple hypothesis testing to report the significance of these enrichments. The algorithm has a four-phase pipeline. The application can be executed using either command-line tools or a user-friendly Graphical User Interface. The latter has been developed to address the needs of both specialist and non-specialist users. We use three diverse test cases to demonstrate the flexibility of the proposed strategy. In all cases, CLUSTERnGO not only outperformed existing algorithms in assigning unique GO term enrichments to the identified clusters, but also revealed novel insights regarding the biological systems examined, which were not uncovered in the original publications. The C++ and QT source codes, the GUI applications for Windows, OS X and Linux operating systems and user manual are freely available for download under the GNU GPL v3 license at http://www.cmpe.boun.edu.tr/content/CnG. sgo24@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Barakat, Mohamed; Ortet, Philippe; Whitworth, David E
2013-04-20
Regulatory proteins (RPs) such as transcription factors (TFs) and two-component system (TCS) proteins control how prokaryotic cells respond to changes in their external and/or internal state. Identification and annotation of TFs and TCSs is non-trivial, and between-genome comparisons are often confounded by different standards in annotation. There is a need for user-friendly, fast and convenient tools to allow researchers to overcome the inherent variability in annotation between genome sequences. We have developed the web-server P2RP (Predicted Prokaryotic Regulatory Proteins), which enables users to identify and annotate TFs and TCS proteins within their sequences of interest. Users can input amino acid or genomic DNA sequences, and predicted proteins therein are scanned for the possession of DNA-binding domains and/or TCS domains. RPs identified in this manner are categorised into families, unambiguously annotated, and a detailed description of their features generated, using an integrated software pipeline. P2RP results can then be outputted in user-specified formats. Biologists have an increasing need for fast and intuitively usable tools, which is why P2RP has been developed as an interactive system. As well as assisting experimental biologists to interrogate novel sequence data, it is hoped that P2RP will be built into genome annotation pipelines and re-annotation processes, to increase the consistency of RP annotation in public genomic sequences. P2RP is the first publicly available tool for predicting and analysing RP proteins in users' sequences. The server is freely available and can be accessed along with documentation at http://www.p2rp.org.
rnaQUAST: a quality assessment tool for de novo transcriptome assemblies.
Bushmanova, Elena; Antipov, Dmitry; Lapidus, Alla; Suvorov, Vladimir; Prjibelski, Andrey D
2016-07-15
The ability to generate large RNA-Seq datasets has created a demand for both de novo and reference-based transcriptome assemblers. However, while many transcriptome assemblers are now available, there is still no unified quality assessment tool for RNA-Seq assemblies. We present rnaQUAST, a tool for evaluating RNA-Seq assembly quality and benchmarking transcriptome assemblers using a reference genome and gene database. rnaQUAST calculates various metrics that demonstrate completeness and correctness levels of the assembled transcripts, and outputs them in a user-friendly report. rnaQUAST is implemented in Python and is freely available at http://bioinf.spbau.ru/en/rnaquast ap@bioinf.spbau.ru Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Implementing WebGL and HTML5 in Macromolecular Visualization and Modern Computer-Aided Drug Design.
Yuan, Shuguang; Chan, H C Stephen; Hu, Zhenquan
2017-06-01
Web browsers have long been recognized as potential platforms for remote macromolecule visualization. However, the difficulty of transferring large-scale data to clients and the lack of native support for hardware-accelerated applications in the browser have undermined the feasibility of such utilities. With the introduction of WebGL and HTML5 technologies in recent years, it is now possible to exploit the power of a graphics-processing unit (GPU) from a browser without any third-party plugin. Many new tools have been developed for biological molecule visualization and modern drug discovery. In contrast to traditional offline tools, WebGL- and HTML5-based tools offer real-time computing, interactive data analysis, and cross-platform operation, facilitating biological research in a more efficient and user-friendly way. Copyright © 2017 Elsevier Ltd. All rights reserved.
New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS
NASA Astrophysics Data System (ADS)
Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.
2016-12-01
Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging, and tuning parallel programs becomes intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
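AIMS itself instruments FORTRAN programs on the iPSC/860, so the following Python decorator is only a conceptual stand-in: it shows how automatic instrumentation can emit timestamped enter/exit trace events, tagged with a (hypothetical) process rank, for a visualizer to render as a timeline.

import functools
import json
import os
import time

TRACE = []  # in-memory event buffer; a real monitor would stream this to disk

def instrument(func):
    """Wrap a routine so every call emits timestamped enter/exit trace events,
    tagged with a (hypothetical) process rank for per-processor display later."""
    rank = int(os.environ.get("RANK", 0))
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append({"event": "enter", "routine": func.__name__,
                      "rank": rank, "t": time.perf_counter()})
        try:
            return func(*args, **kwargs)
        finally:
            TRACE.append({"event": "exit", "routine": func.__name__,
                          "rank": rank, "t": time.perf_counter()})
    return wrapper

@instrument
def exchange_boundaries(n):
    time.sleep(0.001 * n)   # stand-in for communication work

@instrument
def relax_grid(n):
    time.sleep(0.002 * n)   # stand-in for computation

for step in range(3):
    exchange_boundaries(step)
    relax_grid(step)

print(json.dumps(TRACE[:4], indent=2))  # a visualizer would render these as a timeline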
Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M
2002-07-01
The availability of entire genome sequences is shifting scientific interest towards the large-scale identification of genome function. In the near future, data about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing these data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors that provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. The tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, interpretation of large-scale microarray data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.
Fleisher, Linda; Ruggieri, Dominique G.; Miller, Suzanne M.; Manne, Sharon; Albrecht, Terrance; Buzaglo, Joanne; Collins, Michael A.; Katz, Michael; Kinzy, Tyler G.; Liu, Tasnuva; Manning, Cheri; Charap, Ellen Specker; Millard, Jennifer; Miller, Dawn M.; Poole, David; Raivitch, Stephanie; Roach, Nancy; Ross, Eric A.; Meropol, Neal J.
2014-01-01
Objective: This article describes the rigorous development process and initial feedback for PRE-ACT (Preparatory Education About Clinical Trials), a web-based intervention designed to improve preparation for decision making in cancer clinical trials. Methods: The multi-step process included stakeholder input, formative research, user testing and feedback. Diverse teams (researchers, advocates and developers) participated in content refinement, identification of actors, and development of video scripts. Patient feedback was provided in the final production period and through a vanguard group (N = 100) from the randomized trial. Results: Patients/advocates confirmed barriers to cancer clinical trial participation, including lack of awareness and knowledge, fear of side effects, logistical concerns, and mistrust. Patients indicated they liked the tool’s user-friendly nature, the organized and comprehensive presentation of the subject matter, and the clarity of the videos. Conclusion: The development process serves as an example of operationalizing best practice approaches and highlights the value of a multi-disciplinary team in developing a theory-based, sophisticated tool that patients found useful in their decision-making process. Practice implications: Best practice approaches can be addressed and are important to ensure evidence-based tools that are of value to patients, and this supports the usefulness of a process map in the development of e-health tools. PMID:24813474
CARE 3 user-friendly interface user's guide
NASA Technical Reports Server (NTRS)
Martensen, A. L.
1987-01-01
CARE 3 predicts the unreliability of highly reliable reconfigurable fault-tolerant systems that include redundant computers or computer systems. CARE3MENU is a user-friendly interface used to create an input for the CARE 3 program. The CARE3MENU interface has been designed to minimize user input errors. Although a CARE3MENU session may be successfully completed and all parameters may be within specified limits or ranges, the CARE 3 program is not guaranteed to produce meaningful results if the user incorrectly interprets the CARE 3 stochastic model. The CARE3MENU User Guide provides complete information on how to create a CARE 3 model with the interface. The CARE3MENU interface runs under the VAX/VMS operating system.
Boman, John H; Stogner, John; Miller, Bryan Lee
2013-01-01
While it is commonly understood that the substance use of peers influences an individual's substance use, much less is understood about the interplay between substance use and friendship quality. Using a sample of 2,148 emerging adults nested within 1,074 dyadic friendships, this study separately investigates how concordance and discordance in binge drinking and marijuana use between friends is related to each friend's perceptions of friendship quality. Because "friendship quality" is a complex construct, we employ a measure containing five sub-elements--companionship, a lack of conflict, willingness to help a friend, relationship security, and closeness. Results for both binge drinking and marijuana use reveal that individuals in friendship pairs who are concordant in their substance use perceive significantly higher perceptions of friendship quality than individuals in dyads who are dissimilar in substance use. Specifically, concordant binge drinkers estimate significantly higher levels of companionship, relationship security, and willingness to help their friend than concordant non-users, discordant users, and discordant non-users. However, the highest amount of conflict in friendships is found when both friends engage in binge drinking and marijuana use. Several interpretations of these findings are discussed. Overall, concordance between friends' binge drinking and marijuana use appears to help some elements of friendship quality and harm others.
Pulse sequence programming in a dynamic visual environment: SequenceTree.
Magland, Jeremy F; Li, Cheng; Langham, Michael C; Wehrli, Felix W
2016-01-01
To describe SequenceTree, an open source, integrated software environment for implementing MRI pulse sequences and, ideally, exporting them to actual MRI scanners. The software is a user-friendly alternative to vendor-supplied pulse sequence design and editing tools and is suited for programmers and nonprogrammers alike. The integrated user interface was programmed using the Qt4/C++ toolkit. As parameters and code are modified, the pulse sequence diagram is automatically updated within the user interface. Several aspects of pulse programming are handled automatically, allowing users to focus on higher-level aspects of sequence design. Sequences can be simulated using a built-in Bloch equation solver and then exported for use on a Siemens MRI scanner. Ideally, other types of scanners will be supported in the future. SequenceTree has been used for 8 years in our laboratory and elsewhere and has contributed to more than 50 peer-reviewed publications in areas such as cardiovascular imaging, solid state and nonproton NMR, MR elastography, and high-resolution structural imaging. SequenceTree is an innovative, open source, visual pulse sequence environment for MRI combining simplicity with flexibility and is ideal both for advanced users and users with limited programming experience. © 2015 Wiley Periodicals, Inc.
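SequenceTree's built-in Bloch solver is not reproduced here; as a minimal illustration of what simulating a sequence with the Bloch equations means, the following Python sketch iterates a perfectly spoiled gradient-echo sequence to steady state and checks it against the closed-form expression. The flip angle, TR, and T1 are arbitrary example values.

import numpy as np

def spgr_steady_state(flip_deg, tr_ms, t1_ms, n_reps=500):
    """Toy hard-pulse Bloch iteration for a perfectly spoiled gradient-echo sequence:
    rotate M by the flip angle, record |Mxy| right after the pulse, spoil the
    transverse magnetization, then let Mz relax toward equilibrium over TR."""
    alpha = np.radians(flip_deg)
    e1 = np.exp(-tr_ms / t1_ms)
    mz, signal = 1.0, 0.0
    for _ in range(n_reps):
        signal = mz * np.sin(alpha)      # transverse magnetization after excitation
        mz = mz * np.cos(alpha)          # longitudinal component left after the pulse
        mz = 1.0 + (mz - 1.0) * e1       # T1 recovery during TR (Mxy assumed spoiled)
    return signal

# Check the iteration against the closed-form spoiled gradient-echo steady state.
flip, tr, t1 = 15.0, 20.0, 1000.0
e1 = np.exp(-tr / t1)
analytic = np.sin(np.radians(flip)) * (1 - e1) / (1 - e1 * np.cos(np.radians(flip)))
print(f"iterated: {spgr_steady_state(flip, tr, t1):.4f}   analytic: {analytic:.4f}")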
MDANSE: An Interactive Analysis Environment for Molecular Dynamics Simulations.
Goret, G; Aoun, B; Pellegrini, E
2017-01-23
The MDANSE software (Molecular Dynamics Analysis of Neutron Scattering Experiments) is presented. It is an interactive application for postprocessing molecular dynamics (MD) simulations. Given the widespread use of MD simulations in the material and biomolecular sciences to gain better insight into experimental techniques such as thermal neutron scattering (TNS), the development of MDANSE has focused on providing a user-friendly, interactive, graphical user interface for analyzing many trajectories in the same session and running several analyses simultaneously, independently of the interface. This first version of MDANSE already offers a broad range of analyses, and the application has been designed to facilitate the introduction of new analyses into the framework. All this makes MDANSE a valuable tool for extracting useful information from trajectories produced by a wide range of MD codes.
Recent developments in the CCP-EM software suite.
Burnley, Tom; Palmer, Colin M; Winn, Martyn
2017-06-01
As part of its remit to provide computational support to the cryo-EM community, the Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) has produced a software framework which enables easy access to a range of programs and utilities. The resulting software suite incorporates contributions from different collaborators by encapsulating them in Python task wrappers, which are then made accessible via a user-friendly graphical user interface as well as a command-line interface suitable for scripting. The framework includes tools for project and data management. An overview of the design of the framework is given, together with a survey of the functionality at different levels. The current CCP-EM suite has particular strength in the building and refinement of atomic models into cryo-EM reconstructions, which is described in detail.
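The CCP-EM wrapper classes themselves are not shown in the abstract; the following Python sketch only illustrates the pattern described: a task object that declares its parameters, builds a command line for an external program, and can be driven identically from a GUI or a script. The program name and flags are hypothetical.

import subprocess
from dataclasses import dataclass, field

@dataclass
class Task:
    """Minimal task wrapper: declares its parameters, builds a command line, and
    can be launched identically from a GUI form or from a script."""
    program: str
    parameters: dict = field(default_factory=dict)

    def command(self):
        args = [self.program]
        for flag, value in self.parameters.items():
            args += [f"--{flag}", str(value)]
        return args

    def run(self, dry_run=True):
        cmd = self.command()
        if dry_run:                      # a GUI might preview the command first
            print("would run:", " ".join(cmd))
            return None
        return subprocess.run(cmd, check=True, capture_output=True, text=True)

# Hypothetical model-refinement task; the program name and flags are illustrative.
refine = Task("refine_model", {"map": "emd_1234.mrc", "model": "start.pdb", "resolution": 3.2})
refine.run(dry_run=True)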
Lynx: a database and knowledge extraction engine for integrative medicine
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788
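Lynx's own prioritization algorithms are not specified in the abstract; as a reminder of the standard machinery behind many enrichment analyses, the following Python sketch computes a one-sided hypergeometric over-representation p-value with scipy. The gene lists and background size are toy values.

from scipy.stats import hypergeom

def enrichment_p(study_genes, annotated_genes, background_size):
    """One-sided hypergeometric p-value for over-representation of an annotation
    term among a study gene list (the classic gene-set enrichment test)."""
    study, annotated = set(study_genes), set(annotated_genes)
    overlap = len(study & annotated)
    # P(X >= overlap) when drawing len(study) genes from the background at random.
    return hypergeom.sf(overlap - 1, background_size, len(annotated), len(study))

# Toy example: 5 of 40 study genes carry a term annotated to 100 of 20,000 genes.
study = [f"G{i}" for i in range(40)]
term_genes = [f"G{i}" for i in range(5)] + [f"H{i}" for i in range(95)]
print(f"p = {enrichment_p(study, term_genes, background_size=20_000):.2e}")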
Zuberbuhler, Bruno; Galloway, Peter; Reddy, Aravind; Saldana, Manuel; Gale, Richard
2007-12-01
The aim was to develop a software tool for refractive surgeons using a standard, user-friendly, web-based interface, providing the user with a secure environment to protect large volumes of patient data. The software application was named "Internet-based refractive analysis" (IBRA) and was programmed with the computer languages PHP, HTML and JavaScript, attached to the open-source MySQL database. IBRA supported internationally accepted presentation methods, including the stability chart, the predictability chart and the safety chart; it was able to perform vector analysis for the course of a single patient or for group data. With the integrated nomogram calculation, treatment could be customised to reduce the postoperative refractive error. Multicenter functions permitted quality-control comparisons between different surgeons and laser units.
PomBase: a comprehensive online resource for fission yeast
Wood, Valerie; Harris, Midori A.; McDowall, Mark D.; Rutherford, Kim; Vaughan, Brendan W.; Staines, Daniel M.; Aslett, Martin; Lock, Antonia; Bähler, Jürg; Kersey, Paul J.; Oliver, Stephen G.
2012-01-01
PomBase (www.pombase.org) is a new model organism database established to provide access to comprehensive, accurate, and up-to-date molecular data and biological information for the fission yeast Schizosaccharomyces pombe to effectively support both exploratory and hypothesis-driven research. PomBase encompasses annotation of genomic sequence and features, comprehensive manual literature curation and genome-wide data sets, and supports sophisticated user-defined queries. The implementation of PomBase integrates a Chado relational database that houses manually curated data with Ensembl software that supports sequence-based annotation and web access. PomBase will provide user-friendly tools to promote curation by experts within the fission yeast community. This will make a key contribution to shaping its content and ensuring its comprehensiveness and long-term relevance. PMID:22039153
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Rio, Nicola; Robberto, Massimo, E-mail: ndario@rssd.esa.int
We present the Tool for Astrophysical Data Analysis (TA-DA), a new software tool aimed at greatly simplifying and improving the analysis of stellar photometric data in comparison with theoretical models, and at allowing the derivation of stellar parameters from multi-band photometry. Its flexibility allows one to address a number of such problems: from the interpolation of stellar models, or sets of stellar physical parameters in general, to the computation of synthetic photometry in arbitrary filters or units; from the analysis of observed color-magnitude diagrams to a Bayesian derivation of stellar parameters (and extinction) based on multi-band data. TA-DA is available as a pre-compiled Interactive Data Language widget-based application; its graphical user interface makes it considerably user-friendly. In this paper, we describe the software and its functionalities.
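As a rough illustration of deriving stellar parameters from multi-band photometry, the sketch below performs a brute-force chi-square fit of effective temperature and extinction against a made-up model grid; the model relation, extinction coefficients and all numbers are invented and do not come from TA-DA.

import numpy as np

ext_coeff = np.array([1.32, 1.00, 0.60])   # assumed A_band / A_V for B, V, I

def model_mags(teff, av):
    # Made-up smooth magnitude-temperature relation, standing in for a real grid.
    base = 10.0 - 2.5 * np.log10((teff / 5800.0) ** 4)
    colour = np.array([0.8, 0.0, -0.6]) * (6000.0 - teff) / 3000.0
    return base + colour + av * ext_coeff

def fit(observed, sigma):
    # Brute-force chi-square minimisation over a (Teff, A_V) grid.
    grid = [(t, a) for t in np.arange(3000, 9001, 100) for a in np.arange(0.0, 3.01, 0.1)]
    chi2 = [np.sum(((observed - model_mags(t, a)) / sigma) ** 2) for t, a in grid]
    return grid[int(np.argmin(chi2))]

obs = model_mags(4500.0, 1.2) + np.random.normal(0.0, 0.02, 3)
print(fit(obs, sigma=np.full(3, 0.02)))    # should roughly recover (4500, 1.2)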
Data mining in newt-omics, the repository for omics data from the newt.
Looso, Mario; Braun, Thomas
2015-01-01
Salamanders are an excellent model organism to study regenerative processes due to their unique ability to regenerate lost appendages or organs. Straightforward bioinformatics tools to analyze and take advantage of the growing number of "omics" studies performed in salamanders had so far been lacking. To overcome this limitation, we have generated a comprehensive data repository for the red-spotted newt Notophthalmus viridescens, named newt-omics, merging omics-style datasets at the transcriptome and proteome level, including expression values and annotations. The resource is freely available via a user-friendly Web-based graphical user interface (http://newt-omics.mpi-bn.mpg.de) that allows access and queries to the database without prior bioinformatical expertise. The repository is updated regularly, incorporating newly published datasets from omics technologies.
BepiPred-2.0: improving sequence-based B-cell epitope prediction using conformational epitopes
Jespersen, Martin Closter; Peters, Bjoern
2017-01-01
Antibodies have become an indispensable tool for many biotechnological and clinical applications. They bind their molecular target (antigen) by recognizing a portion of its structure (epitope) in a highly specific manner. The ability to predict epitopes from antigen sequences alone is a complex task. Despite substantial effort, limited advancement has been achieved over the last decade in the accuracy of epitope prediction methods, especially for those that rely on the sequence of the antigen only. Here, we present BepiPred-2.0 (http://www.cbs.dtu.dk/services/BepiPred/), a web server for predicting B-cell epitopes from antigen sequences. BepiPred-2.0 is based on a random forest algorithm trained on epitopes annotated from antibody-antigen protein structures. This new method was found to outperform other available tools for sequence-based epitope prediction both on epitope data derived from solved 3D structures, and on a large collection of linear epitopes downloaded from the IEDB database. The method displays results in a user-friendly and informative way, both for computer-savvy and non-expert users. We believe that BepiPred-2.0 will be a valuable tool for the bioinformatics and immunology community. PMID:28472356
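A minimal sketch of the general approach, training a random forest on per-residue features and returning per-residue epitope probabilities; the features, labels and parameters below are toy values and do not reproduce BepiPred-2.0's actual feature set or training data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for structure-derived training data: one feature vector per
# residue (e.g. hydrophilicity, exposure, sequence-window composition) and a
# binary epitope label.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 8))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] + rng.normal(scale=0.5, size=2000)) > 1.0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Per-residue epitope probability for a new antigen (toy features again).
X_new = rng.normal(size=(120, 8))
epitope_prob = clf.predict_proba(X_new)[:, 1]
print(epitope_prob[:10])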
Falk, L E; Fader, K A; Cui, D S; Totton, S C; Fazil, A M; Lammerding, A M; Smith, B A
2016-10-01
Although infection by the pathogenic bacterium Listeria monocytogenes is relatively rare, consequences can be severe, with a high case-fatality rate in vulnerable populations. A quantitative, probabilistic risk assessment tool was developed to compare estimates of the number of invasive listeriosis cases in vulnerable Canadian subpopulations given consumption of contaminated ready-to-eat delicatessen meats and hot dogs, under various user-defined scenarios. The model incorporates variability and uncertainty through Monte Carlo simulation. Processes considered within the model include cross-contamination, growth, risk factor prevalence, subpopulation susceptibilities, and thermal inactivation. Hypothetical contamination events were simulated. Results demonstrated varying risk depending on the consumer risk factors and implicated product (turkey delicatessen meat without growth inhibitors ranked highest for this scenario). The majority (80%) of listeriosis cases were predicted in at-risk subpopulations comprising only 20% of the total Canadian population, with the greatest number of predicted cases in the subpopulation with dialysis and/or liver disease. This tool can be used to simulate conditions and outcomes under different scenarios, such as a contamination event and/or outbreak, to inform public health interventions.
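A minimal Monte Carlo sketch of this kind of exposure-to-illness calculation, using an exponential dose-response model; every distribution and parameter below is a placeholder and none of the values are taken from the published risk assessment.

import numpy as np

rng = np.random.default_rng(1)
n_iter = 100_000

log10_conc = rng.normal(loc=-1.0, scale=1.0, size=n_iter)   # log10 CFU/g at retail (assumed)
growth = rng.uniform(0.0, 3.0, size=n_iter)                 # log10 growth before consumption
serving_g = rng.triangular(20, 56, 120, size=n_iter)        # serving size in grams (assumed)
dose = 10 ** (log10_conc + growth) * serving_g              # CFU ingested per serving
r = 10 ** rng.normal(-10.5, 0.5, size=n_iter)               # dose-response parameter (assumed)
p_ill = 1.0 - np.exp(-r * dose)                             # exponential dose-response model

servings_per_year = 5e8                                     # assumed exposure volume
print("expected cases per year:", p_ill.mean() * servings_per_year)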
Value Addition to Cartosat-I Imagery
NASA Astrophysics Data System (ADS)
Mohan, M.
2014-11-01
In the sector of remote sensing applications, the use of stereo data is on a steady rise. An attempt is hereby made to develop a software suite specifically for the exploitation of Cartosat-I data. A few algorithms to enhance the quality of basic Cartosat-I products will be presented. The algorithms heavily exploit the Rational Function Coefficients (RPCs) that are associated with the image. The algorithms include improving the geometric positioning through Bundle Block Adjustment and producing refined RPCs; generating portable stereo views using raw / refined RPCs autonomously; orthorectification and mosaicing; and registering a monoscopic image rapidly with a single seed point. The outputs of these modules (including the refined RPCs) are in standard formats for further exploitation in third-party software. The design focus has been on minimizing user interaction and on heavy customization to suit the Indian context. The core libraries are in C/C++ and some of the applications come with a user-friendly GUI. Further customization to suit a specific workflow is feasible as the requisite photogrammetric tools are in place and are continuously upgraded. The paper discusses the algorithms and the design considerations of developing the tools. The value-added products so produced using these tools will also be presented.
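A sketch of evaluating a rational function (RPC) model, i.e. mapping normalised ground coordinates to image line/sample as a ratio of polynomials; the coefficient layout here is simplified relative to the standard 20-term cubic RPC, and all values are toy numbers.

import numpy as np

def rpc_project(lat, lon, h, num_line, den_line, num_samp, den_samp, norm):
    # Normalise the ground coordinates, evaluate the polynomial terms, and
    # return normalised line/sample as ratios of two polynomials each.
    P, L, H = (np.array([lat, lon, h]) - norm["offset"]) / norm["scale"]
    terms = np.array([1.0, L, P, H, L * P, L * H, P * H, L * L, P * P, H * H])
    line = np.dot(num_line, terms) / np.dot(den_line, terms)
    samp = np.dot(num_samp, terms) / np.dot(den_samp, terms)
    return line, samp

# Toy coefficients giving an identity-like mapping, only to show the call pattern.
norm = {"offset": np.zeros(3), "scale": np.ones(3)}
num = np.zeros(10); num[1] = 1.0
den = np.zeros(10); den[0] = 1.0
print(rpc_project(0.1, 0.2, 0.0, num, den, num, den, norm))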
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and shows the software's performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values corroborated well with extensive experimental investigations and were consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for the system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
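For reference, the two fit statistics quoted above can be computed as in the sketch below; the observed/predicted sample values are invented and only illustrate the formulas.

import numpy as np

def willmott_d(observed, predicted):
    # Willmott index of agreement:
    # d = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2)
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    o_mean = observed.mean()
    num = np.sum((predicted - observed) ** 2)
    den = np.sum((np.abs(predicted - o_mean) + np.abs(observed - o_mean)) ** 2)
    return 1.0 - num / den

def relative_error(observed, predicted):
    # Mean relative deviation of predictions from measurements.
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean(np.abs(predicted - observed) / np.abs(observed))

obs = np.array([0.92, 0.88, 0.95, 0.90])    # e.g. measured removal fractions (invented)
pred = np.array([0.90, 0.89, 0.93, 0.91])   # model output (invented)
print(relative_error(obs, pred), willmott_d(obs, pred))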
Erbault, M; Glikman, J; Ravineau, M; Lajzerowicz, N; Terra, J
2003-01-01
Relevant and user-friendly information should be provided to professionals who wish to promote quality improvement in healthcare organisations (HCOs). In response to requests from French HCOs, we designed a compendium of methods and tools for use in quality improvement. Its contents were based on a critical review of the literature, face-to-face interviews with three industrial/business experts in quality, the views of 13 healthcare professionals knowledgeable in quality issues, and comments from over 40 potential users of the compendium. Overall, 14 methods and 20 tools relevant and applicable to the healthcare sector were identified. They were classified according to their main thrust, explained in detail, illustrated with specific cases from the literature or from personal experience, and published as a loose-leaf compendium. The compendium was posted on the World Wide Web and presented to healthcare managers in September 2000. It has become one of the most popular ANAES publications (approximately 5400 downloads over the first 6 months), partly because all French HCOs are legally bound to undergo accreditation, which has been set up and is being implemented by ANAES. PMID:14532370
GDA, a web-based tool for Genomics and Drugs integrated analysis.
Caroli, Jimmy; Sorrentino, Giovanni; Forcato, Mattia; Del Sal, Giannino; Bicciato, Silvio
2018-05-25
Several major screenings of genetic profiling and drug testing in cancer cell lines have proved that the integration of genomic portraits and compound activities is effective in discovering new genetic markers of drug sensitivity and clinically relevant anticancer compounds. Although most genetic and drug response data are publicly available, user-friendly tools for their integrative analysis remain limited, thus hampering an effective exploitation of this information. Here, we present GDA, a web-based tool for Genomics and Drugs integrated Analysis that combines drug response data for >50 800 compounds with mutations and gene expression profiles across 73 cancer cell lines. Genomic and pharmacological data are integrated through a modular architecture that allows users to identify compounds active towards cancer cell lines bearing a specific genomic background and, conversely, the mutational or transcriptional status of cells responding or not responding to a specific compound. Results are presented through intuitive graphical representations and supplemented with information obtained from public repositories. As both personalized targeted therapies and drug repurposing are gaining increasing attention, GDA represents a resource to formulate hypotheses on the interplay between genomic traits and drug response in cancer. GDA is freely available at http://gda.unimore.it/.
BioSPICE: access to the most current computational tools for biologists.
Garvey, Thomas D; Lincoln, Patrick; Pedersen, Charles John; Martin, David; Johnson, Mark
2003-01-01
The goal of the BioSPICE program is to create a framework that provides biologists access to the most current computational tools. At the program midpoint, the BioSPICE member community has produced a software system that comprises contributions from approximately 20 participating laboratories integrated under the BioSPICE Dashboard and a methodology for continued software integration. These contributed software modules are accessed through the BioSPICE Dashboard, a graphical environment that combines Open Agent Architecture and NetBeans software technologies in a coherent, biologist-friendly user interface. The current Dashboard permits data sources, models, simulation engines, and output displays provided by different investigators and running on different machines to work together across a distributed, heterogeneous network. Among several other features, the Dashboard enables users to create graphical workflows by configuring and connecting available BioSPICE components. Anticipated future enhancements to BioSPICE include a notebook capability that will permit researchers to browse and compile data to support model building, a biological model repository, and tools to support the development, control, and data reduction of wet-lab experiments. In addition to the BioSPICE software products, a project website supports information exchange and community building.
Workstation-Based Simulation for Rapid Prototyping and Piloted Evaluation of Control System Designs
NASA Technical Reports Server (NTRS)
Mansur, M. Hossein; Colbourne, Jason D.; Chang, Yu-Kuang; Aiken, Edwin W. (Technical Monitor)
1998-01-01
The development and optimization of flight control systems for modern fixed- and rotary-wing aircraft consume a significant portion of the overall time and cost of aircraft development. Substantial savings can be achieved by reducing the time and cost required to develop and flight-test the control system. To bring about such reductions, software tools such as Matlab/Simulink are being used to readily implement block diagrams and rapidly evaluate the expected responses of the completed system. Moreover, tools such as CONDUIT (CONtrol Designer's Unified InTerface) have been developed that enable controls engineers to optimize their control laws and ensure that all the relevant quantitative criteria are satisfied, all within a fully interactive, user-friendly, unified software environment.
TabPath: interactive tables for metabolic pathway analysis.
Moraes, Lauro Ângelo Gonçalves de; Felestrino, Érica Barbosa; Assis, Renata de Almeida Barbosa; Matos, Diogo; Lima, Joubert de Castro; Lima, Leandro de Araújo; Almeida, Nalvo Franco; Setubal, João Carlos; Garcia, Camila Carrião Machado; Moreira, Leandro Marcio
2018-03-15
Information about metabolic pathways in a comparative context is one of the most powerful tools for understanding genome-based differences in phenotypes among organisms. Although several platforms exist that provide a wealth of information on the metabolic pathways of diverse organisms, comparison among organisms using metabolic pathways is still a difficult task. We present TabPath (Tables for Metabolic Pathway), a web-based tool to facilitate the comparison of metabolic pathways in genomes based on KEGG. From a selection of pathways and genomes of interest on the menu, TabPath generates user-friendly tables that facilitate analysis of variations in metabolism among the selected organisms. TabPath is available at http://200.239.132.160:8686. lmmorei@gmail.com.
AMIDE: a free software tool for multimodality medical image analysis.
Loening, Andreas Markus; Gambhir, Sanjiv Sam
2003-07-01
Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
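A sketch of on-demand reslicing of a volumetric data set with interpolation, using SciPy; this is an independent illustration of the idea, not AMIDE's implementation, and the rotation and volume are arbitrary.

import numpy as np
from scipy.ndimage import affine_transform

def reslice(volume, rotation, order=1):
    # Resample a 3D volume on a grid rotated about its centre; interpolation
    # is performed on demand from the original data.
    centre = (np.array(volume.shape) - 1) / 2.0
    offset = centre - rotation @ centre
    return affine_transform(volume, rotation, offset=offset, order=order)

theta = np.deg2rad(30)
rot = np.array([[1, 0, 0],
                [0, np.cos(theta), -np.sin(theta)],
                [0, np.sin(theta),  np.cos(theta)]])
vol = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in for a PET/CT/MRI volume
print(reslice(vol, rot).shape)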
Constructing and Modifying Sequence Statistics for relevent Using informR in 𝖱
Marcum, Christopher Steven; Butts, Carter T.
2015-01-01
The informR package greatly simplifies the analysis of complex event histories in 𝖱 by providing user friendly tools to build sufficient statistics for the relevent package. Historically, building sufficient statistics to model event sequences (of the form a→b) using the egocentric generalization of Butts’ (2008) relational event framework for modeling social action has been cumbersome. The informR package simplifies the construction of the complex list of arrays needed by the rem() model fitting for a variety of cases involving egocentric event data, multiple event types, and/or support constraints. This paper introduces these tools using examples from real data extracted from the American Time Use Survey. PMID:26185488
Kwf-Grid workflow management system for Earth science applications
NASA Astrophysics Data System (ADS)
Tran, V.; Hluchy, L.
2009-04-01
In this paper, we present a workflow management tool for Earth science applications in EGEE. The workflow management tool was originally developed within the K-wf Grid project for GT4 middleware and has many advanced features, such as semi-automatic workflow composition, a user-friendly GUI for managing workflows, and knowledge management. In EGEE, we are porting the workflow management tool to gLite middleware for Earth science applications. The K-wf Grid workflow management system was developed within "Knowledge-based Workflow System for Grid Applications" under the 6th Framework Programme. The workflow management system was intended to: semi-automatically compose a workflow of Grid services; execute the composed workflow application in a Grid computing environment; monitor the performance of the Grid infrastructure and the Grid applications; analyze the resulting monitoring information; capture the knowledge contained in that information by means of intelligent agents; and finally reuse the joined knowledge gathered from all participating users in a collaborative way in order to efficiently construct workflows for new Grid applications. K-wf Grid workflow engines can support different types of jobs (e.g. GRAM jobs, web services) in a workflow. A new class of gLite job has been added to the system, allowing it to manage and execute gLite jobs in the EGEE infrastructure. The GUI has been adapted to the requirements of EGEE users, and a new credential management servlet has been added to the portal. Porting the K-wf Grid workflow management system to gLite will allow EGEE users to use the system and benefit from its advanced features. The system has been initially tested and evaluated with applications from ES clusters.
NASA Astrophysics Data System (ADS)
El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.
2006-11-01
Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.
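A minimal sketch of fitting a logistic dose-response (NTCP-style) model to binary outcome data; the patient data are simulated, the single covariate is mean dose, and this is not DREES code.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated outcomes: mean dose (Gy) and a binary complication flag per patient.
rng = np.random.default_rng(2)
mean_dose = rng.uniform(5, 40, size=200)
p_true = 1.0 / (1.0 + np.exp(-(mean_dose - 25.0) / 4.0))
complication = rng.random(200) < p_true

model = LogisticRegression().fit(mean_dose.reshape(-1, 1), complication)
d50 = -model.intercept_[0] / model.coef_[0, 0]   # dose at which modelled NTCP = 0.5
print("fitted D50 (Gy):", round(float(d50), 1))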
For Professors, "Friending" Can Be Fraught
ERIC Educational Resources Information Center
Lipka, Sara
2007-01-01
People connect on Facebook by asking to "friend" one another. A typical user lists at least 100 such connections, while newbies are informed, "You don't have any friends yet." A humbling statement. It might make one want to find some. But friending students can be even dicier than befriending them. In the real world, casual professors may ask…
MODSNOW-Tool: an operational tool for daily snow cover monitoring using MODIS data
NASA Astrophysics Data System (ADS)
Gafurov, Abror; Lüdtke, Stefan; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Schöne, Tilo; Schmidt, Sebastian; Kalashnikova, Olga; Merz, Bruno
2017-04-01
Spatially distributed snow cover information in mountain areas is extremely important for water storage estimation, seasonal water availability forecasting, or the assessment of snow-related hazards (e.g. enhanced snowmelt following intensive rains, or avalanche events). Moreover, spatially distributed snow cover information can be used to calibrate and/or validate hydrological models. We present the MODSNOW-Tool, an operational monitoring tool that offers a user-friendly application for catchment-based operational snow cover monitoring. The application automatically downloads and processes freely available daily Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover data. The MODSNOW-Tool uses a step-wise approach for cloud removal and delivers cloud-free snow cover maps for the selected river basins, including basin-specific snow cover extent statistics. The accuracy of cloud-eliminated MODSNOW snow cover maps was validated for 84 almost cloud-free days in the Karadarya river basin in Central Asia, and an average accuracy of 94 % was achieved. The MODSNOW-Tool can be used in operational and non-operational mode. In the operational mode, the tool is set up as a scheduled task on a local computer, allowing automatic execution without user interaction, and delivers snow cover maps on a daily basis. In the non-operational mode, the tool can be used to process historical time series of snow cover maps. The MODSNOW-Tool is currently implemented and in use at the national hydrometeorological services of four Central Asian states (Kazakhstan, Kyrgyzstan, Uzbekistan and Turkmenistan), where it is used for seasonal water availability forecasting.
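One step of such a step-wise cloud-removal scheme can be sketched as a temporal fill that replaces cloudy pixels with the most recent cloud-free observation at the same pixel; the class codes and maps below are toy values, and the MODSNOW-Tool combines several additional steps.

import numpy as np

CLOUD, SNOW, LAND = -1, 1, 0

def temporal_cloud_fill(maps):
    # maps has shape (n_days, rows, cols); fill each day's cloudy pixels
    # from the previous day where a cloud-free value exists.
    filled = maps.copy()
    for day in range(1, filled.shape[0]):
        cloudy = filled[day] == CLOUD
        filled[day][cloudy] = filled[day - 1][cloudy]
    return filled

days = np.array([[[SNOW, CLOUD], [LAND, SNOW]],
                 [[CLOUD, CLOUD], [LAND, CLOUD]]])
print(temporal_cloud_fill(days)[1])   # yesterday's values fill today's clouds where possible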
Khachane, Amit; Kumar, Ranjit; Jain, Sanyam; Jain, Samta; Banumathy, Gowrishankar; Singh, Varsha; Nagpal, Saurabh; Tatu, Utpal
2005-01-01
Bioinformatics tools to aid gene and protein sequence analysis have become an integral part of biology in the post-genomic era. Release of the Plasmodium falciparum genome sequence has allowed biologists to define the gene and the predicted protein content as well as their sequences in the parasite. Using pI and molecular weight as characteristics unique to each protein, we have developed a bioinformatics tool to aid identification of proteins from Plasmodium falciparum. The tool makes use of a Virtual 2-DE generated by plotting all of the proteins from the Plasmodium database on a pI versus molecular weight scale. Proteins are identified by comparing the position of migration of desired protein spots from an experimental 2-DE and that on a virtual 2-DE. The procedure has been automated in the form of user-friendly software called "Plasmo2D". The tool can be downloaded from http://144.16.89.25/Plasmo2D.zip.
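The virtual 2-DE idea, placing each protein at its theoretical pI and molecular weight, can be sketched with Biopython's ProtParam module (assuming Biopython is installed); the sequences below are made up and this is not Plasmo2D's code.

from Bio.SeqUtils.ProtParam import ProteinAnalysis

def virtual_spot(sequence):
    # Theoretical pI and molecular weight: the two axes of a virtual 2-DE gel.
    analysis = ProteinAnalysis(sequence)
    return analysis.isoelectric_point(), analysis.molecular_weight()

proteome = {
    "hypothetical_protein_A": "MKKLLVAAGLLALSACSSNAKIDE",
    "hypothetical_protein_B": "MSTEEDRRHHKKPPAWWYYFFLLM",
}
for name, seq in proteome.items():
    pi, mw = virtual_spot(seq)
    print(name, "pI =", round(pi, 2), "MW =", round(mw / 1000.0, 1), "kDa")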
ERIC Educational Resources Information Center
Porter, Lon A., Jr.; Chapman, Cole A.; Alaniz, Jacob A.
2017-01-01
In this work, a versatile and user-friendly selection of stereolithography (STL) files and computer-aided design (CAD) models are shared to assist educators and students in the production of simple and inexpensive 3D printed filter fluorometer instruments. These devices are effective resources for supporting active learners in the exploration of…
Jabłoński, Michał; Starčuková, Jana; Starčuk, Zenon
2017-01-23
Proton magnetic resonance spectroscopy is a non-invasive measurement technique which provides information about the concentrations of up to 20 metabolites participating in intracellular biochemical processes. In order to obtain metabolic information from measured spectra, processing must be done in specialized software such as jMRUI. The processing is interactive and complex and often requires many trials before a correct result is obtained. This paper proposes a jMRUI enhancement for efficient and unambiguous history tracking and file identification. A database storing all processing steps, parameters and files used in processing was developed for jMRUI. The solution was developed in Java; the authors used an SQL database for robust storage of parameters and SHA-256 hash codes for unambiguous file identification. The developed system was integrated directly into jMRUI and will be publicly available. A graphical user interface was implemented in order to make the user experience more comfortable. The database operation is invisible from the point of view of the common user; all tracking operations are performed in the background. The implemented jMRUI database is a tool that can significantly help the user to track the processing history performed on data in jMRUI. The created tool is designed to be user-friendly, robust and easy to use. The database GUI allows the user to browse the whole processing history of a selected file and to learn, e.g., what processing led to the results, where the original data are stored, and to obtain the list of all processing actions performed on spectra.
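The combination of SHA-256 content hashing and an SQL history table can be sketched as follows; the table layout and function names are simplified stand-ins, not the actual jMRUI database schema.

import hashlib
import sqlite3
import time

def sha256_of(path, chunk=1 << 20):
    # Content hash that identifies a data file regardless of its name or location.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

db = sqlite3.connect("history.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS history
              (file_hash TEXT, action TEXT, parameters TEXT, timestamp REAL)""")

def record(path, action, parameters):
    # Store one processing step against the hash of the file it was applied to.
    db.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
               (sha256_of(path), action, parameters, time.time()))
    db.commit()

# record("spectrum.mrui", "apodization", "lorentzian, 5 Hz")   # example call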
Web-based hydrodynamics computing
NASA Astrophysics Data System (ADS)
Shimoide, Alan; Lin, Luping; Hong, Tracie-Lynne; Yoon, Ilmi; Aragon, Sergio R.
2005-01-01
Proteins are long chains of amino acids that have a definite 3-d conformation, and the shape of each protein is vital to its function. Since proteins are normally in solution, hydrodynamics (which describes the movement of solvent around a protein as a function of the shape and size of the molecule) can be used to probe the size and shape of proteins compared to those derived from X-ray crystallography. The computation chain needed for these hydrodynamics calculations consists of several separate programs by different authors on various platforms and often requires 3D visualizations of intermediate results. Due to this complexity, tools developed by a particular research group are not readily available for use by other groups, nor even by the non-experts within the same research group. To alleviate this situation, and to foster the easy and wide distribution of computational tools worldwide, we developed a web-based interactive computational environment (WICE), including interactive 3D visualization, that can be used with any web browser. Java-based technologies were used to provide a platform-neutral, user-friendly solution. Java Server Pages (JSP), Java Servlets, Java Beans, JOGL (Java bindings for OpenGL), and Java Web Start were used to create a solution that simplifies the computing chain for the user, allowing the user to focus on their scientific research. WICE hides complexity from the user and provides robust and sophisticated visualization through a web browser.
Inferring Tie Strength from Online Directed Behavior
Jones, Jason J.; Settle, Jaime E.; Bond, Robert M.; Fariss, Christopher J.; Marlow, Cameron; Fowler, James H.
2013-01-01
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user’s friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions). PMID:23300964
Steiner, Andreas; Hella, Jerry; Grüninger, Servan; Mhalu, Grace; Mhimbira, Francis; Cercamondi, Colin I; Doulla, Basra; Maire, Nicolas; Fenner, Lukas
2016-09-01
A software tool is developed to facilitate data entry and to monitor research projects in under-resourced countries in real-time. The eManagement tool "odk_planner" is written in the scripting languages PHP and Python. The odk_planner is lightweight and uses minimal internet resources. It was designed to be used with the open source software Open Data Kit (ODK). The users can easily configure odk_planner to meet their needs, and the online interface displays data collected from ODK forms in a graphically informative way. The odk_planner also allows users to upload pictures and laboratory results and sends text messages automatically. User-defined access rights protect data and privacy. We present examples from four field applications in Tanzania successfully using the eManagement tool: 1) clinical trial; 2) longitudinal Tuberculosis (TB) Cohort Study with a complex visit schedule, where it was used to graphically display missing case report forms, upload digitalized X-rays, and send text message reminders to patients; 3) intervention study to improve TB case detection, carried out at pharmacies: a tablet-based electronic referral system monitored referred patients, and sent automated messages to remind pharmacy clients to visit a TB Clinic; and 4) TB retreatment case monitoring designed to improve drug resistance surveillance: clinicians at four public TB clinics and lab technicians at the TB reference laboratory used a smartphone-based application that tracked sputum samples, and collected clinical and laboratory data. The user friendly, open source odk_planner is a simple, but multi-functional, Web-based eManagement tool with add-ons that helps researchers conduct studies in under-resourced countries. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage today relies on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software tools for automatic camera calibration, often based on simple 2D chess-board patterns, are an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and to verify object planarity. Results from practical experimentation indicate that this method can produce satisfactory results. The authors intend to incorporate the described approach into their freely available, user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
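The pairwise step, SIFT interest-point matching followed by RANSAC homography estimation, can be sketched with OpenCV; the file names, ratio test and reprojection threshold are examples, and this is not the authors' implementation.

import cv2
import numpy as np

def pairwise_homography(img1, img2, ratio=0.75):
    # Detect and match SIFT features, then estimate the inter-image
    # homography robustly with RANSAC.
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inliers.sum())

# H, n = pairwise_homography(cv2.imread("wall_01.jpg", 0), cv2.imread("wall_02.jpg", 0))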
BMPOS: a Flexible and User-Friendly Tool Sets for Microbiome Studies.
Pylro, Victor S; Morais, Daniel K; de Oliveira, Francislon S; Dos Santos, Fausto G; Lemos, Leandro N; Oliveira, Guilherme; Roesch, Luiz F W
2016-08-01
Recent advances in science and technology are leading to a revision and re-orientation of methodologies, addressing old and current issues from a new perspective. Advances in next-generation sequencing (NGS) are allowing comparative analysis of the abundance and diversity of whole microbial communities, generating a large amount of data and findings at a systems level. The current limitation for biologists has been the increasing demand for computational power and training required for processing NGS data. Here, we describe the deployment of the Brazilian Microbiome Project Operating System (BMPOS), a flexible and user-friendly Linux distribution dedicated to microbiome studies. The Brazilian Microbiome Project (BMP) has developed data analysis pipelines for metagenomic studies (phylogenetic marker genes), conducted using the two main high-throughput sequencing platforms (Ion Torrent and Illumina MiSeq). The BMPOS is freely available and includes all the bioinformatics packages and databases required to run all the pipelines suggested by the BMP team. The BMPOS may be used as a bootable live USB stick or installed on any computer with at least a 1 GHz CPU and 512 MB RAM, independent of the operating system previously installed. The BMPOS has proved to be effective for sequence processing, sequence clustering, alignment, taxonomic annotation, statistical analysis, and plotting of metagenomic data. The BMPOS has been used during several metagenomic analysis courses, proving valuable as a training tool and an excellent starting point for anyone interested in performing metagenomic studies. The BMPOS and its documentation are available at http://www.brmicrobiome.org.
Chemozart: a web-based 3D molecular structure editor and visualizer platform.
Mohebifar, Mohamad; Sajadi, Fatemehsadat
2015-01-01
Chemozart is a 3D molecule editor and visualizer built on top of native web components. It offers an easy-to-access service, a user-friendly graphical interface and a modular design. It is a client-centric web application that communicates with the server via a REST-style (representational state transfer) web service. Both the client-side and server-side applications are written in JavaScript. A combination of JavaScript and HTML is used to draw three-dimensional structures of molecules. With the help of WebGL, a three-dimensional visualization tool is provided. Using CSS3 and HTML5, a user-friendly interface is composed. More than 30 packages are used to compose this application, which makes it flexible enough to be extended. Molecular structures can be drawn on all types of platforms, and the application is compatible with mobile devices. No installation is required in order to use this application, and it can be accessed through the internet. The application can be extended on both the server side and the client side by implementing modules in JavaScript. Molecular compounds are drawn on the HTML5 Canvas element using the WebGL context. Chemozart is a chemical platform which is powerful, flexible, and easy to access. It provides an online, web-based tool for chemical visualization along with result-oriented optimization for a cloud-based API (application programming interface). JavaScript libraries that allow the creation of web pages containing interactive three-dimensional molecular structures have also been made available. The application has been released under the Apache 2 License and is available from the project website https://chemozart.com.