Sample records for "analysis tool named"

  1. An Integrated Approach to Risk Assessment for Concurrent Design

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Voss, Luke; Feather, Martin; Cornford, Steve

    2005-01-01

    This paper describes an approach to risk assessment and analysis suited to the early-phase, concurrent design of a space mission. The approach integrates an agile, multi-user risk collection tool, a more in-depth risk analysis tool, and repositories of risk information. A JPL-developed tool, named RAP, is used for collecting expert opinions about risk from designers involved in the concurrent design of a space mission. Another in-house-developed risk assessment tool, named DDP, is used for the analysis.

  2. High Frequency Scattering Code in a Distributed Processing Environment

    DTIC Science & Technology

    1991-06-01

    …use of automated analysis tools is indicated. One tool developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation for… XQ: EXECUTE CODE; EN: END CODE. This input deck differs from that in the manual because the "PP" option is disabled in the modified code.

  3. Development of a User Interface for a Regression Analysis Software Tool

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    An easy-to-use user interface was implemented in a highly automated regression analysis tool. The user interface was developed from the start to run on computers that use the Windows, Macintosh, Linux, or UNIX operating system. Many user interface features were specifically designed such that a novice or inexperienced user can apply the regression analysis tool with confidence. Therefore, the user interface's design minimizes interactive input from the user. In addition, reasonable default combinations are assigned to those analysis settings that influence the outcome of the regression analysis. These default combinations will lead to a successful regression analysis result for most experimental data sets. The user interface comes in two versions. The text user interface version is used for the ongoing development of the regression analysis tool. The official release of the regression analysis tool, on the other hand, has a graphical user interface that is more efficient to use. This graphical user interface displays all input file names, output file names, and analysis settings for a specific software application mode on a single screen, which makes it easier to generate reliable analysis results and to perform input parameter studies. An object-oriented approach was used for the development of the graphical user interface. This choice keeps future software maintenance costs to a reasonable limit. Examples of both the text user interface and graphical user interface are discussed in order to illustrate the user interface's overall design approach.

  4. Case and Administrative Support Tools

    EPA Pesticide Factsheets

    Case and Administrative Support Tools (CAST) is the secure portion of the Office of General Counsel (OGC) Dashboard business process automation tool used to help reduce office administrative labor costs while increasing employee effectiveness. CAST supports business functions which rely on and store Privacy Act sensitive data (PII). Specific business processes included in CAST (and respective PII) are: -Civil Rights Case Tracking (name, partial medical history, summary of case, and case correspondence). -Employment Law Case Tracking (name, summary of case). -Federal Tort Claims Act Incident Tracking (name, summary of incidents). -Ethics Program Support Tools and Tracking (name, partial financial history). -Summer Honors Application Tracking (name, home address, telephone number, employment history). -Workforce Flexibility Initiative Support Tools (name, alternative workplace phone number). -Resource and Personnel Management Support Tools (name, partial employment and financial history).

  5. Classification Algorithms for Big Data Analysis, a Map Reduce Approach

    NASA Astrophysics Data System (ADS)

    Ayma, V. A.; Ferreira, R. S.; Happ, P.; Oliveira, D.; Feitosa, R.; Costa, G.; Plaza, A.; Gamba, P.

    2015-03-01

    For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data generated every day by remote sensors raises further challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
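
    The map-then-reduce split described above can be illustrated with a small, hedged sketch: plain Python stands in for Hadoop, and scikit-learn's SVC stands in for WEKA's SVM, so none of the names below come from the actual ICP: Data Mining Package. A trained classifier labels partitions of a large data set independently (the "map" step) and the partial results are concatenated (the "reduce" step).

```python
# Hedged sketch: a local stand-in for the Hadoop MapReduce classification flow
# described in the abstract (not the actual ICP: Data Mining Package).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Train an SVM on a labelled sample (the model-building step).
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=0)
model = SVC(kernel="rbf").fit(X_train, y_train)

def map_classify(partition):
    # "Map" step: label one partition; on a real cluster each call
    # would run on a separate node against one HDFS split.
    return model.predict(partition)

X_big, _ = make_classification(n_samples=10_000, n_features=10, random_state=1)
partitions = np.array_split(X_big, 8)            # stand-in for HDFS splits
mapped = [map_classify(p) for p in partitions]

labels = np.concatenate(mapped)                  # "Reduce" step: merge partial results
print(labels.shape)                              # -> (10000,)
```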

  6. Survey of Human Systems Integration (HSI) Tools for USCG Acquisitions

    DTIC Science & Technology

    2009-04-01

    …an IMPRINT HPM. IMPRINT uses task network modeling to represent human performance. As the name implies, task networks use a flowchart-type format… …tools; and built-in tutoring support for beginners. A perceptual/motor layer extending ACT-R's theory of cognition to perception and action is also… Information and Functional Flow Analysis: in information flow analysis, a flowchart of the information and decisions…

  7. Deliberate teaching tools for clinical teaching encounters: A critical scoping review and thematic analysis to establish definitional clarity.

    PubMed

    Sidhu, Navdeep S; Edwards, Morgan

    2018-04-27

    We conducted a scoping review of tools designed to add structure to clinical teaching, with a thematic analysis to establish definitional clarity. Six thousand and forty-nine citations were screened, 434 reviewed for eligibility, and 230 identified as meeting study inclusion criteria. Eighty-nine names and 51 definitions were identified. Based on a post facto thematic analysis, we propose that these tools be named "deliberate teaching tools" (DTTs) and defined as "frameworks that enable clinicians to have a purposeful and considered approach to teaching encounters by incorporating elements identified with good teaching practice." We identified 46 DTTs in the literature, with 38 (82.6%) originally described for the medical setting. Forty justification articles consisted of 16 feedback surveys, 13 controlled trials, seven pre-post intervention studies with no control group, and four observation studies. Current evidence of efficacy is not entirely conclusive, and many studies contain methodology flaws. Forty-nine clarification articles comprised 12 systematic reviews and 37 narrative reviews. The largest number of DTTs described by any single review was four. A common design theme was identified in approximately three-quarters of DTTs. Applicability of DTTs to specific alternate settings should be considered in context, and appropriately designed justification studies are warranted to demonstrate efficacy.

  8. PANDA-view: An easy-to-use tool for statistical analysis and visualization of quantitative proteomics data.

    PubMed

    Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping

    2018-05-22

    Compared with the numerous software tools developed for identification and quantification of -omics data, there remains a lack of suitable tools for both downstream analysis and data visualization. To help researchers better understand the biological meanings in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various kinds of analysis methods such as normalization, missing value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly-used data visualization methods including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. Contact: 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.
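
    As a rough illustration of the volcano-plot analysis mentioned above (a hedged sketch, not PANDA-view's own code; the data, replicate counts and thresholds are made up), per-protein log2 fold changes are plotted against -log10 t-test p-values:

```python
# Hedged sketch of a volcano plot for two-group quantitative proteomics data;
# the data and thresholds are illustrative, not PANDA-view defaults.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
control = rng.lognormal(mean=10, sigma=0.3, size=(1000, 3))   # 1000 proteins, 3 replicates
treated = rng.lognormal(mean=10, sigma=0.3, size=(1000, 3))

log2_fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))
_, pvals = stats.ttest_ind(np.log2(treated), np.log2(control), axis=1)

significant = (pvals < 0.05) & (np.abs(log2_fc) > 1)
plt.scatter(log2_fc, -np.log10(pvals), s=5, c=significant, cmap="coolwarm")
plt.xlabel("log2 fold change (treated / control)")
plt.ylabel("-log10 p-value")
plt.show()
```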

  9. Design and application of a tool for structuring, capitalizing and making more accessible information and lessons learned from accidents involving machinery.

    PubMed

    Sadeghi, Samira; Sadeghi, Leyla; Tricot, Nicolas; Mathieu, Luc

    2017-12-01

    Accident reports are published in order to communicate the information and lessons learned from accidents. An efficient accident recording and analysis system is a necessary step towards improvement of safety. However, currently there is a shortage of efficient tools to support such recording and analysis. In this study we introduce a flexible and customizable tool that allows structuring and analysis of this information. This tool has been implemented under TEEXMA®. We named our prototype TEEXMA®SAFETY. This tool provides an information management system to facilitate data collection, organization, query, analysis and reporting of accidents. A predefined information retrieval module provides ready access to data which allows the user to quickly identify the possible hazards for specific machines and provides information on the source of hazards. The main target audience for this tool includes safety personnel, accident reporters and designers. The proposed data model has been developed by analyzing different accident reports.

  10. Decision Support Methods and Tools

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Alexandrov, Natalia M.; Brown, Sherilyn A.; Cerro, Jeffrey A.; Gumbert, Clyde R.; Sorokach, Michael R.; Burg, Cecile M.

    2006-01-01

    This paper is one of a set of papers, developed simultaneously and presented within a single conference session, that are intended to highlight systems analysis and design capabilities within the Systems Analysis and Concepts Directorate (SACD) of the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). This paper focuses on the specific capabilities of uncertainty/risk analysis, quantification, propagation, decomposition, and management, robust/reliability design methods, and extensions of these capabilities into decision analysis methods within SACD. These disciplines are discussed together herein under the name of Decision Support Methods and Tools. Several examples are discussed which highlight the application of these methods within current or recent aerospace research at the NASA LaRC. Where applicable, commercially available or government-developed software tools are also discussed.

  11. Applications of Graph-Theoretic Tests to Online Change Detection

    DTIC Science & Technology

    2014-05-09

    …assessment, crime investigation, and environmental field analysis. Our work offers a new tool for change detection that can be employed in real time in very… …this paper such MSTs and bipartite matchings. Ruth (2009) reports run times for MNBM ensembles created using Derigs' (1998) algorithm on the order of…

  12. Data Use Disclaimer Agreement | Energy Analysis | NREL

    Science.gov Websites

    …Access to and use of this Tool shall impose the following obligations on the user, as set forth in this Agreement. The user is granted the right, without any fee or cost, to use, copy, modify, alter, enhance and … the use of this Tool. The names DOE/NREL/ALLIANCE, however, may not be used in any advertising or…

  13. AMD NOX REDUCTION IMPACTS

    EPA Science Inventory

    This is the first phase of a potentially multi-phase project aimed at identifying scientific methodologies that will lead to the development of innovative analytical tools supporting the analysis of control strategy effectiveness, namely, accountability. Significant reductions i...

  14. Grid Stability Awareness System (GSAS) Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuerborn, Scott; Ma, Jian; Black, Clifton

    The project team developed a software suite named Grid Stability Awareness System (GSAS) for power system near real-time stability monitoring and analysis based on synchrophasor measurement. The software suite consists of five analytical tools: an oscillation monitoring tool, a voltage stability monitoring tool, a transient instability monitoring tool, an angle difference monitoring tool, and an event detection tool. These tools have been integrated into one framework to provide power grid operators with real-time or near real-time stability status of a power grid as well as historical information about system stability status. These tools are being considered for real-time use in the operation environment.

  15. On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Ibraheem, S. O.; Demuren, A. O.

    1994-01-01

    A procedure is presented for utilizing a bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving Euler and Navier-Stokes equations. Model problems based on the convection, diffusion and Burgers' equations are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.
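
    For readers unfamiliar with the smoothing factor referred to above, a minimal hedged sketch shows how the conventional single-grid von Neumann quantity is computed for damped Jacobi on the 1-D Poisson model problem; this is only an illustration of the concept, not the paper's 3-D factorizations or its bi-grid analysis.

```python
# Hedged sketch: von Neumann smoothing factor of damped Jacobi for the 1-D
# Poisson model problem (an illustration of the concept only).
import numpy as np

def smoothing_factor(omega, n_theta=10_000):
    # High-frequency Fourier modes are those with |theta| in [pi/2, pi].
    theta = np.linspace(np.pi / 2, np.pi, n_theta)
    g = 1.0 - omega * (1.0 - np.cos(theta))   # amplification factor of damped Jacobi
    return np.max(np.abs(g))

print(smoothing_factor(2.0 / 3.0))   # ~0.33, the classical 1-D optimum
print(smoothing_factor(1.0))         # ~1.0, undamped Jacobi does not smooth
```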

  16. Topological and Geometric Tools for the Analysis of Complex Networks

    DTIC Science & Technology

    2013-10-01

    Contract FA9550-09-1-0090. Authors: Ali Jadbabaie (Penn), Shing-Tung Yau (Harvard), Fan Chung… Performing organization: University of Pennsylvania. Sponsoring agency: Air Force Office of Scientific Research.

  17. Zeus++ - A GUI-Based Flowfield Analysis Tool, Version 1.0, User’s Manual

    DTIC Science & Technology

    1999-02-01

    …Tecplot v7.0 Plotting Package, Amtec Engineering, 1998… Report number NSWCDD/TR-98/147… Von Karman ogive parameters; Haack series nose parameters; power series nose parameters; miscellaneous options…

  18. A Taiwanese Mandarin Main Concept Analysis (TM-MCA) for Quantification of Aphasic Oral Discourse

    ERIC Educational Resources Information Center

    Kong, Anthony Pak-Hin; Yeh, Chun-Chih

    2015-01-01

    Background: Various quantitative systems have been proposed to examine aphasic oral narratives in English. A clinical tool for assessing discourse produced by Cantonese-speaking persons with aphasia (PWA), namely Main Concept Analysis (MCA), was developed recently for quantifying the presence, accuracy and completeness of a narrative. Similar…

  19. OpenPrescribing: normalised data and software tool to research trends in English NHS primary care prescribing 1998-2016.

    PubMed

    Curtis, Helen J; Goldacre, Ben

    2018-02-23

    We aimed to compile and normalise England's national prescribing data for 1998-2016 to facilitate research on long-term time trends and create an open-data exploration tool for wider use. We compiled data from each individual year's national statistical publications and normalised them by mapping each drug to its current classification within the national formulary where possible. We created a freely accessible, interactive web tool to allow anyone to interact with the processed data. We downloaded all available annual prescription cost analysis datasets, which include cost and quantity for all prescription items dispensed in the community in England. Medical devices and appliances were excluded. We measured the extent of normalisation of data and aimed to produce a functioning accessible analysis tool. All data were imported successfully. 87.5% of drugs were matched exactly on name to the current formulary and a further 6.5% to similar drug names. All drugs in core clinical chapters were reconciled to their current location in the data schema, with only 1.26% of drugs not assigned a current chemical code. We created an openly accessible interactive tool to facilitate wider use of these data. Publicly available data can be made accessible through interactive online tools to help researchers and policy-makers explore time trends in prescribing. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
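
    The name-matching step described above (87.5% exact matches, a further 6.5% to similar names) can be sketched with standard-library fuzzy matching; the drug names, codes, cutoff and helper function below are illustrative assumptions, not the OpenPrescribing implementation.

```python
# Hedged sketch: map historical drug names to a current formulary, first by
# exact match and then by closest-string match (names and codes are invented).
import difflib

current_formulary = {"Amoxicillin": "0501013B0",      # hypothetical BNF-style codes
                     "Atorvastatin": "0212000B0",
                     "Salbutamol": "0301011R0"}

def resolve(name, cutoff=0.8):
    if name in current_formulary:                      # exact match
        return name, current_formulary[name], "exact"
    close = difflib.get_close_matches(name, list(current_formulary), n=1, cutoff=cutoff)
    if close:                                          # similar-name match
        return close[0], current_formulary[close[0]], "fuzzy"
    return None, None, "unmatched"

for historical in ["Amoxicillin", "Amoxycillin", "Co-proxamol"]:
    print(historical, "->", resolve(historical))
```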

  20. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4.

    PubMed

    Schober, Daniel; Tudose, Ilinca; Svatek, Vojtech; Boeker, Martin

    2012-09-21

    Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance towards ontology naming conventions and metadata completeness, as well as curation in case of found violations. In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin could be refined, also by the integration of new functionalities. The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers.
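
    A minimal sketch of the kind of check such a plugin performs, in illustrative Python rather than the actual Protégé/Java implementation; the ontology entries and the two rules below are assumptions, not OntoCheck's rule set.

```python
# Hedged sketch of a naming-convention and metadata-completeness check over
# class labels; the ontology entries and both rules are illustrative only.
import re

classes = {
    "heart valve":  {"definition": "A valve of the heart."},
    "Heart Valves": {"definition": ""},    # violates both rules below
    "aorta":        {},                    # missing a definition annotation
}

label_pattern = re.compile(r"^[a-z][a-z0-9 \-]*[^s]$")   # crude lower-case, singular check

for label, meta in classes.items():
    problems = []
    if not label_pattern.match(label):
        problems.append("label violates naming convention")
    if not meta.get("definition"):
        problems.append("missing definition annotation")
    if problems:
        print(f"{label!r}: " + "; ".join(problems))
```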

  1. A Standardized Reference Data Set for Vertebrate Taxon Name Resolution

    PubMed Central

    Zermoglio, Paula F.; Guralnick, Robert P.; Wieczorek, John R.

    2016-01-01

    Taxonomic names associated with digitized biocollections labels have flooded into repositories such as GBIF, iDigBio and VertNet. The names on these labels are often misspelled, out of date, or present other problems, as they were often captured only once during accessioning of specimens, or have a history of label changes without clear provenance. Before records are reliably usable in research, it is critical that these issues be addressed. However, still missing is an assessment of the scope of the problem, the effort needed to solve it, and a way to improve effectiveness of tools developed to aid the process. We present a carefully human-vetted analysis of 1000 verbatim scientific names taken at random from those published via the data aggregator VertNet, providing the first rigorously reviewed, reference validation data set. In addition to characterizing formatting problems, human vetting focused on detecting misspelling, synonymy, and the incorrect use of Darwin Core. Our results reveal a sobering view of the challenge ahead, as less than 47% of name strings were found to be currently valid. More optimistically, nearly 97% of name combinations could be resolved to a currently valid name, suggesting that computer-aided approaches may provide feasible means to improve digitized content. Finally, we associated names back to biocollections records and fit logistic models to test potential drivers of issues. A set of candidate variables (geographic region, year collected, higher-level clade, and the institutional digitally accessible data volume) and their 2-way interactions all predict the probability of records having taxon name issues, based on model selection approaches. We strongly encourage further experiments to use this reference data set as a means to compare automated or computer-aided taxon name tools for their ability to resolve and improve the existing wealth of legacy data. PMID:26760296

  2. Echo™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Dustin Yewell

    Echo™ is a MATLAB-based software package designed for robust and scalable analysis of complex data workflows. An alternative to tedious, error-prone conventional processes, Echo is based on three transformative principles for data analysis: self-describing data, name-based indexing, and dynamic resource allocation. The software takes an object-oriented approach to data analysis, intimately connecting measurement data with associated metadata. Echo operations in an analysis workflow automatically track and merge metadata and computation parameters to provide a complete history of the process used to generate final results, while automated figure and report generation tools eliminate the potential to mislabel those results. History reporting and visualization methods provide straightforward auditability of analysis processes. Furthermore, name-based indexing on metadata greatly improves code readability for analyst collaboration and reduces opportunities for errors to occur. Echo efficiently manages large data sets using a framework that seamlessly allocates resources such that only the necessary computations to produce a given result are executed. Echo provides a versatile and extensible framework, allowing advanced users to add their own tools and data classes tailored to their own specific needs. Applying these transformative principles and powerful features, Echo greatly improves analyst efficiency and quality of results in many application areas.

  3. ReMatch: a web-based tool to construct, store and share stoichiometric metabolic models with carbon maps for metabolic flux analysis.

    PubMed

    Pitkänen, Esa; Akerlund, Arto; Rantanen, Ari; Jouhten, Paula; Ukkonen, Esko

    2008-08-25

    ReMatch is a web-based, user-friendly tool that constructs stoichiometric network models for metabolic flux analysis, integrating user-developed models into a database collected from several comprehensive metabolic data resources, including KEGG, MetaCyc and ChEBI. Particularly, ReMatch augments the metabolic reactions of the model with carbon mappings to facilitate (13)C metabolic flux analysis. The construction of a network model consisting of biochemical reactions is the first step in most metabolic modelling tasks. This model construction can be a tedious task as the required information is usually scattered to many separate databases whose interoperability is suboptimal, due to the heterogeneous naming conventions of metabolites in different databases. Another, particularly severe data integration problem is faced in (13)C metabolic flux analysis, where the mappings of carbon atoms from substrates into products in the model are required. ReMatch has been developed to solve the above data integration problems. First, ReMatch matches the imported user-developed model against the internal ReMatch database while considering a comprehensive metabolite name thesaurus. This, together with wild card support, allows the user to specify the model quickly without having to look the names up manually. Second, ReMatch is able to augment reactions of the model with carbon mappings, obtained either from the internal database or given by the user with an easy-to-use tool. The constructed models can be exported into 13C-FLUX and SBML file formats. Further, a stoichiometric matrix and visualizations of the network model can be generated. The constructed models of metabolic networks can be optionally made available to the other users of ReMatch. Thus, ReMatch provides a common repository for metabolic network models with carbon mappings for the needs of the metabolic flux analysis community. ReMatch is freely available for academic use at http://www.cs.helsinki.fi/group/sysfys/software/rematch/.
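
    The stoichiometric matrix that ReMatch can generate is illustrated by the hedged sketch below (a toy three-reaction network, not ReMatch's database or file formats): rows are metabolites, columns are reactions, and entries are signed stoichiometric coefficients.

```python
# Hedged sketch: build a stoichiometric matrix S (metabolites x reactions) from
# a reaction list; the toy network below is illustrative only.
import numpy as np

reactions = {
    "v1": {"glucose": -1, "g6p": +1},       # glucose -> G6P
    "v2": {"g6p": -1, "f6p": +1},           # G6P -> F6P
    "v3": {"f6p": -1, "pyruvate": +2},      # lumped lower glycolysis
}

metabolites = sorted({m for stoich in reactions.values() for m in stoich})
S = np.zeros((len(metabolites), len(reactions)))
for j, stoich in enumerate(reactions.values()):
    for met, coeff in stoich.items():
        S[metabolites.index(met), j] = coeff

print(metabolites)
print(S)   # each column holds the signed coefficients of one reaction
```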

  4. Vids: Version 2.0 Alpha Visualization Engine

    DTIC Science & Technology

    2018-04-25

    fidelity than existing efforts. Vids is a project aimed at producing more dynamic and interactive visualization tools using modern computer game ...move through and interact with the data to improve informational understanding. The Vids software leverages off-the-shelf modern game development...analysis and correlations. Recently, an ARL-pioneered project named Virtual Reality Data Analysis Environment (VRDAE) used VR and a modern game engine

  5. Particle shape analysis of volcanic clast samples with the Matlab tool MORPHEO

    NASA Astrophysics Data System (ADS)

    Charpentier, Isabelle; Sarocchi, Damiano; Rodriguez Sedano, Luis Angel

    2013-02-01

    This paper presents a modular Matlab tool, namely MORPHEO, devoted to the study of particle morphology by Fourier analysis. A benchmark made of four sample images with different features (digitized coins, a pebble chart, gears, digitized volcanic clasts) is then proposed to assess the abilities of the software. Attention is brought to the Weibull distribution introduced to enhance fine variations of particle morphology. Finally, as an example, samples pertaining to a lahar deposit located in La Lumbre ravine (Colima Volcano, Mexico) are analysed. MORPHEO and the benchmark are freely available for research purposes.
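
    Fourier analysis of a particle outline, of the kind MORPHEO performs, can be sketched in a few lines; this is a hedged Python illustration of complex Fourier shape descriptors, not MORPHEO's Matlab code, and the synthetic outline is made up.

```python
# Hedged sketch: complex Fourier shape descriptors of a closed particle outline
# (Python illustration; MORPHEO itself is a Matlab tool).
import numpy as np

# Synthetic outline: a circle perturbed by a 5-lobed roughness term.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
r = 1.0 + 0.08 * np.cos(5 * t)
contour = r * np.cos(t) + 1j * r * np.sin(t)      # boundary as a complex signal

amplitudes = np.abs(np.fft.fft(contour))
# Normalise by the first harmonic so the descriptors are scale-invariant.
descriptors = amplitudes[2:20] / amplitudes[1]

print(np.round(descriptors, 4))   # the lobed perturbation appears as nonzero higher harmonics
```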

  6. Principal component analysis as a tool for library design: a case study investigating natural products, brand-name drugs, natural product-like libraries, and drug-like libraries.

    PubMed

    Wenderski, Todd A; Stratton, Christopher F; Bauer, Renato A; Kopp, Felix; Tan, Derek S

    2015-01-01

    Principal component analysis (PCA) is a useful tool in the design and planning of chemical libraries. PCA can be used to reveal differences in structural and physicochemical parameters between various classes of compounds by displaying them in a convenient graphical format. Herein, we demonstrate the use of PCA to gain insight into structural features that differentiate natural products, synthetic drugs, natural product-like libraries, and drug-like libraries, and show how the results can be used to guide library design.
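
    A minimal, hedged sketch of the PCA workflow described above, using synthetic descriptor values rather than the authors' compound libraries: compute physicochemical descriptors per compound, standardise them, and project onto the first two principal components for plotting.

```python
# Hedged sketch: PCA on physicochemical descriptors for two compound classes;
# the descriptor values are synthetic stand-ins, not the study's data set.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: molecular weight, cLogP, fraction of sp3 carbons, stereocenter count
natural_like = rng.normal([450, 2.0, 0.6, 4], [120, 1.0, 0.15, 2], size=(100, 4))
drug_like    = rng.normal([350, 3.0, 0.3, 1], [80, 1.2, 0.10, 1], size=(100, 4))

X = np.vstack([natural_like, drug_like])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

plt.scatter(*scores[:100].T, s=10, label="natural-product-like")
plt.scatter(*scores[100:].T, s=10, label="drug-like")
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.show()
```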

  7. Principal Component Analysis as a Tool for Library Design: A Case Study Investigating Natural Products, Brand-Name Drugs, Natural Product-Like Libraries, and Drug-Like Libraries

    PubMed Central

    Wenderski, Todd A.; Stratton, Christopher F.; Bauer, Renato A.; Kopp, Felix; Tan, Derek S.

    2015-01-01

    Principal component analysis (PCA) is a useful tool in the design and planning of chemical libraries. PCA can be used to reveal differences in structural and physicochemical parameters between various classes of compounds by displaying them in a convenient graphical format. Herein, we demonstrate the use of PCA to gain insight into structural features that differentiate natural products, synthetic drugs, natural product-like libraries, and drug-like libraries, and show how the results can be used to guide library design. PMID:25618349

  8. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4

    PubMed Central

    2012-01-01

    Background: Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. Objective: We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance towards ontology naming conventions and metadata completeness, as well as curation in case of found violations. Implementation: In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin could be refined, also by the integration of new functionalities. Results: The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoCheck's capabilities in the context of related tools and highlight potential future expansions. Conclusions: The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers. PMID:23046606

  9. Modeling, Simulation, and Operations Analysis in Afghanistan and Iraq: Operational Vignettes, Lessons Learned, and a Survey of Selected Efforts

    DTIC Science & Technology

    2014-01-01

    …For example, see DoD, Sustaining U.S. Global Leadership: Priorities for 21st Century Defense, January 2012; U.S. Joint Chiefs of Staff, 2011… …projects whenever possible. And most of them recognized a need for a common set of tools and capabilities. Competence with the Microsoft Excel and…

  10. Domestic and Foreign Trade Position of the United States Aircraft Turbine Engine Industry. Task Six. Short-Term Gas Turbine Propulsion Analysis and Assessment

    DTIC Science & Technology

    1991-06-01

    …500 remaining machine tool firms had less than twenty employees each. Manufacturing rationalization was negligible; product specialization and combined… …terms, most of the fuselage. Over 130 Japanese employees were dispatched to Seattle during the 767 development, even though the agreement was for… …through, and consider not just the name plates, but who's involved in sharing the risk--and the rewards, if any--you recite lots of other names: M.T.U…

  11. Memory Forensics: Review of Acquisition and Analysis Techniques

    DTIC Science & Technology

    2013-11-01

    …Processes running on modern multitasking operating systems operate on an abstraction of RAM, called virtual memory [7]. In these systems… …information such as user names, email addresses and passwords [7]. Analysts also use tools such as WinHex to identify headers or other suspicious data within…

  12. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  13. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.

    PubMed

    Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong

    2018-06-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.
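
    The multi-axis view described above can be approximated with a hedged sketch using pandas' parallel-coordinates plot; the refinement parameters and sample labels below are synthetic stand-ins, not Cinema:Debye-Scherrer's data model.

```python
# Hedged sketch: parallel-coordinates view of parameters from many Rietveld
# refinements; the sample attributes and fitted values below are synthetic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
n = 59
df = pd.DataFrame({
    "Nb_wt_pct":      rng.uniform(2, 8, n),
    "lattice_a_A":    rng.normal(3.30, 0.01, n),
    "phase_fraction": rng.uniform(0, 1, n),
    "Rwp":            rng.uniform(3, 12, n),
    "anneal":         rng.choice(["short", "long"], n),
})

# A real tool would rescale each axis; here the raw values are plotted as-is.
parallel_coordinates(df, class_column="anneal", colormap="coolwarm", alpha=0.5)
plt.ylabel("parameter value (unscaled)")
plt.show()
```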

  14. Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer

    PubMed Central

    Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong

    2018-01-01

    A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062

  15. Computer program to assess impact of fatigue and fracture criteria on weight and cost of transport aircraft

    NASA Technical Reports Server (NTRS)

    Tanner, C. J.; Kruse, G. S.; Oman, B. H.

    1975-01-01

    A preliminary design analysis tool for rapidly performing trade-off studies involving fatigue, fracture, static strength, weight, and cost is presented. Analysis subprograms were developed for fatigue life, crack growth life, and residual strength, and linked to a structural synthesis module which in turn was integrated into a computer program. The part definition module of a cost and weight analysis program was expanded to be compatible with the upgraded structural synthesis capability. The resultant vehicle design and evaluation program is named VDEP-2. It is an accurate and useful tool for estimating purposes at the preliminary design stage of airframe development. A sample case along with an explanation of program applications and input preparation is presented.
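
    Crack-growth-life subprograms of this kind are typically built around a Paris-law integration; the sketch below is a hedged illustration with made-up material constants and geometry factor, not VDEP-2's actual models.

```python
# Hedged sketch: cycles to grow a crack from a0 to a_crit under constant-amplitude
# loading with the Paris law da/dN = C * (dK)^m, dK = Y * dS * sqrt(pi * a).
# The material constants and geometry factor are illustrative, not VDEP-2 values.
import numpy as np

C, m = 1.0e-12, 3.0           # Paris constants (units consistent with MPa and m)
Y = 1.12                      # geometry factor, assumed constant (edge crack)
delta_sigma = 100.0           # applied stress range, MPa
a0, a_crit = 0.001, 0.025     # initial and critical crack lengths, m

a, N, da = a0, 0.0, 1.0e-5    # integrate with small crack-length increments
while a < a_crit:
    dK = Y * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity range, MPa*sqrt(m)
    N += da / (C * dK ** m)                     # cycles spent on this increment
    a += da

print(f"Estimated crack-growth life: {N:,.0f} cycles")
```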

  16. Cognitive Tools for Successful Branding

    ERIC Educational Resources Information Center

    Hernandez, Lorena Perez

    2011-01-01

    This article aims to fill a gap in current studies on the semantics of branding. Through the analysis of a number of well-known international brand names, we provide ample evidence supporting the claim that a finite set of cognitive operations, such as those of domain reduction and expansion, mitigation, and strengthening, among others, can…

  17. Development of the Texas revenue estimator and needs determination system (T.R.E.N.D.S.) model.

    DOT National Transportation Integrated Search

    2010-05-01

    The original purpose of Project 0-6395-TI was to assess the usefulness and viability of the Joint Analysis Using Combined Knowledge (J.A.C.K.) model as a planning and forecasting tool. What originally was named the J.A.C.K. model was substantiall...

  18. Recognizing Mechanistic Reasoning in Student Scientific Inquiry: A Framework for Discourse Analysis Developed from Philosophy of Science

    ERIC Educational Resources Information Center

    Russ, Rosemary S.; Scherr, Rachel E.; Hammer, David; Mikeska, Jamie

    2008-01-01

    Science education reform has long focused on assessing student inquiry, and there has been progress in developing tools specifically with respect to experimentation and argumentation. We suggest the need for attention to another aspect of inquiry, namely "mechanistic reasoning." Scientific inquiry focuses largely on understanding causal…

  19. Analysis Supporting Factors and Constraints LPMP Performance in Improving the Quality of Education in Jambi Province

    ERIC Educational Resources Information Center

    Rosadi, Kemas Imron

    2015-01-01

    Development of education in Indonesia is based on three aspects, namely equity and expansion, quality and relevance, as well as good governance. Quality education is influenced by several factors related to quality education managerial leaders, limited funds, facilities, educational facilities, media, learning resources, tools and training…

  20. Loss of Coolant Accident (LOCA) / Emergency Core Coolant System (ECCS) Evaluation of Risk-Informed Margins Management Strategies for a Representative Pressurized Water Reactor (PWR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szilard, Ronaldo Henriques

    A Risk Informed Safety Margin Characterization (RISMC) toolkit and methodology are proposed for investigating nuclear power plant core, fuels design and safety analysis, including postulated Loss-of-Coolant Accident (LOCA) analysis. This toolkit, under an integrated evaluation model framework, is named LOCA toolkit for the US (LOTUS). This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results.

  1. Planar Inlet Design and Analysis Process (PINDAP)

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.

  2. Mapping, Awareness, And Virtualization Network Administrator Training Tool Virtualization Module

    DTIC Science & Technology

    2016-03-01

    Master's thesis, Naval Postgraduate School, March 2016. Author: Erik W. Berndt. Thesis advisor: John Gibson.

  3. Open Source Live Distributions for Computer Forensics

    NASA Astrophysics Data System (ADS)

    Giustini, Giancarlo; Andreolini, Mauro; Colajanni, Michele

    Current distributions of open source forensic software provide digital investigators with a large set of heterogeneous tools. Their use is not always focused on the target and requires high technical expertise. We present a new GNU/Linux live distribution, named CAINE (Computer Aided INvestigative Environment) that contains a collection of tools wrapped up into a user friendly environment. The CAINE forensic framework introduces novel important features, aimed at filling the interoperability gap across different forensic tools. Moreover, it provides a homogeneous graphical interface that drives digital investigators during the acquisition and analysis of electronic evidence, and it offers a semi-automatic mechanism for the creation of the final report.

  4. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection

    PubMed Central

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-01-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489

  5. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection.

    PubMed

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-12-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces.

  6. A Chain of Modeling Tools For Gas and Aqueous Phase Chemstry

    NASA Astrophysics Data System (ADS)

    Audiffren, N.; Djouad, R.; Sportisse, B.

    Atmospheric chemistry is characterized by the use of large sets of chemical species and reactions. Handling the set of data required for the definition of the model is a quite difficult task. We present in this short article a preprocessor for diphasic models (gas phase and aqueous phase in cloud droplets) named SPACK. The main interest of SPACK is the automatic generation of lumped species related to fast equilibria. We also developed a linear tangent model using the automatic differentiation tool named ODYSSEE in order to perform a sensitivity analysis of an atmospheric multiphase mechanism based on the RADM2 kinetic scheme. Local sensitivity coefficients are computed for two different scenarios. We focus in this study on the sensitivity of the ozone/NOx/HOx system with respect to some aqueous phase reactions and we investigate the influence of the reduction in the photolysis rates in the area below the cloud region.

  7. miRToolsGallery: a tag-based and rankable microRNA bioinformatics resources database portal

    PubMed Central

    Chen, Liang; Heikkinen, Liisa; Wang, ChangLiang; Yang, Yang; Knott, K Emily

    2018-01-01

    Hundreds of bioinformatics tools have been developed for MicroRNA (miRNA) investigations including those used for identification, target prediction, structure and expression profile analysis. However, finding the correct tool for a specific application requires the tedious and laborious process of locating, downloading, testing and validating the appropriate tool from a group of nearly a thousand. In order to facilitate this process, we developed a novel database portal named miRToolsGallery. We constructed the portal by manually curating > 950 miRNA analysis tools and resources. In the portal, a query to locate the appropriate tool is expedited by being searchable, filterable and rankable. The ranking feature is vital to quickly identify and prioritize the more useful from the obscure tools. Tools are ranked via different criteria including the PageRank algorithm, date of publication, number of citations, average of votes and number of publications. miRToolsGallery provides links and data for the comprehensive collection of currently available miRNA tools with a ranking function which can be adjusted using different criteria according to specific requirements. Database URL: http://www.mirtoolsgallery.org PMID:29688355
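
    The "rankable" aspect can be illustrated with a hedged sketch that scores tool records by a weighted combination of the criteria named in the abstract; the records, weights and scoring function are illustrative, not the portal's actual ranking.

```python
# Hedged sketch: rank tool records by a weighted combination of citations,
# recency and user votes; the entries, weights and scaling are illustrative only.
from datetime import date

tools = [
    {"name": "toolA", "citations": 800, "year": 2012, "votes": 4.1},
    {"name": "toolB", "citations": 150, "year": 2017, "votes": 4.6},
    {"name": "toolC", "citations": 30,  "year": 2018, "votes": 3.9},
]

def score(t, w_cit=0.5, w_recent=0.3, w_votes=0.2):
    recency = 1.0 / (date.today().year - t["year"] + 1)      # newer -> larger
    return w_cit * t["citations"] / 1000 + w_recent * recency + w_votes * t["votes"] / 5

for t in sorted(tools, key=score, reverse=True):
    print(f'{t["name"]}: {score(t):.3f}')
```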

  8. Adapting Web content for low-literacy readers by using lexical elaboration and named entities labeling

    NASA Astrophysics Data System (ADS)

    Watanabe, W. M.; Candido, A.; Amâncio, M. A.; De Oliveira, M.; Pardo, T. A. S.; Fortes, R. P. M.; Aluísio, S. M.

    2010-12-01

    This paper presents an approach for assisting low-literacy readers in accessing Web online information. The "Educational FACILITA" tool is a Web content adaptation tool that provides innovative features and follows more intuitive interaction models regarding accessibility concerns. Especially, we propose an interaction model and a Web application that explore the natural language processing tasks of lexical elaboration and named entity labeling for improving Web accessibility. We report on the results obtained from a pilot study on usability analysis carried out with low-literacy users. The preliminary results show that "Educational FACILITA" improves the comprehension of text elements, although the assistance mechanisms might also confuse users when word sense ambiguity is introduced, by gathering, for a complex word, a list of synonyms with multiple meanings. This fact evokes a future solution in which the correct sense for a complex word in a sentence is identified, solving this pervasive characteristic of natural languages. The pilot study also identified that experienced computer users find the tool to be more useful than novice computer users do.

  9. A Graphics Editor for Structured Analysis with a Data Dictionary.

    DTIC Science & Technology

    1987-12-01

    …central computer system. This project is a direct follow-on to the 1986 thesis by James W. Urscheler. He created an initial version of a tool (nicknamed… …graphics information. SADT is the name of SofTech's methodology for doing requirement analysis and system design. It was first published…

  10. gHRV: Heart rate variability analysis made easy.

    PubMed

    Rodríguez-Liñares, L; Lado, M J; Vila, X A; Méndez, A J; Cuesta, P

    2014-08-01

    In this paper, the gHRV software tool is presented. It is a simple, free and portable tool developed in Python for analysing heart rate variability. It includes a graphical user interface and it can import files in multiple formats, analyse time intervals in the signal, test statistical significance and export the results. This paper also contains, as an example of use, a clinical analysis performed with the gHRV tool, namely to determine whether the heart rate variability indexes change across different stages of sleep. Results from tests completed by researchers who have tried gHRV are also explained: in general the application was positively valued and results reflect a high level of satisfaction. gHRV is in continuous development and new versions will include suggestions made by testers. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
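
    As an illustration of the time-domain indexes such a tool reports, the hedged sketch below computes SDNN, RMSSD and pNN50 from a synthetic RR-interval series; it is not gHRV's code and the data are made up.

```python
# Hedged sketch: classic time-domain HRV indexes from a series of RR intervals
# in milliseconds; the synthetic series stands in for a real recording.
import numpy as np

rng = np.random.default_rng(0)
rr = 800 + 50 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 20, 300)   # ms

mean_hr = 60_000.0 / rr.mean()                 # mean heart rate, beats per minute
sdnn = rr.std(ddof=1)                          # overall variability, ms
diff = np.diff(rr)
rmssd = np.sqrt(np.mean(diff ** 2))            # short-term variability, ms
pnn50 = 100.0 * np.mean(np.abs(diff) > 50)     # % successive differences > 50 ms

print(f"HR {mean_hr:.1f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms, pNN50 {pnn50:.1f}%")
```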

  11. CFD - Mature Technology?

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2005-01-01

    Over the past 30 years, numerical methods and simulation tools for fluid dynamic problems have advanced as a new discipline, namely, computational fluid dynamics (CFD). Although a wide spectrum of flow regimes are encountered in many areas of science and engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to a large demand for predicting the aerodynamic performance characteristics of flight vehicles, such as commercial, military, and space vehicles. As flow analysis is required to be more accurate and computationally efficient for both commercial and mission-oriented applications (such as those encountered in meteorology, aerospace vehicle development, general fluid engineering and biofluid analysis), CFD tools for engineering become increasingly important for predicting safety, performance and cost. This paper presents the author's perspective on the maturity of CFD, especially from an aerospace engineering point of view.

  12. On the interplay between mathematics and biology. Hallmarks toward a new systems biology

    NASA Astrophysics Data System (ADS)

    Bellomo, Nicola; Elaiw, Ahmed; Althiabi, Abdullah M.; Alghamdi, Mohammed Ali

    2015-03-01

    This paper proposes a critical analysis of the existing literature on mathematical tools developed toward systems biology approaches and, out of this overview, develops a new approach whose main features can be briefly summarized as follows: derivation of mathematical structures suitable to capture the complexity of biological, hence living, systems, and modeling, by appropriate mathematical tools, Darwinian-type dynamics, namely mutations followed by selection and evolution. Moreover, multiscale methods to move from genes to cells, and from cells to tissue, are analyzed in view of a new systems biology approach.

  13. The Learning Organization: Tracking Progress in a Developing Country--A Comparative Analysis Using the DLOQ

    ERIC Educational Resources Information Center

    Jamali, Dima; Sidani, Yusuf; Zouein, Charbel

    2009-01-01

    Purpose: The purpose of this paper is to survey the various measurement instruments of the learning organization on offer, leading to the adoption of a tool that was considered most suitable for gauging progress towards the learning organization in two sectors of the Lebanese economy, namely banking and information technology (IT).…

  14. A Satellite Data Analysis and CubeSat Instrument Simulator Tool for Simultaneous Multi-spacecraft Measurements of Solar Energetic Particles

    NASA Astrophysics Data System (ADS)

    Vannitsen, Jordan; Rizzitelli, Federico; Wang, Kaiti; Segret, Boris; Juang, Jyh-Ching; Miau, Jiun-Jih

    2017-12-01

    This paper presents a Multi-satellite Data Analysis and Simulator Tool (MDAST), developed with the original goal of supporting the science requirements of a Martian 3-Unit CubeSat mission profile named Bleeping Interplanetary Radiation Determination Yo-yo (BIRDY). MDAST was firstly designed and tested by taking into account the positions, attitudes, instrument fields of view and energetic particle flux measurements from four spacecraft (ACE, MSL, STEREO A, and STEREO B). Secondly, the simulated positions, attitudes and instrument field of view from the BIRDY CubeSat have been adapted for input. Finally, this tool can be used for data analysis of the measurements from the four spacecraft mentioned above so as to simulate the instrument trajectory and observation capabilities of the BIRDY CubeSat. The onset, peak and end times of a solar particle event are specifically defined and identified with this tool. It is not only useful for the BIRDY mission but also for analyzing data from the four satellites aforementioned, and can be utilized for other space weather missions with further customization.
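
    The onset/peak/end identification mentioned above can be illustrated with a hedged sketch that flags a solar particle event when the flux rises above, and later returns to, the background level plus a multiple of its standard deviation; the synthetic flux series and the 5-sigma threshold are assumptions, not MDAST's actual criteria.

```python
# Hedged sketch: flag onset, peak and end of a particle event as the flux rising
# above, and later returning to, background + k*sigma; the synthetic flux and
# the k = 5 threshold are assumptions, not MDAST's definition.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(600)                                      # minutes
flux = rng.normal(10.0, 1.0, t.size)                    # quiet-time background
flux += np.where(t > 260, 40.0 * np.exp(-0.5 * ((t - 300) / 40.0) ** 2), 0.0)

mu, sigma = flux[:200].mean(), flux[:200].std()         # background estimated pre-event
above = flux > mu + 5.0 * sigma

onset, peak, end = t[above][0], t[np.argmax(flux)], t[above][-1]
print(f"onset t={onset} min, peak t={peak} min, end t={end} min")
```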

  15. Design and Analysis Tool for External-Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2012-01-01

    A computational tool named SUPIN has been developed to design and analyze external-compression supersonic inlets for aircraft at cruise speeds from Mach 1.6 to 2.0. The inlet types available include the axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced Busemann inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flowfield is divided into parts to provide a framework for the geometry and aerodynamic modeling, and the parts are defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick analysis. SUPIN provides inlet geometry in the form of coordinates and surface grids usable by grid generation methods for higher-fidelity computational fluid dynamics (CFD) analysis. SUPIN is demonstrated through a series of design studies, and CFD analyses were performed to verify some of the analysis results.

  16. Mnemonics Are an Effective Tool for Adult Beginners Learning Plant Identification

    ERIC Educational Resources Information Center

    Stagg, Bethan C.; Donkin, Maria E.

    2016-01-01

    Most beginners are introduced to plant diversity through identification keys, which develop differentiation skills but not species memorisation. We propose that mnemonics, memorable "name clues" linking a species name with morphological characters, are a complementary learning tool for promoting species memorisation. In the first of two…

  17. Applications of a broad-spectrum tool for conservation and fisheries analysis: aquatic gap analysis

    USGS Publications Warehouse

    McKenna, James E.; Steen, Paul J.; Lyons, John; Stewart, Jana S.

    2009-01-01

    Natural resources support all of our social and economic activities, as well as our biological existence. Humans have little control over most of the physical, biological, and sociological conditions dictating the status and capacity of natural resources in any particular area. However, the most rapid and threatening influences on natural resources typically are anthropogenic overuse and degradation. In addition, living natural resources (i.e., organisms) do not respect political boundaries, but are aware of their optimal habitat and environmental conditions. Most organisms have wider spatial ranges than the jurisdictional boundaries of environmental agencies that deal with them; even within those jurisdictions, information is patchy and disconnected. Planning and projecting effects of ecological management are difficult, because many organisms, habitat conditions, and interactions are involved. Conservation and responsible resource use involve wise management and manipulation of the aspects of the environment and biological communities that can be effectively changed. Tools and data sets that provide new insights and analysis capabilities can enhance the ability of resource managers to make wise decisions and plan effective, long-term management strategies. Aquatic gap analysis has been developed to provide those benefits. Gap analysis is more than just the assessment of the match or mismatch (i.e., gaps) between habitats of ecological value and areas with an appropriate level of environmental protection (e.g., refuges, parks, preserves), as the name suggests. Rather, a Gap Analysis project is a process that leads to an organized database of georeferenced information and previously available tools to examine conservation and other ecological issues; it provides a geographic analysis platform that serves as a foundation for aquatic ecological studies. This analytical toolbox allows one to conduct assessments of all habitat elements within an area of interest. Aquatic gap analysis naturally focuses on aquatic habitats. The analytical tools are largely based on specification of the species-habitat relations for the system and organism group of interest (Morrison et al. 2003; McKenna et al. 2006; Steen et al. 2006; Sowa et al. 2007). The Great Lakes Regional Aquatic Gap Analysis (GLGap) project focuses primarily on lotic habitat of the U.S. Great Lakes drainage basin and associated states and has been developed to address fish and fisheries issues. These tools are unique because they allow us to address problems at a range of scales from the region to the stream segment and include the ability to predict species-specific occurrence or abundance for most of the fish species in the study area. The results and types of questions that can be addressed provide better global understanding of the ecological context within which specific natural resources fit (e.g., neighboring environments and resources, and large and small scale processes). The geographic analysis platform consists of broad and flexible geospatial tools (and associated data) with many potential applications. The objectives of this article are to provide a brief overview of GLGap methods and analysis tools, and demonstrate conservation and planning applications of those data and tools.
Although there are many potential applications, we will highlight just three: (1) support for the Eastern Brook Trout Joint Venture (EBTJV), (2) Aquatic Life classification in Wisconsin, and (3) an educational tool that makes use of Google Earth (use of trade or product names does not imply endorsement by the U.S. Government) and Internet accessibility.

  18. MAFALDA: An early warning modeling tool to forecast volcanic ash dispersal and deposition

    NASA Astrophysics Data System (ADS)

    Barsotti, S.; Nannipieri, L.; Neri, A.

    2008-12-01

    Forecasting the dispersal of ash from explosive volcanoes is a scientific challenge to modern volcanology. It also represents a fundamental step in mitigating the potential impact of volcanic ash on urban areas and transport routes near explosive volcanoes. To this end we developed a Web-based early warning modeling tool named MAFALDA (Modeling and Forecasting Ash Loading and Dispersal in the Atmosphere) able to quantitatively forecast ash concentrations in the air and on the ground. The main features of MAFALDA are the usage of (1) a dispersal model, named VOL-CALPUFF, that couples the column ascent phase with the ash cloud transport and (2) high-resolution weather forecasting data, the capability to run and merge multiple scenarios, and the Web-based structure of the procedure that makes it suitable as an early warning tool. MAFALDA produces plots for a detailed analysis of ash cloud dynamics and ground deposition, as well as synthetic 2-D maps of areas potentially affected by dangerous concentrations of ash. A first application of MAFALDA to the long-lasting weak plumes produced at Mt. Etna (Italy) is presented. A similar tool can be useful to civil protection authorities and volcanic observatories in reducing the impact of the eruptive events. MAFALDA can be accessed at http://mafalda.pi.ingv.it.

  19. Orbit Design Based on the Global Maps of Telecom Metrics

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming; Edwards, Chad; Noreen, Gary K.; Vaisnys, Arvydas

    2004-01-01

    In this paper we describe an orbit design aid tool called the Telecom Orbit Analysis and Simulation Tool (TOAST). Although it can be used for studying and selecting orbits for any planet, we concentrate solely on its use for Mars. By specifying the six orbital elements of an orbit, a time frame of interest, a horizon mask angle, and telecom parameters such as transmitting power, frequency, antenna gains, antenna losses, link margin, and received threshold powers for the data rates, this tool enables the user to view an animation of the orbit in two and three dimensions and to view different telecom metrics at any point on Mars, namely on the global planetary map.
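
    TOAST's telecom metrics reduce to a decibel-domain link budget over the parameters listed above. As a rough illustration only (not TOAST's code; the function name, parameter values and threshold below are invented), a single-link margin can be computed like this:

      import math

      def link_margin_db(pt_dbw, gt_db, gr_db, losses_db, freq_hz, range_m, threshold_dbw):
          """Hypothetical single-link budget: received power minus threshold, in dB."""
          c = 299_792_458.0
          # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
          fspl_db = 20.0 * math.log10(4.0 * math.pi * range_m * freq_hz / c)
          pr_dbw = pt_dbw + gt_db + gr_db - losses_db - fspl_db  # received power
          return pr_dbw - threshold_dbw                          # margin above threshold

      # Example with made-up values: 10 dBW transmitter at UHF over a 1000 km slant range
      print(round(link_margin_db(10.0, 3.0, 25.0, 2.0, 401e6, 1_000e3, -150.0), 1))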

  20. Quality tools and resources to support organisational improvement integral to high-quality primary care: a systematic review of published and grey literature.

    PubMed

    Janamian, Tina; Upham, Susan J; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To conduct a systematic review of the literature to identify existing online primary care quality improvement tools and resources to support organisational improvement related to the seven elements in the Primary Care Practice Improvement Tool (PC-PIT), with the identified tools and resources to progress to a Delphi study for further assessment of relevance and utility. Systematic review of the international published and grey literature. CINAHL, Embase and PubMed databases were searched in March 2014 for articles published between January 2004 and December 2013. GreyNet International and other relevant websites and repositories were also searched in March-April 2014 for documents dated between 1992 and 2012. All citations were imported into a bibliographic database. Published and unpublished tools and resources were included in the review if they were in English, related to primary care quality improvement and addressed any of the seven PC-PIT elements of a high-performing practice. Tools and resources that met the eligibility criteria were then evaluated for their accessibility, relevance, utility and comprehensiveness using a four-criteria appraisal framework. We used a data extraction template to systematically extract information from eligible tools and resources. A content analysis approach was used to explore the tools and resources and collate relevant information: name of the tool or resource, year and country of development, author, name of the organisation that provided access and its URL, accessibility information or problems, overview of each tool or resource and the quality improvement element(s) it addresses. If available, a copy of the tool or resource was downloaded into the bibliographic database, along with supporting evidence (published or unpublished) on its use in primary care. This systematic review identified 53 tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in improving the quality of their practice, to achieve improved health outcomes.

  1. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored with different detectors, file formats and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines from pixel binning over azimuthal integration to raster scan processing. Common user interactions with dada are by a web frontend, but all parameters for an analysis are encoded into a Uniform Resource Identifier (URI) which can also be written by hand or scripts for batch processing.
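
    Because every dada analysis is fully described by a URI, batch processing amounts to generating such URIs from a script. The sketch below only illustrates that idea; the host name, endpoint and parameter names are invented for illustration and are not dada's actual interface.

      from urllib.parse import urlencode

      def make_analysis_uri(scan_id, detector, binning, task, **extra):
          """Build a hypothetical dada-style analysis URI for scripted batch processing."""
          params = {"scan": scan_id, "detector": detector, "bin": binning, "task": task}
          params.update(extra)
          return "https://dada.example.org/analyse?" + urlencode(params)

      # Queue azimuthal integration for a series of raster scans
      uris = [make_analysis_uri(i, "pilatus", 4, "azimuthal", q_bins=512) for i in range(100, 110)]
      print(uris[0])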

  2. Analysis of design tool attributes with regards to sustainability benefits

    NASA Astrophysics Data System (ADS)

    Zain, S.; Ismail, A. F.; Ahmad, Z.; Adesta, E. Y. T.

    2018-01-01

    The trend of global manufacturing competitiveness has shown a significant shift from profit- and customer-driven business to a more harmonious sustainability paradigm. This new direction, which emphasises the interests of the three pillars of sustainability, i.e., the social, economic and environmental dimensions, has changed the ways products are designed. As a result, the roles of design tools in the product development stage of manufacturing in adapting to the new strategy are vital and increasingly challenging. The aim of this paper is to review the literature on the attributes of design tools with regard to the sustainability perspective. Four well-established design tools are selected, namely Quality Function Deployment (QFD), Failure Mode and Effects Analysis (FMEA), Design for Six Sigma (DFSS) and Design for Environment (DfE). By analysing previous studies, the main attributes of each design tool and its benefits with respect to each sustainability dimension throughout the four stages of the product lifecycle are discussed. From this study, it is learnt that each of the design tools contributes to the three pillars of sustainability either directly or indirectly, but they are unbalanced and not holistic. Therefore, the prospect of improving and optimising the design tools is outlined, and the possibility of collaboration between the different tools is discussed.

  3. DASS-GUI: a user interface for identification and analysis of significant patterns in non-sequential data.

    PubMed

    Hollunder, Jens; Friedel, Maik; Kuiper, Martin; Wilhelm, Thomas

    2010-04-01

    Many large 'omics' datasets have been published and many more are expected in the near future. New analysis methods are needed for best exploitation. We have developed a graphical user interface (GUI) for easy data analysis. Our discovery of all significant substructures (DASS) approach elucidates the underlying modularity, a typical feature of complex biological data. It is related to biclustering and other data mining approaches. Importantly, DASS-GUI also allows handling of multi-sets and calculation of statistical significances. DASS-GUI contains tools for further analysis of the identified patterns: analysis of the pattern hierarchy, enrichment analysis, module validation, analysis of additional numerical data, easy handling of synonymous names, clustering, filtering and merging. Different export options allow easy usage of additional tools such as Cytoscape. Source code, pre-compiled binaries for different systems, a comprehensive tutorial, case studies and many additional datasets are freely available at http://www.ifr.ac.uk/dass/gui/. DASS-GUI is implemented in Qt.

  4. Performance modeling & simulation of complex systems (A systems engineering design & analysis approach)

    NASA Technical Reports Server (NTRS)

    Hall, Laverne

    1995-01-01

    Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis affected the MIPS design process by having a supportive and informative impact.

  5. SlideJ: An ImageJ plugin for automated processing of whole slide images.

    PubMed

    Della Mea, Vincenzo; Baroni, Giulia L; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by examples of macros in the ImageJ scripting language to demonstrate its use in concrete situations.
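
    The underlying strategy is to split the gigapixel slide into tiles that fit in memory, apply an ordinary single-field analysis to each tile, and keep the tile offsets so results can be mapped back to slide coordinates. The fragment below is a generic sketch of that tiling idea in Python with an arbitrary per-tile function; it does not use SlideJ's or ImageJ's actual API.

      import numpy as np

      def process_slide_in_tiles(slide, tile=2048, overlap=64, analyse=None):
          """Apply a single-field analysis function tile by tile over a huge 2D image."""
          h, w = slide.shape[:2]
          step = tile - overlap
          results = []
          for y in range(0, h, step):
              for x in range(0, w, step):
                  patch = np.asarray(slide[y:y + tile, x:x + tile])
                  results.append(((y, x), analyse(patch)))  # keep offset for re-mapping
          return results

      # Example: mean intensity per tile of a (toy) slide; real slides would be memory-mapped
      slide = np.random.randint(0, 255, (8192, 8192), dtype=np.uint8)
      out = process_slide_in_tiles(slide, analyse=lambda p: float(p.mean()))
      print(len(out), out[0])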

  6. SlideJ: An ImageJ plugin for automated processing of whole slide images

    PubMed Central

    Baroni, Giulia L.; Pilutti, David; Di Loreto, Carla

    2017-01-01

    The digital slide, or Whole Slide Image, is a digital image, acquired with specific scanners, that represents a complete tissue sample or cytological specimen at microscopic level. While Whole Slide Image analysis is recognized among the most interesting opportunities, the typical size of such images (up to gigapixels) can be very demanding in terms of memory requirements. Thus, while algorithms and tools for processing and analysis of single microscopic field images are available, the size of Whole Slide Images makes the direct use of such tools prohibitive or impossible. In this work a plugin for ImageJ, named SlideJ, is proposed with the objective of seamlessly extending the application of image analysis algorithms implemented in ImageJ for single microscopic field images to whole digital slide analysis. The plugin has been complemented by examples of macros in the ImageJ scripting language to demonstrate its use in concrete situations. PMID:28683129

  7. MetaDP: a comprehensive web server for disease prediction of 16S rRNA metagenomic datasets.

    PubMed

    Xu, Xilin; Wu, Aiping; Zhang, Xinlei; Su, Mingming; Jiang, Taijiao; Yuan, Zhe-Ming

    2016-01-01

    High-throughput sequencing-based metagenomics has garnered considerable interest in recent years. Numerous methods and tools have been developed for the analysis of metagenomic data. However, it is still a daunting task to install a large number of tools and complete a complicated analysis, especially for researchers with minimal bioinformatics backgrounds. To address this problem, we constructed an automated software platform named MetaDP for 16S rRNA sequencing data analysis, including data quality control, operational taxonomic unit clustering, diversity analysis, and disease risk prediction modeling. Furthermore, a support vector machine-based prediction model for irritable bowel syndrome (IBS) was built by applying MetaDP to microbial 16S sequencing data from 108 children. The success of the IBS prediction model suggests that the platform may also be applied to other diseases related to gut microbes, such as obesity, metabolic syndrome, or intestinal cancer, among others (http://metadp.cn:7001/).

  8. Oracle Applications Patch Administration Tool (PAT) Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2002-01-04

    PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Applications patches. Its capabilities are outlined below.
    Patch analysis and management:
    - Patch data maintenance: track which Oracle Applications patches have been applied to which database instance and machine.
    - Patch analysis: capture text files (readme.txt and driver files); comparison detail for forms, reports, PL/SQL packages, SQL scripts and JSP modules; parse and load the current applptch.txt (10.7) or load patch data from the Oracle Applications database patch tables (11i).
    - Display analysis: compare the patch to be applied with the code versions currently installed in the Oracle Applications appl_top; patch detail and module comparison detail; analyze and display a single Oracle Applications module patch.
    - Patch management: automatic queueing and execution of patches.
    Administration:
    - Parameter maintenance: settings for the directory structure of the Oracle Applications appl_top.
    - Validation data maintenance: machine names and instances to patch.
    Operation:
    - Patch data maintenance: schedule a patch (queue for later execution); run a patch (queue for immediate execution); review the patch logs.
    - Patch management reports.

  9. The Human Oral Microbiome Database: a web accessible resource for investigating oral microbe taxonomic and genomic information

    PubMed Central

    Chen, Tsute; Yu, Wen-Han; Izard, Jacques; Baranova, Oxana V.; Lakshmanan, Abirami; Dewhirst, Floyd E.

    2010-01-01

    The human oral microbiome is the most studied human microflora, but 53% of the species have not yet been validly named and 35% remain uncultivated. The uncultivated taxa are known primarily from 16S rRNA sequence information. Sequence information tied solely to obscure isolate or clone numbers, and usually lacking accurate phylogenetic placement, is a major impediment to working with human oral microbiome data. The goal of creating the Human Oral Microbiome Database (HOMD) is to provide the scientific community with a body site-specific comprehensive database for the more than 600 prokaryote species that are present in the human oral cavity based on a curated 16S rRNA gene-based provisional naming scheme. Currently, two primary types of information are provided in HOMD—taxonomic and genomic. Named oral species and taxa identified from 16S rRNA gene sequence analysis of oral isolates and cloning studies were placed into defined 16S rRNA phylotypes and each given a unique Human Oral Taxon (HOT) number. The HOT interlinks phenotypic, phylogenetic, genomic, clinical and bibliographic information for each taxon. A BLAST search tool is provided to match user 16S rRNA gene sequences to a curated, full length, 16S rRNA gene reference data set. For genomic analysis, HOMD provides a comprehensive set of analysis tools and maintains frequently updated annotations for all the human oral microbial genomes that have been sequenced and publicly released. Oral bacterial genome sequences, determined as part of the Human Microbiome Project, are being added to the HOMD as they become available. We provide HOMD as a conceptual model for the presentation of microbiome data for other human body sites. Database URL: http://www.homd.org PMID:20624719

  10. A Lean Six Sigma approach to the improvement of the selenium analysis method.

    PubMed

    Cloete, Bronwyn C; Bester, André

    2012-11-02

    Reliable results represent the pinnacle assessment of quality of an analytical laboratory, and therefore variability is considered to be a critical quality problem associated with the selenium analysis method executed at the Western Cape Provincial Veterinary Laboratory (WCPVL). The elimination and control of variability is undoubtedly of significant importance because of the narrow margin of safety between toxic and deficient doses of the trace element for good animal health. A quality methodology known as Lean Six Sigma was believed to present the most feasible solution for overcoming the adverse effect of variation, through steps towards analytical process improvement. Lean Six Sigma represents a form of scientific method that is empirical, inductive, deductive and systematic, relies on data, and is fact-based. The Lean Six Sigma methodology comprises five macro-phases, namely Define, Measure, Analyse, Improve and Control (DMAIC). Both qualitative and quantitative laboratory data were collected in terms of these phases. Qualitative data were collected by using quality tools, namely an Ishikawa diagram, a Pareto chart, Kaizen analysis and a Failure Mode and Effects Analysis tool. Quantitative laboratory data, based on the analytical chemistry test method, were collected through a controlled experiment. The controlled experiment entailed 13 replicated runs of the selenium test method, whereby 11 samples were repetitively analysed, whilst Certified Reference Material (CRM) was also included in 6 of the runs. Laboratory results obtained from the controlled experiment were analysed by using statistical methods commonly associated with quality validation of chemistry procedures. Analysis of both sets of data yielded an improved selenium analysis method, believed to provide greater reliability of results, in addition to a greatly reduced cycle time and superior control features. Lean Six Sigma may therefore be regarded as a valuable tool in any laboratory, and represents both a management discipline and a standardised approach to problem solving and process optimisation.
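
    The Measure and Control phases above rest on ordinary control-chart statistics over the replicated runs. Purely as a generic illustration (the numbers are invented and are not the WCPVL data), a mean and 3-sigma control limits for replicate results can be computed as follows:

      import statistics

      def control_limits(replicates):
          """Return (mean, lower, upper) 3-sigma control limits for replicate results."""
          mean = statistics.mean(replicates)
          sd = statistics.stdev(replicates)
          return mean, mean - 3 * sd, mean + 3 * sd

      # Hypothetical selenium results (ug/L) for one sample across repeated runs
      runs = [102.1, 99.8, 101.5, 100.9, 103.2, 98.7, 101.1]
      centre, lcl, ucl = control_limits(runs)
      print(f"mean={centre:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")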

  11. Symbolic dynamic filtering and language measure for behavior identification of mobile robots.

    PubMed

    Mallapragada, Goutham; Ray, Asok; Jin, Xin

    2012-06-01

    This paper presents a procedure for behavior identification of mobile robots, which requires limited or no domain knowledge of the underlying process. While the features of robot behavior are extracted by symbolic dynamic filtering of the observed time series, the behavior patterns are classified based on language measure theory. The behavior identification procedure has been experimentally validated on a networked robotic test bed by comparison with commonly used tools, namely, principal component analysis for feature extraction and Bayesian risk analysis for pattern classification.

  12. On the interplay between mathematics and biology: hallmarks toward a new systems biology.

    PubMed

    Bellomo, Nicola; Elaiw, Ahmed; Althiabi, Abdullah M; Alghamdi, Mohammed Ali

    2015-03-01

    This paper proposes a critical analysis of the existing literature on mathematical tools developed toward systems biology approaches and, out of this overview, develops a new approach whose main features can be briefly summarized as follows: derivation of mathematical structures suitable to capture the complexity of biological, hence living, systems; and modeling, by appropriate mathematical tools, of Darwinian-type dynamics, namely mutations followed by selection and evolution. Moreover, multiscale methods to move from genes to cells, and from cells to tissue, are analyzed in view of a new systems biology approach. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Investigating System Dependability Modeling Using AADL

    NASA Technical Reports Server (NTRS)

    Hall, Brendan; Driscoll, Kevin R.; Madl, Gabor

    2013-01-01

    This report describes Architecture Analysis & Design Language (AADL) models for a diverse set of fault-tolerant, embedded data networks and describes the methods and tools used to create these models. It also includes error models per the AADL Error Annex. Some networks were modeled using Error Detection Isolation Containment Types (EDICT). This report gives a brief description of each of the networks, a description of its modeling, the model itself, and evaluations of the tools used for creating the models. The methodology includes a naming convention that supports a systematic way to enumerate all of the potential failure modes.

  14. Network Meta-Analysis Using R: A Review of Currently Available Automated Packages

    PubMed Central

    Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph

    2014-01-01

    Network meta-analysis (NMA) – a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously – has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined provide users with nearly all functionality that might be desired when conducting a NMA. PMID:25541687

  15. Network meta-analysis using R: a review of currently available automated packages.

    PubMed

    Neupane, Binod; Richer, Danielle; Bonner, Ashley Joel; Kibret, Taddele; Beyene, Joseph

    2014-01-01

    Network meta-analysis (NMA)--a statistical technique that allows comparison of multiple treatments in the same meta-analysis simultaneously--has become increasingly popular in the medical literature in recent years. The statistical methodology underpinning this technique and software tools for implementing the methods are evolving. Both commercial and freely available statistical software packages have been developed to facilitate the statistical computations using NMA with varying degrees of functionality and ease of use. This paper aims to introduce the reader to three R packages, namely, gemtc, pcnetmeta, and netmeta, which are freely available software tools implemented in R. Each automates the process of performing NMA so that users can perform the analysis with minimal computational effort. We present, compare and contrast the availability and functionality of different important features of NMA in these three packages so that clinical investigators and researchers can determine which R packages to implement depending on their analysis needs. Four summary tables detailing (i) data input and network plotting, (ii) modeling options, (iii) assumption checking and diagnostic testing, and (iv) inference and reporting tools, are provided, along with an analysis of a previously published dataset to illustrate the outputs available from each package. We demonstrate that each of the three packages provides a useful set of tools, and combined provide users with nearly all functionality that might be desired when conducting a NMA.

  16. Supporting Open Access to European Academic Courses: The ASK-CDM-ECTS Tool

    ERIC Educational Resources Information Center

    Sampson, Demetrios G.; Zervas, Panagiotis

    2013-01-01

    Purpose: This paper aims to present and evaluate a web-based tool, namely ASK-CDM-ECTS, which facilitates authoring and publishing on the web descriptions of (open) academic courses in machine-readable format using an application profile of the Course Description Metadata (CDM) specification, namely CDM-ECTS. Design/methodology/approach: The paper…

  17. Development of nonlinear acoustic propagation analysis tool toward realization of loud noise environment prediction in aeronautics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanamori, Masashi, E-mail: kanamori.masashi@jaxa.jp; Takahashi, Takashi, E-mail: takahashi.takashi@jaxa.jp; Aoyama, Takashi, E-mail: aoyama.takashi@jaxa.jp

    2015-10-28

    This paper introduces a prediction tool for the propagation of loud noise, with application to aeronautics in mind. The tool, named SPnoise, is based on the HOWARD approach, which can express almost exact multidimensionality of the diffraction effect at the cost of back scattering. This paper discusses, in particular, the prediction of the effect of atmospheric turbulence on sonic boom as one of the important issues in aeronautics. Thanks to the simple and efficient modeling of the atmospheric turbulence, SPnoise successfully re-creates the feature of this effect, which often emerges in the region just behind the front and rear shock waves in the sonic boom signature.

  18. Clone DB: an integrated NCBI resource for clone-associated data

    PubMed Central

    Schneider, Valerie A.; Chen, Hsiu-Chuan; Clausen, Cliff; Meric, Peter A.; Zhou, Zhigang; Bouk, Nathan; Husain, Nora; Maglott, Donna R.; Church, Deanna M.

    2013-01-01

    The National Center for Biotechnology Information (NCBI) Clone DB (http://www.ncbi.nlm.nih.gov/clone/) is an integrated resource providing information about and facilitating access to clones, which serve as valuable research reagents in many fields, including genome sequencing and variation analysis. Clone DB represents an expansion and replacement of the former NCBI Clone Registry and has records for genomic and cell-based libraries and clones representing more than 100 different eukaryotic taxa. Records provide details of library construction, associated sequences, map positions and information about resource distribution. Clone DB is indexed in the NCBI Entrez system and can be queried by fields that include organism, clone name, gene name and sequence identifier. Whenever possible, genomic clones are mapped to reference assemblies and their map positions provided in clone records. Clones mapping to specific genomic regions can also be searched for using the NCBI Clone Finder tool, which accepts queries based on sequence coordinates or features such as gene or transcript names. Clone DB makes reports of library, clone and placement data on its FTP site available for download. With Clone DB, users now have available to them a centralized resource that provides them with the tools they will need to make use of these important research reagents. PMID:23193260

  19. Role-play as an educational tool in medication communication skills: Students’ perspectives

    PubMed Central

    Lavanya, S. H.; Kalpana, L.; Veena, R. M.; Bharath Kumar, V. D.

    2016-01-01

    Objectives: Medication communication skills are vital aspects of patient care that may influence treatment outcomes. However, the traditional pharmacology curriculum deals with imparting factual information, with little emphasis on patient communication. The current study aims to explore students’ perceptions of role-play as an educational tool for acquiring communication skills and to ascertain the need for role-play in their future clinical practice. Materials and Methods: This questionnaire-based study was conducted among 2nd professional MBBS students. A consolidated set of six training cases, focusing on major communication issues related to medication prescription in pharmacology, was developed for peer role-play sessions for 2nd professional MBBS (n = 122) students. Structured scripts with specific emphasis on prescription medication communication and checklists for feedback were developed. Prevalidated questionnaires measured the quantitative aspects of role-plays in relation to their relevance as a teaching–learning tool, the perceived benefits of the sessions, and their importance for future use. Statistical Analysis: Data analysis was performed using descriptive statistics. Results: The role-play concept was well appreciated and considered an effective means of acquiring medication communication skills. The structured feedback by peers and faculty was well received by many. Over 90% of the students reported immense confidence in communicating therapy details, namely, drug name, purpose, mechanism, dosing details, and precautions. The majority reported better retention of pharmacology concepts and preferred more such sessions. Conclusions: Most students consider peer role-play an indispensable tool for acquiring effective communication skills regarding drug therapy. By virtue of providing experiential learning opportunities and its feasibility of implementation, role-play sessions justify inclusion in undergraduate medical curricula. PMID:28031605

  20. PAINT: a promoter analysis and interaction network generation tool for gene regulatory network identification.

    PubMed

    Vadigepalli, Rajanikanth; Chakravarthula, Praveen; Zak, Daniel E; Schwaber, James S; Gonye, Gregory E

    2003-01-01

    We have developed a bioinformatics tool named PAINT that automates the promoter analysis of a given set of genes for the presence of transcription factor binding sites. Based on coincidence of regulatory sites, this tool produces an interaction matrix that represents a candidate transcriptional regulatory network. This tool currently consists of (1) a database of promoter sequences of known or predicted genes in the Ensembl annotated mouse genome database, (2) various modules that can retrieve and process the promoter sequences for binding sites of known transcription factors, and (3) modules for visualization and analysis of the resulting set of candidate network connections. This information provides a substantially pruned list of genes and transcription factors that can be examined in detail in further experimental studies on gene regulation. Also, the candidate network can be incorporated into network identification methods in the form of constraints on feasible structures in order to render the algorithms tractable for large-scale systems. The tool can also produce output in various formats suitable for use in external visualization and analysis software. In this manuscript, PAINT is demonstrated in two case studies involving analysis of differentially regulated genes chosen from two microarray data sets. The first set is from a neuroblastoma N1E-115 cell differentiation experiment, and the second set is from neuroblastoma N1E-115 cells at different time intervals following exposure to neuropeptide angiotensin II. PAINT is available for use as an agent in BioSPICE simulation and analysis framework (www.biospice.org), and can also be accessed via a WWW interface at www.dbi.tju.edu/dbi/tools/paint/.
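
    PAINT's central product is a gene-by-transcription-factor matrix built from binding-site hits in promoter sequences. The toy sketch below reproduces that idea in miniature; it is not PAINT's code, and the consensus motifs and promoter sequences are invented for illustration (real analyses use curated binding-site models).

      import re

      # Toy consensus motifs expressed as plain regular expressions
      motifs = {"SP1": "GGGCGG", "TATA": "TATA[AT]A"}
      promoters = {
          "geneA": "ACGTGGGCGGTTTATAAATCG",
          "geneB": "TTTTCCCCAAAAGGGTTTCCC",
      }

      # Candidate interaction matrix: 1 if the motif occurs in the gene's promoter
      matrix = {
          gene: {tf: int(bool(re.search(pat, seq))) for tf, pat in motifs.items()}
          for gene, seq in promoters.items()
      }
      print(matrix)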

  1. A study on using pre-forming blank in single point incremental forming process by finite element analysis

    NASA Astrophysics Data System (ADS)

    Abass, K. I.

    2016-11-01

    Single Point Incremental Forming (SPIF) is a forming technique for sheet material based on layered manufacturing principles. The edges of the sheet material are clamped while the forming tool is moved along the tool path, and a CNC milling machine is used to manufacture the product. SPIF involves extensive plastic deformation, and the description of the process is further complicated by highly nonlinear boundary conditions, namely contact and frictional effects. Due to the complex nature of these models, numerical approaches dominated by Finite Element Analysis (FEA) are now in widespread use. The paper presents the data and main results of a study on the effect of using a pre-forming blank in SPIF through FEA. The considered SPIF process has been studied under certain process conditions referring to the test workpiece, tool, etc., applying ANSYS 11. The results show that the simulation model can predict an ideal profile of the processing track, the tool-workpiece contact behaviour, the product accuracy by evaluating its thickness, and the surface strain and stress distribution along the deformed blank section during the deformation stages.

  2. Quantum random oracle model for quantum digital signature

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  3. Correcting names of bacteria deposited in National Microbial Repositories: an analysed sequence data necessary for taxonomic re-categorization of misclassified bacteria-ONE example, genus Lysinibacillus.

    PubMed

    Rekadwad, Bhagwan N; Gonzalez, Juan M

    2017-08-01

    A report on 16S rRNA gene sequence re-analysis and digitalization is presented using Lysinibacillus species (as one example) deposited in National Microbial Repositories in India. Lysinibacillus species 16S rRNA gene sequences were digitalized to provide quick response (QR) codes, Chaos Game Representation (CGR) and Frequency of Chaos Game Representation (FCGR). GC percentage, phylogenetic analysis, and principal component analysis (PCA) are the tools used for the differentiation and reclassification of the strains under investigation. Seven reasons supporting our conclusion that Lysinibacillus species deposited in National Microbial Repositories are misclassified are given in this paper. Based on these seven reasons, bacteria deposited in National Microbial Repositories, such as Lysinibacillus and many others, need reanalysis to establish their exact identity. Levels of identity with type strains of related species differ by 2 to 8%, suggesting that reclassification is needed to correctly assign species names to the analyzed Lysinibacillus strains available in National Microbial Repositories.
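
    Of the descriptors listed above, GC percentage is the simplest to reproduce. The helper below is a generic sketch (not the authors' code), and the sequence fragment is a toy example rather than a real Lysinibacillus 16S rRNA gene sequence.

      def gc_percentage(seq):
          """Percentage of G and C bases in a nucleotide sequence (case-insensitive)."""
          s = seq.upper()
          return 100.0 * (s.count("G") + s.count("C")) / len(s)

      print(round(gc_percentage("AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGC"), 1))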

  4. KNIME for reproducible cross-domain analysis of life science data.

    PubMed

    Fillbrunn, Alexander; Dietz, Christian; Pfeuffer, Julianus; Rahn, René; Landrum, Gregory A; Berthold, Michael R

    2017-11-10

    Experiments in the life sciences often involve tools from a variety of domains such as mass spectrometry, next generation sequencing, or image processing. Passing the data between those tools often involves complex scripts for controlling data flow, data transformation, and statistical analysis. Such scripts are not only prone to be platform dependent, they also tend to grow as the experiment progresses and are seldom well documented, a fact that hinders the reproducibility of the experiment. Workflow systems such as KNIME Analytics Platform aim to solve these problems by providing a platform for connecting tools graphically and guaranteeing the same results on different operating systems. As open source software, KNIME allows scientists and programmers to provide their own extensions to the scientific community. In this review paper we present selected extensions from the life sciences that simplify data exploration, analysis, and visualization and are interoperable due to KNIME's unified data model. Additionally, we name other workflow systems that are commonly used in the life sciences and highlight their similarities and differences to KNIME. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Floquet analysis of Kuznetsov-Ma breathers: A path towards spectral stability of rogue waves.

    PubMed

    Cuevas-Maraver, J; Kevrekidis, P G; Frantzeskakis, D J; Karachalios, N I; Haragus, M; James, G

    2017-07-01

    In the present work, we aim at taking a step towards the spectral stability analysis of Peregrine solitons, i.e., wave structures that are used to emulate extreme wave events. Given the space-time localized nature of Peregrine solitons, this is a priori a nontrivial task. Our main tool in this effort will be the study of the spectral stability of the periodic generalization of the Peregrine soliton in the evolution variable, namely the Kuznetsov-Ma breather. Given the periodic structure of the latter, we compute the corresponding Floquet multipliers, and examine them in the limit where the period of the orbit tends to infinity. This way, we extrapolate towards the stability of the limiting structure, namely the Peregrine soliton. We find that multiple unstable modes of the background are enhanced, yet no additional unstable eigenmodes arise as the Peregrine limit is approached. We explore the instability evolution also in direct numerical simulations.
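
    For readers unfamiliar with the terminology, the Floquet multipliers referred to above are the eigenvalues of the monodromy matrix of the linearization about the periodic orbit; the display below is the standard textbook criterion, not a formula quoted from the paper.

      \[
        \dot{v} = A(t)\,v, \qquad A(t+T) = A(t), \qquad \Phi(0) = I, \qquad M = \Phi(T),
      \]
      \[
        \mu_i \in \operatorname{spec}(M), \qquad \text{spectral stability} \iff |\mu_i| \le 1 \ \text{for all } i.
      \]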

  6. Improved intra-array and interarray normalization of peptide microarray phosphorylation for phosphorylome and kinome profiling by rational selection of relevant spots

    PubMed Central

    Scholma, Jetse; Fuhler, Gwenny M.; Joore, Jos; Hulsman, Marc; Schivo, Stefano; List, Alan F.; Reinders, Marcel J. T.; Peppelenbosch, Maikel P.; Post, Janine N.

    2016-01-01

    Massive parallel analysis using array technology has become the mainstay for analysis of genomes and transcriptomes. Analogously, the predominance of phosphorylation as a regulator of cellular metabolism has fostered the development of peptide arrays of kinase consensus substrates that allow the charting of cellular phosphorylation events (often called kinome profiling). However, whereas the bioinformatical framework for expression array analysis is well-developed, no advanced analysis tools are yet available for kinome profiling. Especially intra-array and interarray normalization of peptide array phosphorylation remain problematic, due to the absence of “housekeeping” kinases and the obvious fallacy of the assumption that different experimental conditions should exhibit equal amounts of kinase activity. Here we describe the development of analysis tools that reliably quantify phosphorylation of peptide arrays and that allow normalization of the signals obtained. We provide a method for intraslide gradient correction and spot quality control. We describe a novel interarray normalization procedure, named repetitive signal enhancement, RSE, which provides a mathematical approach to limit the false negative results occurring with the use of other normalization procedures. Using in silico and biological experiments we show that employing such protocols yields superior insight into cellular physiology as compared to classical analysis tools for kinome profiling. PMID:27225531

  7. EDNA: Expert fault digraph analysis using CLIPS

    NASA Technical Reports Server (NTRS)

    Dixit, Vishweshwar V.

    1990-01-01

    Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find. Available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of the two representations. Neither the digraphs nor the trees provide the ability to handle heuristic knowledge. An expert system, to capture the engineering knowledge, is essential. We propose an approach here, namely, expert network analysis, which combines the digraph representation and tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge. Mixed analysis, with some nodes carrying probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.

  8. Development of Systems Engineering Maturity Models and Management Tools

    DTIC Science & Technology

    2011-01-21

    ...tools (MPT) for effectively and efficiently addressing these challenges are likewise being challenged. The goal of this research was to develop a...

  9. Preliminary Exploration of Encounter During Transit Across Southern Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroud, Phillip David; Cuellar-Hengartner, Leticia; Kubicek, Deborah Ann

    Los Alamos National Laboratory (LANL) is utilizing the Probability Effectiveness Methodology (PEM) tools, particularly the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT) to support the DNDO Architecture and Planning Directorate’s (APD) development of a multi-region terrorist risk assessment tool. The effort is divided into three stages. The first stage is an exploration of what can be done with PATRIOT essentially as is, to characterize encounter rate during transit across a single selected region. The second stage is to develop, condition, and implement required modifications to the data and conduct analysis to generate a well-founded assessment of the transit reliability across that selected region, and to identify any issues in the process. The final stage is to extend the work to a full multi-region global model. This document provides the results of the first stage, namely preliminary explorations with PATRIOT to assess the transit reliability across the region of southern Africa.

  10. AITSO: A Tool for Spatial Optimization Based on Artificial Immune Systems

    PubMed Central

    Zhao, Xiang; Liu, Yaolin; Liu, Dianfeng; Ma, Xiaoya

    2015-01-01

    A great challenge facing geocomputation and spatial analysis is spatial optimization, given that it involves various high-dimensional, nonlinear, and complicated relationships. Many efforts have been made with regard to this specific issue, and the strong ability of artificial immune system algorithms has been proven in previous studies. However, user-friendly professional software is still unavailable, which is a great impediment to the popularity of artificial immune systems. This paper describes a free, universal tool, named AITSO, which is capable of solving various optimization problems. It provides a series of standard application programming interfaces (APIs) which can (1) assist researchers in the development of their own problem-specific application plugins to solve practical problems and (2) allow the implementation of some advanced immune operators into the platform to improve the performance of an algorithm. As an integrated, flexible, and convenient tool, AITSO contributes to knowledge sharing and practical problem solving. It is therefore believed that it will advance the development and popularity of spatial optimization in geocomputation and spatial analysis. PMID:25678911
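
    As a rough illustration of the kind of search an artificial-immune-system plugin performs, the sketch below is a bare-bones clonal-selection loop for minimizing a generic objective. All names and settings are chosen for illustration only and bear no relation to AITSO's actual APIs.

      import random

      def clonal_selection(objective, dim, pop=20, clones=5, gens=100, step=0.5):
          """Minimal clonal-selection loop: clone good antibodies, mutate, keep the best."""
          antibodies = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
          for _ in range(gens):
              antibodies.sort(key=objective)
              survivors = antibodies[: pop // 2]            # keep the fitter half
              offspring = []
              for ab in survivors:
                  for _ in range(clones):                   # clone and hypermutate
                      offspring.append([x + random.gauss(0, step) for x in ab])
              candidates = survivors + offspring
              candidates.sort(key=objective)
              antibodies = candidates[:pop]
          return antibodies[0]

      # Toy use: minimize a 2-D sphere function
      best = clonal_selection(lambda v: sum(x * x for x in v), dim=2)
      print([round(x, 3) for x in best])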

  11. A new spatial multi-criteria decision support tool for site selection for implementation of managed aquifer recharge.

    PubMed

    Rahman, M Azizur; Rusteberg, Bernd; Gogu, R C; Lobo Ferreira, J P; Sauter, Martin

    2012-05-30

    This study reports the development of a new spatial multi-criteria decision analysis (SMCDA) software tool for selecting suitable sites for Managed Aquifer Recharge (MAR) systems. The new SMCDA software tool functions based on the combination of existing multi-criteria evaluation methods with modern decision analysis techniques. More specifically, non-compensatory screening, criteria standardization and weighting, and Analytical Hierarchy Process (AHP) have been combined with Weighted Linear Combination (WLC) and Ordered Weighted Averaging (OWA). This SMCDA tool may be implemented with a wide range of decision maker's preferences. The tool's user-friendly interface helps guide the decision maker through the sequential steps for site selection, those steps namely being constraint mapping, criteria hierarchy, criteria standardization and weighting, and criteria overlay. The tool offers some predetermined default criteria and standard methods to increase the trade-off between ease-of-use and efficiency. Integrated into ArcGIS, the tool has the advantage of using GIS tools for spatial analysis, and herein data may be processed and displayed. The tool is non-site specific, adaptive, and comprehensive, and may be applied to any type of site-selection problem. For demonstrating the robustness of the new tool, a case study was planned and executed at Algarve Region, Portugal. The efficiency of the SMCDA tool in the decision making process for selecting suitable sites for MAR was also demonstrated. Specific aspects of the tool such as built-in default criteria, explicit decision steps, and flexibility in choosing different options were key features, which benefited the study. The new SMCDA tool can be augmented by groundwater flow and transport modeling so as to achieve a more comprehensive approach to the selection process for the best locations of the MAR infiltration basins, as well as the locations of recovery wells and areas of groundwater protection. The new spatial multicriteria analysis tool has already been implemented within the GIS based Gabardine decision support system as an innovative MAR planning tool. Copyright © 2012 Elsevier Ltd. All rights reserved.
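
    The weighted linear combination step at the heart of such a tool is easy to state: each criterion layer is standardized to a common scale, the suitability score is the weighted sum of the standardized layers, and constraint cells are masked out. The snippet below is a generic numpy sketch of that overlay, not code from the SMCDA tool; the layers, weights and mask are invented.

      import numpy as np

      def wlc_suitability(criteria, weights, constraint_mask):
          """Weighted linear combination of min-max standardized criterion layers."""
          score = np.zeros_like(constraint_mask, dtype=float)
          for layer, w in zip(criteria, weights):
              std = (layer - layer.min()) / (layer.max() - layer.min())  # scale to 0..1
              score += w * std
          return np.where(constraint_mask, score, np.nan)  # excluded cells become NaN

      # Invented example: two 3x3 criterion layers, weights summing to 1, and a constraint mask
      slope = np.array([[2., 5., 9.], [1., 3., 7.], [4., 6., 8.]])
      depth_to_groundwater = np.array([[10., 20., 5.], [25., 15., 8.], [12., 30., 18.]])
      mask = np.array([[True, True, False], [True, True, True], [True, False, True]])
      print(wlc_suitability([slope, depth_to_groundwater], [0.6, 0.4], mask).round(2))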

  12. A learning tool for optical and microwave satellite image processing and analysis

    NASA Astrophysics Data System (ADS)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool that contains a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named the Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], a part of whose functionality can be found in our system. The learning tool also contains other modules, besides executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material and user feedback. Students can gain an understanding of Optical and SAR remotely sensed images through discussion of basic principles, supported by a structured procedure for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. Users can download results after performing experiments.

  13. A Simple Tool for the Design and Analysis of Multiple-Reflector Antennas in a Multi-Disciplinary Environment

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.; Borgioli, Andrea

    2000-01-01

    The process of designing and analyzing a multiple-reflector system has traditionally been time-intensive, requiring large amounts of both computational and human time. At many frequencies, a discrete approximation of the radiation integral may be used to model the system. The code which implements this physical optics (PO) algorithm was developed at the Jet Propulsion Laboratory. It analyzes systems of antennas in pairs, and for each pair, the analysis can be computationally time-consuming. Additionally, the antennas must be described using a local coordinate system for each antenna, which makes it difficult to integrate the design into a multi-disciplinary framework in which there is traditionally one global coordinate system, even before considering deforming the antenna as prescribed by external structural and/or thermal factors. Finally, setting up the code to correctly analyze all the antenna pairs in the system can take a fair amount of time, and introduces possible human error. The use of parallel computing to reduce the computational time required for the analysis of a given pair of antennas has been previously discussed. This paper focuses on the other problems mentioned above. It will present a methodology and examples of use of an automated tool that performs the analysis of a complete multiple-reflector system in an integrated multi-disciplinary environment (including CAD modeling, and structural and thermal analysis) at the click of a button. This tool, named MOD Tool (Millimeter-wave Optics Design Tool), has been designed and implemented as a distributed tool, with a client that runs almost identically on Unix, Mac, and Windows platforms, and a server that runs primarily on a Unix workstation and can interact with parallel supercomputers with simple instruction from the user interacting with the client.

  14. Building a protein name dictionary from full text: a machine learning term extraction approach.

    PubMed

    Shi, Lei; Campagne, Fabien

    2005-04-07

    The majority of information in the biological literature resides in full text articles, instead of abstracts. Yet, abstracts remain the focus of many publicly available literature data mining tools. Most literature mining tools rely on pre-existing lexicons of biological names, often extracted from curated gene or protein databases. This is a limitation, because such databases have low coverage of the many name variants which are used to refer to biological entities in the literature. We present an approach to recognize named entities in full text. The approach collects high frequency terms in an article, and uses support vector machines (SVM) to identify biological entity names. It is also computationally efficient and robust to noise commonly found in full text material. We use the method to create a protein name dictionary from a set of 80,528 full text articles. Only 8.3% of the names in this dictionary match SwissProt description lines. We assess the quality of the dictionary by studying its protein name recognition performance in full text. This dictionary term lookup method compares favourably to other published methods, supporting the significance of our direct extraction approach. The method is strong in recognizing name variants not found in SwissProt.

  15. Building a protein name dictionary from full text: a machine learning term extraction approach

    PubMed Central

    Shi, Lei; Campagne, Fabien

    2005-01-01

    Background The majority of information in the biological literature resides in full text articles, instead of abstracts. Yet, abstracts remain the focus of many publicly available literature data mining tools. Most literature mining tools rely on pre-existing lexicons of biological names, often extracted from curated gene or protein databases. This is a limitation, because such databases have low coverage of the many name variants which are used to refer to biological entities in the literature. Results We present an approach to recognize named entities in full text. The approach collects high frequency terms in an article, and uses support vector machines (SVM) to identify biological entity names. It is also computationally efficient and robust to noise commonly found in full text material. We use the method to create a protein name dictionary from a set of 80,528 full text articles. Only 8.3% of the names in this dictionary match SwissProt description lines. We assess the quality of the dictionary by studying its protein name recognition performance in full text. Conclusion This dictionary term lookup method compares favourably to other published methods, supporting the significance of our direct extraction approach. The method is strong in recognizing name variants not found in SwissProt. PMID:15817129
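
    A minimal sketch of the SVM step described above, using scikit-learn: candidate terms are represented by simple surface features (hypothetical ones chosen here for illustration; the paper's actual feature set is richer) and classified as protein names or not.

        from sklearn.svm import LinearSVC
        import numpy as np

        def features(term):
            # Illustrative surface features only; not the feature set used in the paper.
            return [
                len(term),
                sum(c.isupper() for c in term),
                sum(c.isdigit() for c in term),
                int('-' in term),
            ]

        train_terms  = ["p53", "BRCA1", "kinase assay", "cell culture", "NF-kB", "the results"]
        train_labels = [1, 1, 0, 0, 1, 0]          # 1 = protein name, 0 = other term

        clf = LinearSVC()
        clf.fit(np.array([features(t) for t in train_terms]), train_labels)
        print(clf.predict(np.array([features("CDK2"), features("western blot")])))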

  16. Translocation as a Conservation Tool for Restoring Insular Avifauna

    DTIC Science & Technology

    2011-11-01

    University of Missouri, Fisheries and Wildlife (Dr. Dylan Kesler). One approach to conservation includes establishing new communities of threatened species on islands where they did not ...

  17. The Development and Validation of a Rapid Assessment Tool of Primary Care in China

    PubMed Central

    Mei, Jie; Liang, Yuan; Shi, LeiYu; Zhao, JingGe; Wang, YuTan; Kuang, Li

    2016-01-01

    Introduction. With Chinese health care reform increasingly emphasizing the importance of primary care, the need for a tool to evaluate primary care performance and service delivery is clear. This study presents a methodology for a rapid assessment of primary care organizations and service delivery in China. Methods. The study translated and adapted the Primary Care Assessment Tool-Adult Edition (PCAT-AE) into a Chinese version to measure core dimensions of primary care, namely, first contact, continuity, comprehensiveness, and coordination. A cross-sectional survey was conducted to assess the validity and reliability of the Chinese Rapid Primary Care Assessment Tool (CR-PCAT). Eight community health centers in Guangdong province were selected to participate in the survey. Results. A total of 1465 effective samples were included for data analysis. Eight items were eliminated following principal component analysis and reliability testing. The principal component analysis extracted five multiple-item scales (first contact utilization, first contact accessibility, ongoing care, comprehensiveness, and coordination). The tests of scaling assumptions were largely met. Conclusion. The standard psychometric evaluation indicates that the scales have achieved relatively good reliability and validity. The CR-PCAT provides a rapid and reliable measure of four core dimensions of primary care, which could be applied in various scenarios. PMID:26885509

  18. PARPs database: A LIMS system for protein-protein interaction data mining or laboratory information management system

    PubMed Central

    Droit, Arnaud; Hunter, Joanna M; Rouleau, Michèle; Ethier, Chantal; Picard-Cloutier, Aude; Bourgais, David; Poirier, Guy G

    2007-01-01

    Background In the "post-genome" era, mass spectrometry (MS) has become an important method for the analysis of proteins and the rapid advancement of this technique, in combination with other proteomics methods, results in an increasing amount of proteome data. This data must be archived and analysed using specialized bioinformatics tools. Description We herein describe "PARPs database," a data analysis and management pipeline for liquid chromatography tandem mass spectrometry (LC-MS/MS) proteomics. PARPs database is a web-based tool whose features include experiment annotation, protein database searching, protein sequence management, as well as data-mining of the peptides and proteins identified. Conclusion Using this pipeline, we have successfully identified several interactions of biological significance between PARP-1 and other proteins, namely RFC-1, 2, 3, 4 and 5. PMID:18093328

  19. A 2-year study of Gram stain competency assessment in 40 clinical laboratories.

    PubMed

    Goodyear, Nancy; Kim, Sara; Reeves, Mary; Astion, Michael L

    2006-01-01

    We used a computer-based competency assessment tool for Gram stain interpretation to assess the performance of 278 laboratory staff from 40 laboratories on 40 multiple-choice questions. We report test reliability, mean scores, median, item difficulty, discrimination, and analysis of the highest- and lowest-scoring questions. The questions were reliable (KR-20 coefficient, 0.80). Overall mean score was 88% (range, 63%-98%). When categorized by cell type, the means were host cells, 93%; other cells (eg, yeast), 92%; gram-positive, 90%; and gram-negative, 88%. When categorized by type of interpretation, the means were other (eg, underdecolorization), 92%; identify by structure (eg, bacterial morphologic features), 91%; and identify by name (eg, genus and species), 87%. Of the 6 highest-scoring questions (mean scores ≥ 99%), 5 were identify by structure and 1 was identify by name. Of the 6 lowest-scoring questions (mean scores < 75%), 5 were gram-negative and 1 was host cells. By type of interpretation, 2 were identify by structure and 4 were identify by name. Computer-based Gram stain competency assessment examinations are reliable. Our analysis helps laboratories identify areas for continuing education in Gram stain interpretation and will direct future revisions of the tests.
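
    For reference, the KR-20 reliability coefficient cited above can be computed from a binary item-response matrix as in the following sketch (illustrative synthetic data, not the study's).

        import numpy as np

        def kr20(responses):
            """KR-20 reliability for a 0/1 item-response matrix (rows = examinees, cols = items)."""
            k = responses.shape[1]
            p = responses.mean(axis=0)                       # proportion correct per item
            q = 1.0 - p
            total_var = responses.sum(axis=1).var(ddof=1)    # variance of examinees' total scores
            return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

        # Synthetic example: 6 examinees answering 5 questions.
        data = np.array([
            [1, 1, 1, 0, 1],
            [1, 0, 1, 1, 1],
            [0, 1, 0, 0, 1],
            [1, 1, 1, 1, 1],
            [0, 0, 1, 0, 0],
            [1, 1, 0, 1, 1],
        ])
        print(round(kr20(data), 2))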

  20. Liver Rapid Reference Set Application: Kevin Qu-Quest (2011) — EDRN Public Portal

    Cancer.gov

    We propose to evaluate the performance of a novel serum biomarker panel for early detection of hepatocellular carcinoma (HCC). This panel is based on markers from the ubiquitin-proteasome system (UPS) in combination with the existing known HCC biomarkers, namely, alpha-fetoprotein (AFP), AFP-L3%, and des-gamma-carboxy prothrombin (DCP). To this end, we applied multivariate logistic regression analysis to optimize this biomarker algorithm tool.
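
    A minimal sketch of the multivariate logistic regression step, using scikit-learn and entirely synthetic marker values (AFP, AFP-L3%, DCP plus a hypothetical UPS marker); it only illustrates the modelling approach, not the proposed panel itself.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 200
        # Columns: AFP, AFP-L3%, DCP, hypothetical UPS marker (all synthetic values).
        X = rng.lognormal(mean=1.0, sigma=0.5, size=(n, 4))
        y = (X @ np.array([0.4, 0.3, 0.5, 0.8]) + rng.normal(0, 1, n) > 4.0).astype(int)  # synthetic HCC labels

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print("coefficients:", model.coef_.round(2))
        print("predicted risk of first sample:", model.predict_proba(X[:1])[0, 1].round(3))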

  1. Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora

    DTIC Science & Technology

    2001-01-01

    ... monolingual dictionary-derived list of canonical roots would resolve ambiguity regarding which is the appropriate target. Many of the errors are ... system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity ... corpora has tended to focus on their use in translation model training for MT rather than on monolingual applications. One exception is bilingual parsing.

  2. Annotation-based inference of transporter function.

    PubMed

    Lee, Thomas J; Paulsen, Ian; Karp, Peter

    2008-07-01

    We present a method for inferring and constructing transport reactions for transporter proteins based primarily on the analysis of the names of individual proteins in the genome annotation of an organism. Transport reactions are declarative descriptions of transporter activities and thus can be manipulated computationally, unlike free-text protein names. Once transporter activities are encoded as transport reactions, a number of computational analyses are possible, including database queries by transporter activity; inclusion of transporters into an automatically generated metabolic-map diagram that can be painted with omics data to aid in their interpretation; detection of anomalies in the metabolic and transport networks, such as substrates that are transported into the cell but are not inputs to any metabolic reaction or pathway; and comparative analyses of the transport capabilities of different organisms. On randomly selected organisms, the method achieves precision and recall rates of 0.93 and 0.90, respectively, in identifying transporter proteins by name within the complete genome. The method obtains 67.5% accuracy in predicting complete transport reactions; if allowance is made for predictions that are overly general yet not incorrect, reaction prediction accuracy is 82.5%. The method is implemented as part of PathoLogic, the inference component of the Pathway Tools software. Pathway Tools, including source code, is freely available to researchers at non-commercial institutions; a fee applies to commercial institutions. Supplementary data are available at Bioinformatics online.
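
    The following is a highly simplified, hypothetical sketch of name-driven inference of the kind described above: it pattern-matches transporter names for a substrate and emits a declarative transport "reaction". It is not the PathoLogic implementation.

        import re

        # Hypothetical name patterns mapping annotation text to a transported substrate.
        PATTERNS = [
            (re.compile(r"(\w+) ABC transporter", re.I), "ATP-driven import"),
            (re.compile(r"(\w+) permease", re.I), "facilitated transport"),
            (re.compile(r"(\w+)[ /-]proton symporter", re.I), "proton symport"),
        ]

        def infer_transport_reaction(protein_name):
            for pattern, mechanism in PATTERNS:
                m = pattern.search(protein_name)
                if m:
                    substrate = m.group(1).lower()
                    return f"{substrate}[out] -> {substrate}[in]  ({mechanism})"
            return None

        print(infer_transport_reaction("maltose ABC transporter, permease subunit"))
        print(infer_transport_reaction("lactose/proton symporter LacY"))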

  3. DICOM index tracker enterprise: advanced system for enterprise-wide quality assurance and patient safety monitoring

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Pavlicek, William; Panda, Anshuman; Langer, Steve G.; Morin, Richard; Fetterly, Kenneth A.; Paden, Robert; Hanson, James; Wu, Lin-Wei; Wu, Teresa

    2015-03-01

    DICOM Index Tracker (DIT) is an integrated platform that harvests the rich information available from Digital Imaging and Communications in Medicine (DICOM) to improve quality assurance in radiology practices. It is designed to capture and maintain longitudinal, patient-specific exam indices of interest for all diagnostic and procedural uses of imaging modalities. Thus, it effectively serves as a quality assurance and patient safety monitoring tool. The foundation of DIT is an intelligent database system which stores the information accepted and parsed via a DICOM receiver and parser. The database system enables basic dosimetry analysis. The success of the DIT implementation at Mayo Clinic Arizona motivated deployment at the enterprise level, which requires significant improvements. First, for a geographically distributed multi-site implementation, one bottleneck is communication (network) delay; another is the scalability of the DICOM parser to handle the large volume of exams from different sites. To address these issues, the DICOM receiver and parser are separated and decentralized by site. Second, a notable challenge for enterprise-wide Quality Assurance (QA) is the great diversity of manufacturers, modalities, and software versions; as a solution, DIT Enterprise provides standardization tools for device naming, protocol naming, and physician naming across sites. Third, advanced analytic engines are implemented online to support proactive QA in DIT Enterprise.
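
    As a rough illustration of the harvesting step (not DIT's actual code), the sketch below uses the pydicom library to pull a few patient- and device-related attributes from a DICOM header for later indexing; the file path and attribute selection are hypothetical.

        import pydicom

        def extract_indices(path):
            """Read a DICOM header and return a few attributes a QA index might store."""
            ds = pydicom.dcmread(path, stop_before_pixels=True)   # header only, skip pixel data
            return {
                "patient_id": ds.get("PatientID", ""),
                "modality": ds.get("Modality", ""),
                "manufacturer": ds.get("Manufacturer", ""),
                "station_name": ds.get("StationName", ""),
                "study_date": ds.get("StudyDate", ""),
            }

        # Usage (hypothetical file path):
        #   print(extract_indices("example_ct_slice.dcm"))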

  4. The taxonomic name resolution service: an online tool for automated standardization of plant names

    PubMed Central

    2013-01-01

    Background The digitization of biodiversity data is leading to the widespread application of taxon names that are superfluous, ambiguous or incorrect, resulting in mismatched records and inflated species numbers. The ultimate consequences of misspelled names and bad taxonomy are erroneous scientific conclusions and faulty policy decisions. The lack of tools for correcting this ‘names problem’ has become a fundamental obstacle to integrating disparate data sources and advancing the progress of biodiversity science. Results The TNRS, or Taxonomic Name Resolution Service, is an online application for automated and user-supervised standardization of plant scientific names. The TNRS builds upon and extends existing open-source applications for name parsing and fuzzy matching. Names are standardized against multiple reference taxonomies, including the Missouri Botanical Garden's Tropicos database. Capable of processing thousands of names in a single operation, the TNRS parses and corrects misspelled names and authorities, standardizes variant spellings, and converts nomenclatural synonyms to accepted names. Family names can be included to increase match accuracy and resolve many types of homonyms. Partial matching of higher taxa combined with extraction of annotations, accession numbers and morphospecies allows the TNRS to standardize taxonomy across a broad range of active and legacy datasets. Conclusions We show how the TNRS can resolve many forms of taxonomic semantic heterogeneity, correct spelling errors and eliminate spurious names. As a result, the TNRS can aid the integration of disparate biological datasets. Although the TNRS was developed to aid in standardizing plant names, its underlying algorithms and design can be extended to all organisms and nomenclatural codes. The TNRS is accessible via a web interface at http://tnrs.iplantcollaborative.org/ and as a RESTful web service and application programming interface. Source code is available at https://github.com/iPlantCollaborativeOpenSource/TNRS/. PMID:23324024
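
    A toy sketch of the fuzzy-matching idea that underlies name standardization services like this one (the TNRS itself uses more sophisticated parsing and multiple reference taxonomies); the reference list here is hypothetical.

        import difflib

        # Hypothetical reference taxonomy (accepted names).
        REFERENCE = ["Quercus robur", "Quercus rubra", "Acer saccharum", "Pinus sylvestris"]

        def standardize(name, cutoff=0.8):
            """Return the closest accepted name, or None if nothing is close enough."""
            matches = difflib.get_close_matches(name, REFERENCE, n=1, cutoff=cutoff)
            return matches[0] if matches else None

        print(standardize("Quercus robor"))    # misspelling -> 'Quercus robur'
        print(standardize("Acer sacharum"))    # misspelling -> 'Acer saccharum'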

  5. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
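
    A minimal sketch of the core idea: a Boolean network can be treated as a polynomial dynamical system over GF(2), and its steady states (fixed-point attractors) found by solving f(x) = x. The tiny three-node network below is hypothetical, and the search is brute force, which ADAM's algebraic methods are designed to avoid.

        from itertools import product

        # Hypothetical 3-node Boolean network: the update rule maps the current state to the next state.
        def update(state):
            x1, x2, x3 = state
            return (
                x2 & x3,        # x1' = x2 AND x3   (polynomial x2*x3 over GF(2))
                x1 | x3,        # x2' = x1 OR x3    (x1 + x3 + x1*x3 over GF(2))
                x1,             # x3' = x1
            )

        # Fixed-point attractors are states with f(x) = x.
        fixed_points = [s for s in product((0, 1), repeat=3) if update(s) == s]
        print(fixed_points)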

  6. Neointellectuals: Willing Tools on a Veritable Crusade

    ERIC Educational Resources Information Center

    Kovacs, Philip

    2008-01-01

    As both Maxine Greene and Paulo Freire would remind that obstacles must be named before being transcended, the author writes then with the intention of naming, and he names with the hope of transcending. For the purposes of this paper, transcendence means the replacement of a homogenizing public school system--one that indoctrinates children…

  7. SNAD: Sequence Name Annotation-based Designer.

    PubMed

    Sidorov, Igor A; Reshetov, Denis A; Gorbalenya, Alexander E

    2009-08-14

    A growing diversity of biological data is tagged with unique identifiers (UIDs) associated with polynucleotides and proteins to ensure efficient computer-mediated data storage, maintenance, and processing. These identifiers, which are not informative for most people, are often substituted by biologically meaningful names in various presentations to facilitate utilization and dissemination of sequence-based knowledge. This substitution is commonly done manually, which can be a tedious exercise prone to mistakes and omissions. Here we introduce SNAD (Sequence Name Annotation-based Designer), which mediates automatic conversion of sequence UIDs (associated with a multiple alignment or phylogenetic tree, or supplied as a plain text list) into biologically meaningful names and acronyms. This conversion is directed by precompiled or user-defined templates that exploit the wealth of annotation available in cognate entries of external databases. Using examples, we demonstrate how this tool can be used to generate names for practical purposes, particularly in virology. A tool for controllable annotation-based conversion of sequence UIDs into biologically meaningful names and acronyms has been developed and placed into service, fostering links between the quality of sequence annotation and the efficiency of communication and knowledge dissemination among researchers.
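
    A schematic, hypothetical illustration of the substitution SNAD automates: mapping sequence UIDs in a Newick tree string to human-readable names drawn from an annotation table (here a hard-coded dictionary rather than a live database query).

        import re

        # Hypothetical annotation table: UID -> biologically meaningful name.
        ANNOTATION = {
            "NC_045512": "SARS-CoV-2 Wuhan-Hu-1",
            "AY274119": "SARS-CoV Tor2",
            "NC_004718": "SARS-CoV reference",
        }

        def rename_tree(newick):
            """Replace every known UID in a Newick string with its annotated name."""
            return re.sub(
                r"[A-Z]{1,2}_?\d+",
                lambda m: ANNOTATION.get(m.group(0), m.group(0)).replace(" ", "_"),
                newick,
            )

        print(rename_tree("((NC_045512:0.1,AY274119:0.2):0.05,NC_004718:0.3);"))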

  8. A Study of Topic and Topic Change in Conversational Threads

    DTIC Science & Technology

    2009-09-01

    ... unigrams. By converting documents to vector space representations, the tools of geometry and algebra can be applied, and questions of difference ...

  9. Quality Assurance System. Volume 1. Report (Technology Transfer Program)

    DTIC Science & Technology

    1980-03-03

    Performing organization: Naval Surface Warfare Center, Code 2230 - Design Integration Tools, Building 192, Room 128, 9500 MacArthur Blvd, Bethesda, MD 20817-5700. Volume I (Findings and Conclusions) opens with an introduction covering purpose, scope, and organization.

  10. An attenuation of the 'normal' category effect in patients with Alzheimer's disease: a review and bootstrap analysis.

    PubMed

    Moreno-Martínez, F Javier; Laws, Keith R

    2007-03-01

    There is a consensus that Alzheimer's disease (AD) impairs semantic information, with one of the first markers being anomia i.e. an impaired ability to name items. Doubts remain, however, about whether this naming impairment differentially affects items from the living and nonliving knowledge domains. Most studies have reported an impairment for naming living things (e.g. animals or plants), a minority have found an impairment for nonliving things (e.g. tools or vehicles), and some have found no category-specific effect. A survey of the literature reveals that this lack of agreement may reflect a failure to control for intrinsic variables (such as familiarity) and the problems associated with ceiling effects in the control data. Investigating picture naming in 32 AD patients and 34 elderly controls, we used bootstrap techniques to deal with the abnormal distributions in both groups. Our analyses revealed the previously reported impairment for naming living things in AD patients and that this persisted even when intrinsic variables were covaried; however, covarying control performance eliminated the significant category effect. Indeed, the within-group comparison of living and nonliving naming revealed a larger effect size for controls than patients. We conclude that the category effect in Alzheimer's disease is no larger than is expected in the healthy brain and may even represent a small diminution of the normal profile.

  11. Opportunity Arm and Gagarin Rock, Sol 405

    NASA Image and Video Library

    2011-04-08

    NASA Mars Exploration Rover Opportunity used its rock abrasion tool on a rock informally named Gagarin, leaving a circular mark. At the end of the rover arm, the tool turret is positioned with the rock abrasion tool pointing upward.

  12. A proposal to rationalize within-species plant virus nomenclature: benefits and implications of inaction.

    PubMed

    Jones, Roger A C; Kehoe, Monica A

    2016-07-01

    Current approaches used to name within-species, plant virus phylogenetic groups are often misleading and illogical. They involve names based on biological properties, sequence differences and geographical, country or place-association designations, or any combination of these. This type of nomenclature is becoming increasingly unsustainable as numbers of sequences of the same virus from new host species and different parts of the world increase. Moreover, this increase is accelerating as world trade and agriculture expand, and climate change progresses. Serious consequences for virus research and disease management might arise from incorrect assumptions made when current within-species phylogenetic group names incorrectly identify properties of group members. This could result in development of molecular tools that incorrectly target dangerous virus strains, potentially leading to unjustified impediments to international trade or failure to prevent such strains being introduced to countries, regions or continents formerly free of them. Dangerous strains might be missed or misdiagnosed by diagnostic laboratories and monitoring programs, and new cultivars with incorrect strain-specific resistances released. Incorrect deductions are possible during phylogenetic analysis of plant virus sequences and errors from strain misidentification during molecular and biological virus research activities. A nomenclature system for within-species plant virus phylogenetic group names is needed which avoids such problems. We suggest replacing all other naming approaches with Latinized numerals, restricting biologically based names only to biological strains and removing geographically based names altogether. Our recommendations have implications for biosecurity authorities, diagnostic laboratories, disease-management programs, plant breeders and researchers.

  13. SUPIN: A Computational Tool for Supersonic Inlet Design

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2016-01-01

    A computational tool named SUPIN is being developed to design and analyze the aerodynamic performance of supersonic inlets. The inlet types available include the axisymmetric pitot, three-dimensional pitot, axisymmetric outward-turning, two-dimensional single-duct, two-dimensional bifurcated-duct, and streamline-traced inlets. The aerodynamic performance is characterized by the flow rates, total pressure recovery, and drag. The inlet flow-field is divided into parts to provide a framework for the geometry and aerodynamic modeling. Each part of the inlet is defined in terms of geometric factors. The low-fidelity aerodynamic analysis and design methods are based on analytic, empirical, and numerical methods which provide for quick design and analysis. SUPIN provides inlet geometry in the form of coordinates, surface angles, and cross-sectional areas. SUPIN can generate inlet surface grids and three-dimensional, structured volume grids for use with higher-fidelity computational fluid dynamics (CFD) analysis. Capabilities highlighted in this paper include the design and analysis of streamline-traced external-compression inlets, modeling of porous bleed, and the design and analysis of mixed-compression inlets. CFD analyses are used to verify the SUPIN results.
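
    As a flavor of the analytic relations such low-fidelity inlet methods rely on (this particular snippet is illustrative and is not taken from SUPIN), the total pressure recovery across a normal shock can be computed directly from the freestream Mach number:

        def normal_shock_pt_recovery(mach, gamma=1.4):
            """Total pressure ratio pt2/pt1 across a normal shock (standard gas-dynamic relation)."""
            a = ((gamma + 1.0) * mach**2) / ((gamma - 1.0) * mach**2 + 2.0)
            b = (gamma + 1.0) / (2.0 * gamma * mach**2 - (gamma - 1.0))
            return a ** (gamma / (gamma - 1.0)) * b ** (1.0 / (gamma - 1.0))

        for m in (1.3, 1.6, 2.0):
            print(f"M = {m}: pt2/pt1 = {normal_shock_pt_recovery(m):.4f}")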

  14. PaintOmics 3: a web resource for the pathway analysis and visualization of multi-omics data.

    PubMed

    Hernández-de-Diego, Rafael; Tarazona, Sonia; Martínez-Mira, Carlos; Balzano-Nogueira, Leandro; Furió-Tarí, Pedro; Pappas, Georgios J; Conesa, Ana

    2018-05-25

    The increasing availability of multi-omic platforms poses new challenges to data analysis. Joint visualization of multi-omics data is instrumental in better understanding interconnections across molecular layers and in fully utilizing the multi-omic resources available to make biological discoveries. We present here PaintOmics 3, a web-based resource for the integrated visualization of multiple omic data types onto KEGG pathway diagrams. PaintOmics 3 combines server-end capabilities for data analysis with the potential of modern web resources for data visualization, providing researchers with a powerful framework for interactive exploration of their multi-omics information. Unlike other visualization tools, PaintOmics 3 covers a comprehensive pathway analysis workflow, including automatic feature name/identifier conversion, multi-layered feature matching, pathway enrichment, network analysis, interactive heatmaps, trend charts, and more. It accepts a wide variety of omic types, including transcriptomics, proteomics and metabolomics, as well as region-based approaches such as ATAC-seq or ChIP-seq data. The tool is freely available at www.paintomics.org.
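
    One of the steps listed above, pathway enrichment, is commonly a hypergeometric test; the sketch below shows that calculation with SciPy on made-up counts (it does not reproduce PaintOmics 3 code).

        from scipy.stats import hypergeom

        # Made-up counts: 12 of the user's 300 significant genes fall in a 40-gene pathway,
        # out of 20,000 annotated genes in total.
        M, n, N, k = 20000, 40, 300, 12

        # P(X >= k) under the hypergeometric null.
        p_value = hypergeom.sf(k - 1, M, n, N)
        print(f"enrichment p-value = {p_value:.3g}")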

  15. Atmospheric Delay Reduction Using KARAT for GPS Analysis and Implications for VLBI

    NASA Technical Reports Server (NTRS)

    Ichikawa, Ryuichi; Hobiger, Thomas; Koyama, Yasuhiro; Kondo, Tetsuro

    2010-01-01

    We have been developing a state-of-the-art tool to estimate atmospheric path delays by raytracing through mesoscale analysis (MANAL) data, which is operationally used for numerical weather prediction by the Japan Meteorological Agency (JMA). The tools, which we have named KAshima RAytracing Tools (KARAT), are capable of calculating total slant delays and ray-bending angles considering real atmospheric phenomena. KARAT can estimate atmospheric slant delays using an analytical 2-D ray-propagation model by Thayer and a 3-D Eikonal solver. We compared PPP solutions using KARAT with those using the Global Mapping Function (GMF) and Vienna Mapping Function 1 (VMF1) for GPS sites of GEONET (GPS Earth Observation Network System) operated by the Geographical Survey Institute (GSI). In our comparison, 57 GEONET stations were processed for the year 2008. The KARAT solutions are slightly better than the solutions using VMF1 and GMF with a linear gradient model for horizontal and height positions. Our results imply that KARAT is a useful tool for the efficient reduction of atmospheric path delays in radio-based space geodetic techniques such as GNSS and VLBI.

  16. Academic health sciences library Website navigation: an analysis of forty-one Websites and their navigation tools.

    PubMed

    Brower, Stewart M

    2004-10-01

    The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included whether bibliographic databases were organized alphabetically by title or by subject area, and whether links to specifically named databases were provided. Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites.

  17. EpiHosp: A web-based visualization tool enabling the exploratory analysis of complications of implantable medical devices from a nationwide hospital database.

    PubMed

    Ficheur, Grégoire; Ferreira Careira, Lionel; Beuscart, Régis; Chazard, Emmanuel

    2015-01-01

    Administrative data can be used for the surveillance of the outcomes of implantable medical devices (IMDs). The objective of this work is to build a web-based tool allowing for an exploratory analysis of time-dependent events that may occur after the implantation of an IMD. This tool should enable a pharmacoepidemiologist to explore on the fly the relationship between a given IMD and a potential outcome. The tool mines the French nationwide database of inpatient stays from 2008 to 2013. The data are preprocessed in order to optimize the queries. A web tool is developed in PHP, MySQL and Javascript. The user selects one IMD or a group of IMDs from a tree, and can filter the results by year and hospital name. Four result pages describe the selected inpatient stays: (1) a temporal and demographic description, (2) a description of the geographical location of the hospital, (3) a description of the geographical place of residence of the patient and (4) a table showing the rehospitalization reasons in decreasing order of frequency. The user can then select one readmission reason and dynamically display the probability of readmission by means of a Kaplan-Meier curve with confidence intervals. This tool makes it possible to dynamically monitor the occurrence of time-dependent complications of IMDs.
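
    A small sketch of the final step, a Kaplan-Meier readmission curve, using the lifelines package on synthetic follow-up data (illustrative only; EpiHosp itself is a PHP/MySQL/Javascript tool).

        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(42)
        n = 150
        durations = rng.exponential(scale=400, size=n)   # days from implantation to event/censoring (synthetic)
        observed = rng.random(n) < 0.6                   # True = readmission observed, False = censored

        kmf = KaplanMeierFitter()
        kmf.fit(durations, event_observed=observed, label="readmission for hypothetical IMD")
        print(kmf.survival_function_.head())
        # kmf.plot_survival_function() would draw the curve with confidence intervals.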

  18. The Strength of Ethical Matrixes as a Tool for Normative Analysis Related to Technological Choices: The Case of Geological Disposal for Radioactive Waste.

    PubMed

    Kermisch, Céline; Depaus, Christophe

    2018-02-01

    The ethical matrix is a participatory tool designed to structure ethical reflection about the design, the introduction, the development or the use of technologies. Its collective implementation, in the context of participatory decision-making, has shown its potential usefulness. By contrast, its implementation by a single researcher has not been thoroughly analyzed. The aim of this paper is precisely to assess the strength of ethical matrixes implemented by a single researcher as a tool for conceptual normative analysis related to technological choices. Therefore, the ethical matrix framework is applied to the management of high-level radioactive waste, more specifically to retrievable and non-retrievable geological disposal. The results of this analysis show that the usefulness of ethical matrixes is twofold and that they provide a valuable input for further decision-making. Indeed, by using ethical matrixes, implicit ethically relevant issues were revealed, namely issues of equity associated with health impacts and differences between close and remote future generations regarding ethical impacts. Moreover, the ethical matrix framework was helpful in synthesizing and comparing systematically the ethical impacts of the technologies under scrutiny, and hence in highlighting the potential ethical conflicts.

  19. Climate Change Adaptation Practices in Various Countries

    NASA Astrophysics Data System (ADS)

    Tanik, A.; Tekten, D.

    2017-08-01

    The paper reviews recent EU strategies in general and examines the sectoral adaptation practices and action plans of seven countries, namely Germany, France, Spain, Italy, Denmark, the USA, and Kenya from the African continent. Although the countries' action plans share some similarities in their sectoral analyses, each country appears to create its own sectoral analysis in accordance with the specific nature of its problems. Within this context, the EU green and white papers on adaptation to climate change, the EU strategy on climate change, the EU 2020 climate change targets, and EU adaptation support tools are investigated.

  20. SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX-80

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.; Watson, Brian C.

    1992-11-01

    The finite element method has proven to be an invaluable tool for the analysis and design of complex, high-performance systems, such as bladed-disk assemblies in aircraft turbofan engines. However, as the problem size increases, the computation time required by conventional computers can be prohibitively high. Parallel processing computers provide the means to overcome these computation time limits. This report summarizes the results of a research activity aimed at providing a finite element capability for analyzing turbomachinery bladed-disk assemblies in a vector/parallel processing environment. A special-purpose code, named with the acronym SAPNEW, has been developed to perform static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements. SAPNEW provides a stand-alone capability for static and eigen analysis on the Alliant FX/80, a parallel processing computer. A preprocessor, named with the acronym NTOS, has been developed to accept NASTRAN input decks and convert them to the SAPNEW format to make SAPNEW more readily usable by researchers at NASA Lewis Research Center.
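
    As a compact reminder of what the eigen analysis step solves (not SAPNEW code), the structural eigenvalue problem K x = lambda M x can be handled for a small model with SciPy:

        import numpy as np
        from scipy.linalg import eigh

        # Tiny 3-DOF example: hypothetical stiffness (K) and mass (M) matrices.
        K = np.array([[ 4., -2.,  0.],
                      [-2.,  4., -2.],
                      [ 0., -2.,  2.]])
        M = np.diag([1.0, 1.0, 0.5])

        eigvals, eigvecs = eigh(K, M)            # solves K x = lambda M x
        freqs = np.sqrt(eigvals) / (2 * np.pi)   # natural frequencies if K, M are in SI units
        print(np.round(freqs, 3))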

  1. Maintaining the Database for Information Object Analysis, Intent, Dissemination and Enhancement (IOAIDE) and the US Army Research Laboratory Campus Sensor Network (ARL CSN)

    DTIC Science & Technology

    2017-01-01

    US Army Research Laboratory, 2800 Powder Mill Road, Adelphi, MD 20783-1138; report ARL-TR-7921. Keywords: server database, structured query language, information objects, instructions, maintenance, cursor-on-target events, unattended ground sensors. Contents include computer and software development tool requirements and database maintenance.

  2. Role of capillary electrophoresis in the fight against doping in sports.

    PubMed

    Harrison, Christopher R

    2013-08-06

    At present the role of capillary electrophoresis in the detection of doping agents in athletes is, for the most part, nonexistent. More traditional techniques, namely gas and liquid chromatography with mass spectrometric detection, remain the gold standard of antidoping tests. This Feature will investigate the in-roads that capillary electrophoresis has made, the limitations that the technique suffers from, and where the technique may grow into being a key tool for antidoping analysis.

  3. A Geographic and Functional Network Flow Analysis Tool

    DTIC Science & Technology

    2014-06-01

    ... a context for the games to be run. By definition, a dystopia is "a place where bad things happen", a fitting name for a place where the scenarios are ... design identified by the Topology Zoo and the BRITE topology generator (Byers et al. 2014; Bowden 2013). Both projects aim to accurately map the network ... Knight, Simon; Nguyen, Hung; Falkner, Nickolas; Roughan, Matthew. 2014. "The Internet Topology Zoo." Accessed March ...

  4. A Quantitative Approach to Analyzing Architectures in the Presence of Uncertainty

    DTIC Science & Technology

    2009-07-01

    ... hence requires appropriate tool support. Architecture Modeling: To facilitate this form of modeling, the modeling language must allow the archi... can (a) capture the steady-state behavior of the model, (b) allow for the analysis of some property in the context of a specific state or condition ...

  5. FIESTA ROC: A new finite element analysis program for solar cell simulation

    NASA Technical Reports Server (NTRS)

    Clark, Ralph O.

    1991-01-01

    The Finite Element Semiconductor Three-dimensional Analyzer by Ralph O. Clark (FIESTA ROC) is a computational tool for investigating in detail the performance of arbitrary solar cell structures. As its name indicates, it uses the finite element technique to solve the fundamental semiconductor equations in the cell. It may be used for predicting the performance (thereby dictating the design parameters) of a proposed cell or for investigating the limiting factors in an established design.

  6. Bridging the Gap between RF and Optical Patch Antenna Analysis via the Cavity Model.

    PubMed

    Unal, G S; Aksun, M I

    2015-11-02

    Although optical antennas with a variety of shapes and for a variety of applications have been proposed and studied, they are still in their infancy compared to their radio frequency (rf) counterparts. Optical antennas have mainly utilized the geometrical attributes of rf antennas rather than the analysis tools that have been the source of intuition for antenna engineers in rf. This study intends to narrow the gap of experience and intuition in the design of optical patch antennas by introducing an easy-to-understand and easy-to-implement analysis tool in rf, namely, the cavity model, into the optical regime. The importance of this approach is not only its simplicity in understanding and implementation but also its applicability to a broad class of patch antennas and, more importantly, its ability to provide the intuition needed to predict the outcome without going through the trial-and-error simulations with no or little intuitive guidance by the user.
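
    For orientation, the rf cavity model gives closed-form estimates such as the dominant-mode resonant frequency of a rectangular patch; the sketch below uses the textbook approximation f ≈ c / (2 L sqrt(eps_r)), ignoring fringing corrections, and is not drawn from this paper.

        def patch_resonant_frequency(length_m, eps_r):
            """Approximate TM10 resonant frequency of a rectangular patch (fringing neglected)."""
            c = 3.0e8                                   # speed of light, m/s
            return c / (2.0 * length_m * eps_r ** 0.5)

        # Hypothetical rf-scale patch: 30 mm long on a substrate with eps_r = 2.2.
        f = patch_resonant_frequency(0.030, 2.2)
        print(f"resonant frequency ~ {f / 1e9:.2f} GHz")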

  7. TNA4OptFlux – a software tool for the analysis of strain optimization strategies

    PubMed Central

    2013-01-01

    Background Rational approaches for Metabolic Engineering (ME) deal with the identification of modifications that improve the microbes’ production capabilities of target compounds. One of the major challenges created by strain optimization algorithms used in these ME problems is the interpretation of the changes that lead to a given overproduction. Often, a single gene knockout induces changes in the fluxes of several reactions, as compared with the wild-type, and it is therefore difficult to evaluate the physiological differences of the in silico mutant. This is aggravated by the fact that genome-scale models per se are difficult to visualize, given the high number of reactions and metabolites involved. Findings We introduce a software tool, the Topological Network Analysis for OptFlux (TNA4OptFlux), a plug-in which adds to the open-source ME platform OptFlux the capability of creating and performing topological analysis over metabolic networks. One of the tool’s major advantages is the possibility of using these tools in the analysis and comparison of simulated phenotypes, namely those coming from the results of strain optimization algorithms. We illustrate the capabilities of the tool by using it to aid the interpretation of two E. coli strains designed in OptFlux for the overproduction of succinate and glycine. Conclusions Besides adding new functionalities to the OptFlux software tool regarding topological analysis, TNA4OptFlux methods greatly facilitate the interpretation of non-intuitive ME strategies by automating the comparison between perturbed and non-perturbed metabolic networks. The plug-in is available on the web site http://www.optflux.org, together with extensive documentation. PMID:23641878
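
    In the spirit of the topological comparison TNA4OptFlux automates (this snippet is an illustration with a toy graph, not the plug-in's own code), one can compare node centralities between a wild-type and a knockout metabolic network with NetworkX:

        import networkx as nx

        # Toy reaction-metabolite graph for a hypothetical wild-type network.
        wild_type = nx.Graph([("glc", "R1"), ("R1", "g6p"), ("g6p", "R2"),
                              ("R2", "pyr"), ("pyr", "R3"), ("R3", "succ")])

        # Knockout: remove reaction R2 and recompute centralities.
        knockout = wild_type.copy()
        knockout.remove_node("R2")

        wt_bc = nx.betweenness_centrality(wild_type)
        ko_bc = nx.betweenness_centrality(knockout)

        for node in sorted(wt_bc):
            print(f"{node:5s} betweenness: wild-type {wt_bc[node]:.2f} -> knockout {ko_bc.get(node, 0.0):.2f}")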

  8. Space Science Cloud: a Virtual Space Science Research Platform Based on Cloud Model

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoyan; Tong, Jizhou; Zou, Ziming

    Through independent and cooperative science missions, the Strategic Pioneer Program (SPP) on Space Science, the new space science initiative in China approved by CAS and implemented by the National Space Science Center (NSSC), is dedicated to seeking new discoveries and breakthroughs in space science and thus deepening our understanding of the universe and planet Earth. Within this program, in order to support the operations of space science missions and satisfy the e-Science needs of related research activities, NSSC is developing a virtual space science research platform based on a cloud model, namely the Space Science Cloud (SSC). To support mission demonstration, SSC integrates an interactive satellite orbit design tool, a satellite structure and payload layout design tool, a payload observation coverage analysis tool, etc., to help scientists analyze and verify space science mission designs. Another important function of SSC is supporting mission operations, which run through the space satellite data pipelines. Mission operators can acquire and process observation data, then distribute the data products to other systems or publish the data and archives with the services of SSC. In addition, SSC provides useful data, tools, and models for space researchers. Several databases in the field of space science are integrated, and an efficient retrieval system is being developed. Common tools for data visualization, deep processing (e.g., smoothing and filtering tools), analysis (e.g., an FFT analysis tool and a minimum variance analysis tool), and mining (e.g., a proton event correlation analysis tool) are also integrated to help researchers better utilize the data. The space weather models on SSC include a magnetic storm forecast model, a multi-station middle and upper atmosphere climate model, a solar energetic particle propagation model, and so on. All the above-mentioned services are based on the e-Science infrastructure of CAS, e.g., cloud storage and cloud computing, and SSC provides its users with self-service storage and computing resources. At present, the prototyping of SSC is underway, and the platform is expected to be put into trial operation in August 2014. We hope that as SSC develops, our vision of Digital Space may come true someday.

  9. A precise goniometer/tensiometer using a low cost single-board computer

    NASA Astrophysics Data System (ADS)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest for small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer, based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then we validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successfully testing for various combinations of liquids and solid surfaces, we concluded that our prototype device would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena compared to the commercial solutions.

  10. Complexity analysis and mathematical tools towards the modelling of living systems.

    PubMed

    Bellomo, N; Bianca, C; Delitala, M

    2009-09-01

    This paper is a review and critical analysis of the mathematical kinetic theory of active particles applied to the modelling of large living systems made up of interacting entities. The first part of the paper is focused on a general presentation of the mathematical tools of the kinetic theory of active particles. The second part provides a review of a variety of mathematical models in life sciences, namely complex social systems, opinion formation, evolution of epidemics with virus mutations, and vehicular traffic, crowds and swarms. All the applications are technically related to the mathematical structures reviewed in the first part of the paper. The overall contents are based on the concept that living systems, unlike the inert matter, have the ability to develop behaviour geared towards their survival, or simply to improve the quality of their life. In some cases, the behaviour evolves in time and generates destructive and/or proliferative events.

  11. Metabolome searcher: a high throughput tool for metabolite identification and metabolic pathway mapping directly from mass spectrometry and using genome restriction.

    PubMed

    Dhanasekaran, A Ranjitha; Pearson, Jon L; Ganesan, Balasubramanian; Weimer, Bart C

    2015-02-25

    Mass spectrometric analysis of microbial metabolism provides a long list of possible compounds. Restricting the identification of the possible compounds to those produced by the specific organism would benefit the identification process. Currently, identification of mass spectrometry (MS) data is commonly done using empirically derived compound databases. Unfortunately, most databases contain relatively few compounds, leaving long lists of unidentified molecules. Incorporating genome-encoded metabolism enables MS output identification that may not be included in databases. Using an organism's genome as a database restricts metabolite identification to only those compounds that the organism can produce. To address the challenge of metabolomic analysis from MS data, a web-based application to directly search genome-constructed metabolic databases was developed. The user query returns a genome-restricted list of possible compound identifications along with the putative metabolic pathways based on the name, formula, SMILES structure, and the compound mass as defined by the user. Multiple queries can be done simultaneously by submitting a text file created by the user or obtained from the MS analysis software. The user can also provide parameters specific to the experiment's MS analysis conditions, such as mass deviation, adducts, and detection mode during the query so as to provide additional levels of evidence to produce the tentative identification. The query results are provided as an HTML page and downloadable text file of possible compounds that are restricted to a specific genome. Hyperlinks provided in the HTML file connect the user to the curated metabolic databases housed in ProCyc, a Pathway Tools platform, as well as the KEGG Pathway database for visualization and metabolic pathway analysis. Metabolome Searcher, a web-based tool, facilitates putative compound identification of MS output based on genome-restricted metabolic capability. This enables researchers to rapidly extend the possible identifications of large data sets for metabolites that are not in compound databases. Putative compound names with their associated metabolic pathways from metabolomics data sets are returned to the user for additional biological interpretation and visualization. This novel approach enables compound identification by restricting the possible masses to those encoded in the genome.
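
    A simplified, hypothetical sketch of the core matching step: comparing an observed (neutral) mass against a genome-restricted compound list within a ppm tolerance. The compound masses below are illustrative monoisotopic values.

        # Hypothetical genome-restricted compound list: name -> monoisotopic mass (Da).
        COMPOUNDS = {
            "glucose": 180.0634,
            "lactate": 90.0317,
            "citrate": 192.0270,
        }

        def match_mass(observed_mass, tolerance_ppm=10.0):
            """Return compounds whose mass lies within the given ppm tolerance of the observation."""
            hits = []
            for name, mass in COMPOUNDS.items():
                ppm_error = abs(observed_mass - mass) / mass * 1e6
                if ppm_error <= tolerance_ppm:
                    hits.append((name, round(ppm_error, 2)))
            return hits

        print(match_mass(180.0640))   # close to glucose
        print(match_mass(150.0000))   # no match in this toy list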

  12. a New Tool for Facilitating the Retrieval and Recording of the Place Name Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Bozzini, C.; Conedera, M.; Krebs, P.

    2013-07-01

    Traditional place names (toponyms) represent the immaterial cultural heritage of past land uses, particular characteristics of the territory, landscape-related events or inhabitants, as well as the related cultural and religious background. In European countries, where the cultural landscape has a very long history, this heritage is particularly considerable. Often, most of the detailed knowledge about traditional place names and their precise localization is unwritten and familiar only to elderly local residents who experienced the former rural civilization. In the near future this important heritage will be seriously threatened by the physical disappearance of its living custodians. One of the major problems that one has to face when trying to trace and document the knowledge related to place names and their localization is to translate the memory and former landscape experiences of the respondents into maps and structured records. In this contribution we present a new tool, based on the monoplotting principle and developed ad hoc, that enables the synchronization of terrestrial oblique landscape pictures with the corresponding digital elevation model. The local respondents are then simply asked to indicate the place name localization on historical landscape pictures they are familiar with. The tool automatically gives back the corresponding world coordinates, which makes the interviewing process faster and smoother, as well as more motivating and less stressful for the informants.

  13. Assessing the Liquidity of Firms: Robust Neural Network Regression as an Alternative to the Current Ratio

    NASA Astrophysics Data System (ADS)

    de Andrés, Javier; Landajo, Manuel; Lorca, Pedro; Labra, Jose; Ordóñez, Patricia

    Artificial neural networks have proven to be useful tools for solving financial analysis problems such as financial distress prediction and audit risk assessment. In this paper we focus on the performance of robust (least absolute deviation-based) neural networks on measuring liquidity of firms. The problem of learning the bivariate relationship between the components (namely, current liabilities and current assets) of the so-called current ratio is analyzed, and the predictive performance of several modelling paradigms (namely, linear and log-linear regressions, classical ratios and neural networks) is compared. An empirical analysis is conducted on a representative data base from the Spanish economy. Results indicate that classical ratio models are largely inadequate as a realistic description of the studied relationship, especially when used for predictive purposes. In a number of cases, especially when the analyzed firms are microenterprises, the linear specification is improved by considering the flexible non-linear structures provided by neural networks.
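
    To show what the least-absolute-deviation criterion means in practice (a linear toy version, not the paper's neural network), one can fit current assets as a function of current liabilities by minimizing the sum of absolute residuals:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        liabilities = rng.uniform(10, 500, size=100)                       # synthetic firm data
        assets = 1.3 * liabilities + rng.laplace(scale=25, size=100)       # heavy-tailed noise

        def sum_abs_residuals(params):
            intercept, slope = params
            return np.abs(assets - (intercept + slope * liabilities)).sum()

        fit = minimize(sum_abs_residuals, x0=[0.0, 1.0], method="Nelder-Mead")
        print("LAD fit: intercept %.2f, slope %.2f" % tuple(fit.x))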

  14. Influence of friction stir welding parameters on titanium-aluminum heterogeneous lap joining configuration

    NASA Astrophysics Data System (ADS)

    Picot, Florent; Gueydan, Antoine; Hug, Éric

    2017-10-01

    Lap joining configuration for the Friction Stir Welding process is a methodology mostly dedicated to heterogeneous bonding. This welding technology was applied to join pure titanium with pure aluminum by varying the rotation speed and the travel speed of the tool. Regardless of the process parameters, it was found that the maximum strength of the junction remains almost constant. Microstructural observations by means of Scanning Electron Microscopy and Energy Dispersive Spectrometry analysis make it possible to describe the interfacial joint and reveal asymmetric Cold Lap Defects on the sides of the junction. Chemical analysis shows the presence of a single intermetallic compound at the interface, identified as TiAl3. This compound is responsible for crack propagation in the junction during mechanical loading. The original version of this article supplied to AIP Publishing contained an accidental inversion of the authors' names. An updated version of this article, with the authors' names formatted correctly, was published on 20 October 2017.

  15. Ambiguity and variability of database and software names in bioinformatics.

    PubMed

    Duck, Geraint; Kovacevic, Aleksandar; Robertson, David L; Stevens, Robert; Nenadic, Goran

    2015-01-01

    There are numerous options available to achieve various tasks in bioinformatics, but until recently, there were no tools that could systematically identify mentions of databases and tools within the literature. In this paper we explore the variability and ambiguity of database and software name mentions and compare dictionary and machine learning approaches to their identification. Through the development and analysis of a corpus of 60 full-text documents manually annotated at the mention level, we report high variability and ambiguity in database and software mentions. On a test set of 25 full-text documents, a baseline dictionary look-up achieved an F-score of 46 %, highlighting not only variability and ambiguity but also the extensive number of new resources introduced. A machine learning approach achieved an F-score of 63 % (with precision of 74 %) and 70 % (with precision of 83 %) for strict and lenient matching respectively. We characterise the issues with various mention types and propose potential ways of capturing additional database and software mentions in the literature. Our analyses show that identification of mentions of databases and tools is a challenging task that cannot be achieved by relying on current manually-curated resource repositories. Although machine learning shows improvement and promise (primarily in precision), more contextual information needs to be taken into account to achieve a good degree of accuracy.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suresh, Niraj; Stephens, Sean A.; Adams, Lexor

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere, as well as in processes with important implications for climate change and forest management. Quantitative size information on roots in their native environment is invaluable for studying root growth and environmental processes involving the plant. X-ray computed tomography (XCT) has been demonstrated to be an effective tool for in situ root scanning and analysis. Our group at the Environmental Molecular Sciences Laboratory (EMSL) has developed an XCT-based tool to image and quantitatively analyze plant root structures in their native soil environment. XCT data collected on a Prairie dropseed (Sporobolus heterolepis) specimen was used to visualize its root structure. The open-source software packages RooTrak and DDV were employed to segment the root from the soil and to calculate its isosurface, respectively. Our own computer script, named 3DRoot-SV, was developed and used to calculate root volume and surface area from a triangular mesh. The process, utilizing a unique combination of tools from imaging to quantitative root analysis, including the 3DRoot-SV computer script, is described.

  17. The Effects of "Handwriting without Tears®" on the Handwriting Skills of Appropriate Size, Form, and Tool for a Four Year-Old Boy with a Developmental Delay

    ERIC Educational Resources Information Center

    Meyers, Colleen; McLaughlin, T. F.; Derby, Mark; Weber, Kimberly P.; Robison, Milena

    2015-01-01

    The ability to write one's own name legibly is a critical lifelong skill for academic success. The purpose of the present study was to evaluate the effects of the Handwriting Without Tears® program on teaching a four year-old how to write his first name using proper size, form, and tool. The participant was a four year-old boy in a self-contained…

  18. Neo: an object model for handling electrophysiology data in multiple formats

    PubMed Central

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386
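
    For readers unfamiliar with the object model, a minimal usage sketch of the Python implementation is given below. The container classes (Block, Segment, AnalogSignal) and the quantities-based units follow Neo's documented usage pattern, but exact constructor signatures can differ between versions, so treat this as an assumption-laden illustration rather than canonical code.

        import numpy as np
        import quantities as pq
        import neo

        # One recording session holding a single trial with one analog trace.
        block = neo.Block(name="example session")
        segment = neo.Segment(name="trial 1")
        block.segments.append(segment)

        # A synthetic 2 s, single-channel membrane-potential trace sampled at 10 kHz.
        signal = neo.AnalogSignal(np.random.randn(20000, 1),
                                  units="mV",
                                  sampling_rate=10 * pq.kHz,
                                  name="Vm")
        segment.analogsignals.append(signal)

        # Because Neo only represents data, any analysis or visualization tool built
        # on top of it can consume this structure, and the IO modules can write it
        # out to one of the supported file formats.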

  19. Neo: an object model for handling electrophysiology data in multiple formats.

    PubMed

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.

  20. IBiSA_Tools: A Computational Toolkit for Ion-Binding State Analysis in Molecular Dynamics Trajectories of Ion Channels.

    PubMed

    Kasahara, Kota; Kinoshita, Kengo

    2016-01-01

    Ion conduction mechanisms of ion channels are a long-standing conundrum. Although the molecular dynamics (MD) method has been extensively used to simulate ion conduction dynamics at the atomic level, analysis and interpretation of MD results are not straightforward due to the complexity of the dynamics. In our previous reports, we proposed an analytical method called ion-binding state analysis to scrutinize and summarize ion conduction mechanisms by taking advantage of a variety of analytical protocols, e.g., complex network analysis, sequence alignment, and hierarchical clustering. This approach effectively revealed the ion conduction mechanisms and their dependence on the conditions, i.e., ion concentration and membrane voltage. Here, we present an easy-to-use computational toolkit for ion-binding state analysis, called IBiSA_tools. This toolkit consists of a C++ program and a series of Python and R scripts. From the trajectory file of MD simulations and a structure file, users can generate several images and statistics of ion conduction processes. A complex network named the ion-binding state graph is generated in a standard graph format (graph modeling language; GML), which can be visualized by standard network analyzers such as Cytoscape. As a tutorial, a trajectory of a 50 ns MD simulation of the Kv1.2 channel is also distributed with the toolkit. Users can trace the entire process of ion-binding state analysis step by step. The novel method for analysis of ion conduction mechanisms of ion channels can be easily used by means of IBiSA_tools. This software is distributed under an open source license at the following URL: http://www.ritsumei.ac.jp/~ktkshr/ibisa_tools/.
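
    The ion-binding state graph itself is produced by IBiSA_tools, but the GML output it describes can be illustrated generically. A hypothetical sketch (node labels and transition counts are invented, not taken from the Kv1.2 tutorial) using networkx, whose GML files load directly into Cytoscape:

        import networkx as nx

        # Hypothetical ion-binding states: which channel sites are occupied by ions.
        # Nodes are binding states; edge weights count transitions observed in an
        # MD trajectory (illustrative numbers only).
        g = nx.DiGraph()
        g.add_edge("S4:K+ | S2:K+", "S3:K+ | S2:K+", weight=42)
        g.add_edge("S3:K+ | S2:K+", "S3:K+ | S1:K+", weight=17)
        g.add_edge("S3:K+ | S1:K+", "S3:K+ | S0:K+", weight=9)

        # Write in graph modeling language (GML); Cytoscape and other standard
        # network analyzers can import this file directly.
        nx.write_gml(g, "ion_binding_state_graph.gml")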

  1. Role-play as an educational tool in medication communication skills: Students' perspectives.

    PubMed

    Lavanya, S H; Kalpana, L; Veena, R M; Bharath Kumar, V D

    2016-10-01

    Medication communication skills are vital aspects of patient care that may influence treatment outcomes. However, the traditional pharmacology curriculum deals with imparting factual information, with little emphasis on patient communication. The current study aims to explore students' perceptions of role-play as an educational tool for acquiring communication skills and to ascertain the need for role-play in their future clinical practice. This questionnaire-based study was conducted among 2nd professional MBBS students (n = 122). A set of six training cases, focusing on major communication issues related to medication prescription in pharmacology, was developed for peer role-play sessions. Structured scripts with specific emphasis on prescription medication communication, together with checklists for feedback, were developed. Prevalidated questionnaires measured the quantitative aspects of the role-plays in relation to their relevance as a teaching-learning tool, the perceived benefits of the sessions, and their importance for future use. Data analysis was performed using descriptive statistics. The role-play concept was well appreciated and considered an effective means of acquiring medication communication skills. The structured feedback by peers and faculty was well received by many. Over 90% of the students reported immense confidence in communicating therapy details, namely drug name, purpose, mechanism, dosing details, and precautions. The majority reported better retention of pharmacology concepts and preferred more such sessions. Most students consider peer role-play an indispensable tool for acquiring effective communication skills regarding drug therapy. By virtue of providing experiential learning opportunities and its feasibility of implementation, role-play sessions justify inclusion in undergraduate medical curricula.

  2. Towards a collaborative, global infrastructure for biodiversity assessment

    PubMed Central

    Guralnick, Robert P; Hill, Andrew W; Lane, Meredith

    2007-01-01

    Biodiversity data are rapidly becoming available over the Internet in common formats that promote sharing and exchange. Currently, these data are somewhat problematic, primarily with regard to geographic and taxonomic accuracy, for use in ecological research, natural resources management and conservation decision-making. However, web-based georeferencing tools that utilize best practices and gazetteer databases can be employed to improve geographic data. Taxonomic data quality can be improved through web-enabled valid taxon names databases and services, as well as more efficient mechanisms to return systematic research results and taxonomic misidentification rates back to the biodiversity community. Both of these are under construction. A separate but related challenge will be developing web-based visualization and analysis tools for tracking biodiversity change. Our aim was to discuss how such tools, combined with data of enhanced quality, will help transform today's portals to raw biodiversity data into nexuses of collaborative creation and sharing of biodiversity knowledge. PMID:17594421

  3. Giving students the run of sprinting models

    NASA Astrophysics Data System (ADS)

    Heck, André; Ellermeijer, Ton

    2009-11-01

    A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
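
    The smoothing and numerical-differentiation step that COACH applies to tracked video data can be sketched with standard scientific Python tools; the Savitzky-Golay filter below merely stands in for whatever smoother COACH actually uses, and the frame rate and motion model are invented for illustration:

        import numpy as np
        from scipy.signal import savgol_filter

        fps = 50.0                      # assumed video frame rate
        t = np.arange(0, 4, 1 / fps)    # 4 s of sprinting
        # Noisy tracked hip position along the track (metres), synthetic data.
        x = 4.0 * t + 0.3 * np.sin(3 * t) + np.random.normal(0, 0.02, t.size)

        # Smooth and differentiate in one pass: deriv=1 gives the first derivative,
        # delta is the sample spacing in seconds.
        v = savgol_filter(x, window_length=11, polyorder=3, deriv=1, delta=1 / fps)
        a = savgol_filter(x, window_length=11, polyorder=3, deriv=2, delta=1 / fps)

        print(f"peak speed ~ {v.max():.2f} m/s, peak acceleration ~ {a.max():.2f} m/s^2")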

  4. The Schisto Track: A System for Gathering and Monitoring Epidemiological Surveys by Connecting Geographical Information Systems in Real Time

    PubMed Central

    2014-01-01

    Background Using the Android platform as a notification instrument for diseases and disorders forms a new alternative for computerization of epidemiological studies. Objective The objective of our study was to construct a tool for gathering epidemiological data on schistosomiasis using the Android platform. Methods The developed application (app), named the Schisto Track, is a tool for data capture and analysis that was designed to meet the needs of a traditional epidemiological survey. An initial version of the app was finished and tested in both real situations and simulations for epidemiological surveys. Results The app proved to be a tool capable of automation of activities, with data organization and standardization, easy data recovery (to enable interfacing with other systems), and totally modular architecture. Conclusions The proposed Schisto Track is in line with worldwide trends toward use of smartphones with the Android platform for modeling epidemiological scenarios. PMID:25099881

  5. Objective Data Assessment (ODA) Methods as Nutritional Assessment Tools.

    PubMed

    Hamada, Yasuhiro

    2015-01-01

    Nutritional screening and assessment should be a standard of care for all patients because nutritional management plays an important role in clinical practice. However, there is no gold standard for the diagnosis of malnutrition or undernutrition, although a large number of nutritional screening and assessment tools have been developed. Nutritional screening and assessment tools are classified into two categories, namely, subjective global assessment (SGA) and objective data assessment (ODA). SGA assesses nutritional status based on the features of medical history and physical examination. On the other hand, ODA consists of objective data provided from various analyses, such as anthropometry, bioimpedance analysis (BIA), dual-energy X-ray absorptiometry (DEXA), computed tomography (CT), magnetic resonance imaging (MRI), laboratory tests, and functional tests. This review highlights knowledge on the performance of ODA methods for the assessment of nutritional status in clinical practice. J. Med. Invest. 62: 119-122, August, 2015.

  6. Ada (Trade Name) Foundation Technology. Volume 4. Software Requirements for WIS (WWMCCS (World Wide Military Command and Control System) Information System) Text Processing Prototypes

    DTIC Science & Technology

    1986-12-01

    graphics: The package allows a character set which can be defined by users, giving the picture for a character by designating its pixels. Such characters... type fonts and user-oriented "help" messages tailored to the operations being performed and user expertise. In general, critical design issues... other volumes include command language, software design, description and analysis tools, database management systems, operating systems; planning and

  7. Web-based platform for collaborative medical imaging research

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.

    2015-03-01

    Medical imaging research depends fundamentally on the availability of large image collections, image processing and analysis algorithms, hardware, and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread across research centers, and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available on the Internet through a web browser. Moreover, the processing history and all intermediate results are stored and displayed in automatically generated web pages for each object in the research project or clinical study. It requires no installation or configuration on the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.

  8. Category specific dysnomia after thalamic infarction: a case-control study.

    PubMed

    Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli

    2005-01-01

    Category-specific naming impairment has been described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause a general semantic naming deficit, and only rarely a category-specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to those of five controls. In naming visual stimuli the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (average 70.31 ± 15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28 ± 9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category-specific profile of response and naming errors to visual and tactile, but not auditory, stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but also as a central integrator of different stages of perceptual and semantic processing.

  9. Comparing Pearson, Spearman and Hoeffding's D measure for gene expression association analysis.

    PubMed

    Fujita, André; Sato, João Ricardo; Demasi, Marcos Angelo Almeida; Sogayar, Mari Cleide; Ferreira, Carlos Eduardo; Miyano, Satoru

    2009-08-01

    DNA microarrays have become a powerful tool to describe gene expression profiles associated with different cellular states, various phenotypes and responses to drugs and other extra- or intra-cellular perturbations. In order to cluster co-expressed genes and/or to construct regulatory networks, definition of distance or similarity between measured gene expression data is usually required, the most common choices being Pearson's and Spearman's correlations. Here, we evaluate these two methods and also compare them with a third one, namely Hoeffding's D measure, which is used to infer nonlinear and non-monotonic associations, i.e. independence in a general sense. By comparing three different variable association approaches, namely Pearson's correlation, Spearman's correlation and Hoeffding's D measure, we aimed at assessing the most appropriate one for each purpose. Using simulations, we demonstrate that the Hoeffding's D measure outperforms Pearson's and Spearman's approaches in identifying nonlinear associations. Our results demonstrate that Hoeffding's D measure is less sensitive to outliers and is a more powerful tool to identify nonlinear and non-monotonic associations. We have also applied Hoeffding's D measure in order to identify new putative genes associated with tp53. Therefore, we propose the Hoeffding's D measure to identify nonlinear associations between gene expression profiles.
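
    A small sketch illustrates the central point: both Pearson and Spearman correlations can miss a strong but non-monotonic dependence, which is exactly the case where a general measure such as Hoeffding's D (available, for example, as hoeffd in R's Hmisc package) remains informative. The data below are synthetic, not from the study:

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 500)
        y = x ** 2 + rng.normal(0, 0.05, 500)   # strong but non-monotonic dependence

        r_pearson, _ = pearsonr(x, y)
        r_spearman, _ = spearmanr(x, y)

        # Both correlations are near zero even though y is almost a function of x,
        # which is the situation where Hoeffding's D is designed to stay informative.
        print(f"Pearson r = {r_pearson:.3f}, Spearman rho = {r_spearman:.3f}")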

  10. Cerebral and Sinus Vein Thrombosis

    MedlinePlus


  11. Screening for Peripheral Artery Disease

    MedlinePlus


  12. "gnparser": a powerful parser for scientific names based on Parsing Expression Grammar.

    PubMed

    Mozzherin, Dmitry Y; Myltsev, Alexander A; Patterson, David J

    2017-05-26

    Scientific names in biology act as universal links. They allow us to cross-reference information about organisms globally. However, variations in the spelling of scientific names greatly diminish their ability to interconnect data. Such variations may include abbreviations, annotations, misspellings, etc. Authorship is a part of a scientific name and may also differ significantly. To match all possible variations of a name we need to divide them into their elements and classify each element according to its role. We refer to this as 'parsing' the name. Parsing categorizes a name's elements into those that are stable and those that are prone to change. Names are matched first by combining them according to their stable elements. Matches are then refined by examining their varying elements. This two-stage process dramatically improves the number and quality of matches. It is especially useful for automatic data exchange within the context of "Big Data" in biology. We introduce the Global Names Parser (gnparser). It is a Java tool, written in the Scala language (a language for the Java Virtual Machine), for parsing scientific names. It is based on a Parsing Expression Grammar. The parser can be applied to scientific names of any complexity. It assigns a semantic meaning (such as genus name, species epithet, rank, year of publication, authorship, annotations, etc.) to all elements of a name. It is able to work with nested structures, as in the names of hybrids. gnparser performs with ≈99% accuracy and processes 30 million name-strings/hour per CPU thread. The gnparser library is compatible with Scala, Java, R, Jython, and JRuby. The parser can be used as a command-line application, a socket server, a web app, or a RESTful HTTP service. It is released under an open-source MIT license. Global Names Parser (gnparser) is a fast, high-precision tool for biodiversity informaticians and biologists working with large numbers of scientific names. It can replace expensive and error-prone manual parsing and standardization of scientific names in many situations, and can quickly enhance the interoperability of distributed biological information.

  13. To Name or Not to Name: The Effect of Changing Author Gender on Peer Review

    ERIC Educational Resources Information Center

    Borsuk, Robyn M.; Aarssen, Lonnie W.; Budden, Amber E.; Koricheva, Julia; Leimu, Roosa; Tregenza, Tom; Lortie, Christopher J.

    2009-01-01

    The peer review model is one of the most important tools used in science to assess the relative merit of research. We manipulated a published article to reflect one of the following four author designations: female, male, initial, and no name provided. This article was then reviewed by referees of both genders at various stages of scientific…

  14. Explorative visual analytics on interval-based genomic data and their metadata.

    PubMed

    Jalili, Vahid; Matteucci, Matteo; Masseroli, Marco; Ceri, Stefano

    2017-12-04

    With the spread of public repositories of processed NGS data, the availability of user-friendly and effective tools for data exploration, analysis and visualization is becoming very relevant. These tools enable interactive analytics, an exploratory approach for the seamless "sense-making" of data through on-the-fly integration of analysis and visualization phases, suggested not only for evaluating processing results, but also for designing and adapting NGS data analysis pipelines. This paper presents abstractions for supporting the early analysis of processed NGS data and their implementation in an associated tool, named GenoMetric Space Explorer (GeMSE). This tool serves the needs of the GenoMetric Query Language, an innovative cloud-based system for computing complex queries over heterogeneous processed data. It can also be used starting from any text files in standard BED, BroadPeak, NarrowPeak, GTF, or general tab-delimited format containing numerical features of genomic regions; metadata can be provided as text files in tab-delimited attribute-value format. GeMSE allows interactive analytics, consisting of on-the-fly cycling among steps of data exploration, analysis and visualization that help biologists and bioinformaticians make sense of heterogeneous genomic datasets. By means of explorative interaction support, users can trace past activities and quickly recover their results, seamlessly going backward and forward in the analysis steps and comparative visualizations of heatmaps. GeMSE's effective application and practical usefulness are demonstrated through significant use cases of biological interest. GeMSE is available at http://www.bioinformatics.deib.polimi.it/GeMSE/, and its source code is available at https://github.com/Genometric/GeMSE under a GPLv3 open-source license.
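
    The input formats mentioned above are plain tab-delimited files, which makes them easy to inspect outside GeMSE as well. A hypothetical sketch with pandas; the file names and column layout are assumptions for illustration, not GeMSE's own code:

        import pandas as pd

        # A minimal BED-like region file: chromosome, start, end, plus one numeric
        # feature column (e.g., a peak score). Column names are illustrative.
        regions = pd.read_csv("sample_peaks.bed", sep="\t", header=None,
                              names=["chrom", "start", "end", "score"])

        # Tab-delimited attribute-value metadata, as described for GeMSE.
        metadata = pd.read_csv("sample_metadata.tsv", sep="\t", header=None,
                               names=["attribute", "value"])

        print(regions.head(), metadata.head(), sep="\n\n")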

  15. Localized Overheating Phenomena and Optimization of Spark-Plasma Sintering Tooling Design

    PubMed Central

    Giuntini, Diletta; Olevsky, Eugene A.; Garcia-Cardona, Cristina; Maximenko, Andrey L.; Yurlova, Maria S.; Haines, Christopher D.; Martin, Darold G.; Kapoor, Deepak

    2013-01-01

    The present paper shows the application of a three-dimensional coupled electrical, thermal, mechanical finite element macro-scale modeling framework of Spark Plasma Sintering (SPS) to an actual problem of SPS tooling overheating, encountered during SPS experimentation. The overheating phenomenon is analyzed by varying the geometry of the tooling that exhibits the problem, namely by modeling various tooling configurations involving sequences of disk-shape spacers with step-wise increasing radii. The analysis is conducted by means of finite element simulations, intended to obtain temperature spatial distributions in the graphite press-forms, including punches, dies, and spacers; to identify the temperature peaks and their respective timing, and to propose a more suitable SPS tooling configuration with the avoidance of the overheating as a final aim. Electric currents-based Joule heating, heat transfer, mechanical conditions, and densification are imbedded in the model, utilizing the finite-element software COMSOL™, which possesses a distinguishing ability of coupling multiple physics. Thereby the implementation of a finite element method applicable to a broad range of SPS procedures is carried out, together with the more specific optimization of the SPS tooling design when dealing with excessive heating phenomena. PMID:28811398

  16. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows

    PubMed Central

    O'Connor, Brian D.; Yuen, Denis; Chung, Vincent; Duncan, Andrew G.; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH). PMID:28344774

  17. The Dockstore: enabling modular, community-focused sharing of Docker-based genomics tools and workflows.

    PubMed

    O'Connor, Brian D; Yuen, Denis; Chung, Vincent; Duncan, Andrew G; Liu, Xiang Kun; Patricia, Janice; Paten, Benedict; Stein, Lincoln; Ferretti, Vincent

    2017-01-01

    As genomic datasets continue to grow, the feasibility of downloading data to a local organization and running analysis on a traditional compute environment is becoming increasingly problematic. Current large-scale projects, such as the ICGC PanCancer Analysis of Whole Genomes (PCAWG), the Data Platform for the U.S. Precision Medicine Initiative, and the NIH Big Data to Knowledge Center for Translational Genomics, are using cloud-based infrastructure to both host and perform analysis across large data sets. In PCAWG, over 5,800 whole human genomes were aligned and variant called across 14 cloud and HPC environments; the processed data was then made available on the cloud for further analysis and sharing. If run locally, an operation at this scale would have monopolized a typical academic data centre for many months, and would have presented major challenges for data storage and distribution. However, this scale is increasingly typical for genomics projects and necessitates a rethink of how analytical tools are packaged and moved to the data. For PCAWG, we embraced the use of highly portable Docker images for encapsulating and sharing complex alignment and variant calling workflows across highly variable environments. While successful, this endeavor revealed a limitation in Docker containers, namely the lack of a standardized way to describe and execute the tools encapsulated inside the container. As a result, we created the Dockstore ( https://dockstore.org), a project that brings together Docker images with standardized, machine-readable ways of describing and running the tools contained within. This service greatly improves the sharing and reuse of genomics tools and promotes interoperability with similar projects through emerging web service standards developed by the Global Alliance for Genomics and Health (GA4GH).

  18. GNormPlus: An Integrative Approach for Tagging Genes, Gene Families, and Protein Domains

    PubMed Central

    Lu, Zhiyong

    2015-01-01

    The automatic recognition of gene names and their associated database identifiers from biomedical text has been widely studied in recent years, as these tasks play an important role in many downstream text-mining applications. Despite significant previous research, only a small number of tools are publicly available and these tools are typically restricted to detecting only mention level gene names or only document level gene identifiers. In this work, we report GNormPlus: an end-to-end and open source system that handles both gene mention and identifier detection. We created a new corpus of 694 PubMed articles to support our development of GNormPlus, containing manual annotations for not only gene names and their identifiers, but also closely related concepts useful for gene name disambiguation, such as gene families and protein domains. GNormPlus integrates several advanced text-mining techniques, including SimConcept for resolving composite gene names. As a result, GNormPlus compares favorably to other state-of-the-art methods when evaluated on two widely used public benchmarking datasets, achieving 86.7% F1-score on the BioCreative II Gene Normalization task dataset and 50.1% F1-score on the BioCreative III Gene Normalization task dataset. The GNormPlus source code and its annotated corpus are freely available, and the results of applying GNormPlus to the entire PubMed are freely accessible through our web-based tool PubTator. PMID:26380306

  19. Trace-back and trace-forward tools developed ad hoc and used during the STEC O104:H4 outbreak 2011 in Germany and generic concepts for future outbreak situations.

    PubMed

    Weiser, Armin A; Gross, Stefan; Schielke, Anika; Wigger, Jan-Frederik; Ernert, Andrea; Adolphs, Julian; Fetsch, Alexandra; Müller-Graf, Christine; Käsbohrer, Annemarie; Mosbach-Schulz, Olaf; Appel, Bernd; Greiner, Matthias

    2013-03-01

    The Shiga toxin-producing Escherichia coli O104:H4 outbreak in Germany in 2011 required the development of appropriate tools in real time for tracing suspicious foods, namely salad ingredients, sprouts, and seeds, along the supply chain. Food commodities consumed at locations identified as the most probable sites of infection (outbreak clusters) were traced back in order to identify connections between different disease clusters via the supply chain of the foods. A newly developed relational database with integrated consistency and plausibility checks was used to collate these data for further analysis. Connections between suppliers, distributors, and producers were visualized in network graphs and geographic projections. Finally, this trace-back and trace-forward analysis led to the identification of sprouts produced by a horticultural farm in Lower Saxony as the vehicle for the pathogen, and a specific lot of fenugreek seeds imported from Egypt as the most likely source of contamination. Network graphs have proven to be a powerful tool for summarizing and communicating complex trade relationships to various stakeholders. The present article gives a detailed description of the newly developed tracing tools and recommendations for necessary requirements and improvements for future foodborne outbreak investigations.
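
    The trace-back idea, following delivery relationships upstream from several outbreak clusters until a common supplier appears, can be sketched as a directed-graph query. The nodes and edges below are invented for illustration and use networkx rather than the authors' database:

        import networkx as nx

        # Directed edges point along the delivery direction: producer -> ... -> cluster.
        g = nx.DiGraph()
        g.add_edges_from([
            ("seed importer", "sprout farm"),
            ("sprout farm", "distributor A"), ("sprout farm", "distributor B"),
            ("distributor A", "cluster 1"), ("distributor A", "cluster 2"),
            ("distributor B", "cluster 3"),
        ])

        clusters = ["cluster 1", "cluster 2", "cluster 3"]
        # Trace back: upstream nodes that can reach every outbreak cluster.
        common_sources = set.intersection(*(nx.ancestors(g, c) for c in clusters))
        print(common_sources)   # {'seed importer', 'sprout farm'}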

  20. PATIKA: an integrated visual environment for collaborative construction and analysis of cellular pathways.

    PubMed

    Demir, E; Babur, O; Dogrusoz, U; Gursoy, A; Nisanci, G; Cetin-Atalay, R; Ozturk, M

    2002-07-01

    The availability of the sequences of entire genomes shifts scientific curiosity towards the large-scale identification of genome function, as in genome studies. In the near future, data produced about cellular processes at the molecular level will accumulate at an accelerating rate as a result of proteomics studies. In this regard, it is essential to develop tools for storing, integrating, accessing, and analyzing these data effectively. We define an ontology for a comprehensive representation of cellular events. The ontology presented here enables integration of fragmented or incomplete pathway information and supports manipulation and incorporation of the stored data, as well as multiple levels of abstraction. Based on this ontology, we present the architecture of an integrated environment named Patika (Pathway Analysis Tool for Integration and Knowledge Acquisition). Patika is composed of a server-side, scalable, object-oriented database and client-side editors that provide an integrated, multi-user environment for visualizing and manipulating networks of cellular events. The tool features automated pathway layout, functional computation support, advanced querying and a user-friendly graphical interface. We expect that Patika will be a valuable tool for rapid knowledge acquisition, interpretation of microarray-generated large-scale data, disease gene identification, and drug development. A prototype of Patika is available upon request from the authors.

  1. Academic health sciences library Website navigation: an analysis of forty-one Websites and their navigation tools

    PubMed Central

    Brower, Stewart M.

    2004-01-01

    Background: The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Methods: Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included orientation of bibliographic databases alphabetically by title or by subject area and with links to specifically named databases. Results: Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. Conclusions: These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites. PMID:15494756

  2. ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra

    PubMed Central

    2011-01-01

    Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
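
    The conversion ADAM performs can be illustrated on a toy example: over the field with two elements, AND becomes multiplication, NOT x becomes x + 1, and x OR y becomes x + y + xy, so steady states are solutions of f(x) = x. The brute-force enumeration below is only for illustration; ADAM itself solves the corresponding polynomial systems with computer-algebra methods rather than enumerating states:

        from itertools import product

        # Toy 3-node network: x1' = x2, x2' = x1, x3' = x1 AND x3.
        # Over F2 the update functions are the polynomials x2, x1 and x1*x3 (mod 2).
        def step(state):
            x1, x2, x3 = state
            return (x2 % 2, x1 % 2, (x1 * x3) % 2)

        # Attractors of length one (steady states) are solutions of f(x) = x.
        fixed_points = [s for s in product((0, 1), repeat=3) if step(s) == s]
        print(fixed_points)   # [(0, 0, 0), (1, 1, 0), (1, 1, 1)]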

  3. EpiTools, A software suite for presurgical brain mapping in epilepsy: Intracerebral EEG.

    PubMed

    Medina Villalon, S; Paz, R; Roehri, N; Lagarde, S; Pizzo, F; Colombet, B; Bartolomei, F; Carron, R; Bénar, C-G

    2018-06-01

    In pharmacoresistant epilepsy, exploration with depth electrodes can be needed to precisely define the epileptogenic zone. Accurate localization of these electrodes is thus essential for the interpretation of stereotaxic EEG (SEEG) signals. As SEEG analysis increasingly relies on signal processing, it is crucial to link these results to the patient's anatomy. Our aims were thus to develop a suite of software tools, called "EpiTools", able to i) precisely and automatically localize the position of each SEEG contact and ii) display the results of signal analysis within each patient's anatomy. The first tool, GARDEL (GUI for Automatic Registration and Depth Electrode Localization), is able to automatically localize SEEG contacts and to label each contact according to a pre-specified nomenclature (for instance that of FreeSurfer or MarsAtlas). The second tool, 3Dviewer, enables visualization, in the 3D anatomy of the patient, of the origin of signal processing results such as biomarker rates, connectivity graphs or the Epileptogenicity Index. GARDEL was validated in 30 patients by clinicians and proved to be highly reliable in determining the actual location of contacts within the patient's individual anatomy. GARDEL is a fully automatic electrode localization tool needing limited user interaction (only for electrode naming or contact correction). The 3Dviewer is able to read signal processing results and to display them in relation to the patient's anatomy. EpiTools can help speed up the interpretation of SEEG data and improve its precision. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. D-VASim: an interactive virtual laboratory environment for the simulation and analysis of genetic circuits.

    PubMed

    Baig, Hasan; Madsen, Jan

    2017-01-15

    Simulation and behavioral analysis of genetic circuits is a standard approach to functional verification prior to physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model at runtime. Runtime interaction gives the user the feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (the Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation, and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Novel presentational approaches were developed for reporting network meta-analysis.

    PubMed

    Tan, Sze Huey; Cooper, Nicola J; Bujkiewicz, Sylwia; Welton, Nicky J; Caldwell, Deborah M; Sutton, Alexander J

    2014-06-01

    To present graphical tools for reporting network meta-analysis (NMA) results, aimed at increasing the accessibility, transparency, interpretability, and acceptability of NMA analyses. The key components of NMA results were identified based on recommendations by agencies such as the National Institute for Health and Care Excellence (United Kingdom). Three novel graphs were designed to amalgamate the identified components using familiar graphical tools such as bar, line, or pie charts and adhering to good graphical design principles. Three key components for presentation of NMA results were identified, namely relative effects and their uncertainty, probability of an intervention being best, and between-study heterogeneity. Two of the three graphs developed present results (for each pairwise comparison of interventions in the network) obtained from both NMA and standard pairwise meta-analysis for easy comparison. They also include options to display the probability of being best, ranking statistics, heterogeneity, and prediction intervals. The third graph presents rankings of interventions in terms of their effectiveness to enable clinicians to easily identify "top-ranking" interventions. The graphical tools presented can display results tailored to the research question of interest, and are targeted at a whole spectrum of users from the technical analyst to the nontechnical clinician. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. A Student Assessment Tool for Standardized Patient Simulations (SAT-SPS): Psychometric analysis.

    PubMed

    Castro-Yuste, Cristina; García-Cabanillas, María José; Rodríguez-Cornejo, María Jesús; Carnicer-Fuentes, Concepción; Paloma-Castro, Olga; Moreno-Corral, Luis Javier

    2018-05-01

    The evaluation of the level of clinical competence acquired by the student is a complex process that must meet various requirements to ensure its quality. The psychometric analysis of the data collected by the assessment tools used is a fundamental aspect of guaranteeing the student's competence level. The aim was to conduct a psychometric analysis of an instrument which assesses clinical competence in nursing students at simulation stations with standardized patients in OSCE-format tests. The construct of clinical competence was operationalized as a set of observable and measurable behaviors, measured by the newly created Student Assessment Tool for Standardized Patient Simulations (SAT-SPS), which comprised 27 items. The categories assigned to the items were 'incorrect or not performed' (0), 'acceptable' (1), and 'correct' (2). The participants were 499 nursing students. Data were collected by two independent observers during the assessment of the students' performance at a four-station OSCE with standardized patients. Descriptive statistics were used to summarize the variables. The difficulty levels and floor and ceiling effects were determined for each item. Reliability was analyzed using internal consistency and inter-observer reliability. The validity analysis was performed considering face validity, content and construct validity (through exploratory factor analysis), and criterion validity. Internal reliability and inter-observer reliability were higher than 0.80. The construct validity analysis suggested a three-factor model accounting for 37.1% of the variance. These three factors were named 'Nursing process', 'Communication skills', and 'Safe practice'. A significant correlation was found between the scores obtained and the students' grades in general, as well as with the grades obtained in subjects with clinical content. The assessment tool has proven to be sufficiently reliable and valid for the assessment of the clinical competence of nursing students using standardized patients. This tool has three main components: the nursing process, communication skills, and safety management. Copyright © 2018 Elsevier Ltd. All rights reserved.
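
    The internal-consistency figure reported above is typically Cronbach's alpha, which is straightforward to compute from a students-by-items score matrix. A generic sketch with made-up ratings on the 0-2 scale used by the instrument, not the study's data:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_students, n_items) matrix of item scores (here 0/1/2)."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var.sum() / total_var)

        # Made-up ratings of 6 students on 5 items scored 0-2.
        ratings = [[2, 1, 2, 2, 1],
                   [1, 1, 1, 2, 1],
                   [0, 0, 1, 0, 0],
                   [2, 2, 2, 2, 2],
                   [1, 0, 1, 1, 0],
                   [2, 2, 1, 2, 2]]
        print(round(cronbach_alpha(ratings), 2))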

  7. Validation Analysis of a Geriatric Dehydration Screening Tool in Community-Dwelling and Institutionalized Elderly People

    PubMed Central

    Rodrigues, Susana; Silva, Joana; Severo, Milton; Inácio, Cátia; Padrão, Patrícia; Lopes, Carla; Carvalho, Joana; do Carmo, Isabel; Moreira, Pedro

    2015-01-01

    Dehydration is common among elderly people. The aim of this study was to perform a validation analysis of a geriatric dehydration-screening tool (DST) in the assessment of hydration status in elderly people. This tool was based on the DST proposed by Vivanti et al., which is composed of 11 items (four physical signs of dehydration and seven questions about thirst sensation, pain and mobility), with four extra questions about drinking habits. The resulting questionnaire was evaluated in a convenience sample comprising institutionalized (n = 29) and community-dwelling (n = 74) elderly people. Urinary parameters were assessed (24-h urine osmolality and volume) and free water reserve (FWR) was calculated. Exploratory factor analysis was used to evaluate the scale’s dimensionality and Cronbach’s alpha was used to measure the reliability of each subscale. Construct validity was tested using linear regression to estimate the association between scores in each dimension and urinary parameters. Two factors emerged from factor analysis, which were named “Hydration Score” and “Pain Score”, and both subscales showed acceptable reliabilities. The “Hydration Score” was negatively associated with 24-h urine osmolality in community-dwelling elderly people; and the “Pain Score” was negatively associated with 24-h urine osmolality, and positively associated with 24-h urine volume and FWR, in institutionalized elderly people. PMID:25739005

  8. Fermilab Friends for Science Education | Tree of Knowledge

    Science.gov Websites

    … contributors to perpetuate their name or a designated name, or the memory of a special event, honor or personal …

  9. 3D visualization of solar wind ion data from the Chang'E-1 exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Tian; Sun, Yankui; Tang, Zesheng

    2011-10-01

    Chang'E-1 (abbreviation CE-1), China's first Moon-orbiting spacecraft launched in 2007, carried equipment called the Solar Wind Ion Detector (abbreviation SWID), which sent back tens of gigabytes of solar wind ion differential number flux data. These data are essential for furthering our understanding of the cislunar space environment. However, to fully comprehend and analyze these data presents considerable difficulties, not only because of their huge size (57 GB), but also because of their complexity. Therefore, a new 3D visualization method is developed to give a more intuitive representation than traditional 1D and 2D visualizations, and in particular to offer a better indication of the direction of the incident ion differential number flux and the relative spatial position of CE-1 with respect to the Sun, the Earth, and the Moon. First, a coordinate system named Selenocentric Solar Ecliptic (SSE) which is more suitable for our goal is chosen, and solar wind ion differential number flux vectors in SSE are calculated from Geocentric Solar Ecliptic System (GSE) and Moon Center Coordinate (MCC) coordinates of the spacecraft, and then the ion differential number flux distribution in SSE is visualized in 3D space. This visualization method is integrated into an interactive visualization analysis software tool named vtSWIDs, developed in MATLAB, which enables researchers to browse through numerous records and manipulate the visualization results in real time. The tool also provides some useful statistical analysis functions, and can be easily expanded.

  10. Naming and verbal learning in adults with Alzheimer's disease, mild cognitive impairment and in healthy aging, with low educational levels.

    PubMed

    Hübner, Lilian Cristine; Loureiro, Fernanda; Tessaro, Bruna; Siqueira, Ellen Cristina Gerner; Jerônimo, Gislaine Machado; Gomes, Irênio; Schilling, Lucas Porcello

    2018-02-01

    Language assessment seems to be an effective tool to differentiate healthy and cognitively impaired aging groups. This article discusses the impact of educational level on a naming task, on a verbal learning with semantic cues task and on the MMSE in healthy aging adults at three educational levels (very low, low and high) as well as comparing two clinical groups of very low (0-3 years) and low education (4-7 years) patients with Alzheimer's disease (AD) and mild cognitive impairment (MCI) with healthy controls. The participants comprised 101 healthy controls, 17 patients with MCI and 19 with AD. Comparisons between the healthy groups showed an education effect on the MMSE, but not on naming and verbal learning. However, the clinical groups were differentiated in both the naming and verbal learning assessment. The results support the assumption that the verbal learning with semantic cues task is a valid tool to diagnose MCI and AD patients, with no influence from education.

  11. [Greeting modalities preferred by patients in pediatric ambulatory setting].

    PubMed

    Eymann, Alfredo; Ortolani, Marina; Moro, Graciela; Otero, Paula; Catsicaris, Cristina; Wahren, Carlos

    2011-02-01

    The greeting is the first form of verbal and nonverbal communication and is a valuable tool to support the physician-patient relationship. The objective was to assess parents' and children's preferences on how they want pediatricians to greet and address them. This was a cross-sectional study. The population comprised persons accompanying patients (parents or guardians) aged between 1 month and 19 years, and patients older than 5 years. A survey questionnaire was completed after the medical visit. A total of 419 surveys from patients' companions and 249 from pediatric patients were analyzed; 68% of the companions preferred the doctor to address them by their first name, 67% liked to be greeted with a kiss on the cheek, and 90% liked to be treated informally. In multivariate analysis, preferring to be greeted with a kiss on the cheek was associated with the companion being the mother, age younger than 39 years, and a longer acquaintance with the pediatrician; 60% of the patients preferred to be addressed by their first name. In the outpatient setting, patients' companions and patients themselves prefer to be addressed informally by their name and to be greeted with a kiss on the cheek.

  12. Effect of the statin therapy on biochemical laboratory tests--a chemometrics study.

    PubMed

    Durceková, Tatiana; Mocák, Ján; Boronová, Katarína; Balla, Ján

    2011-01-05

    Statins are the first-line choice for lowering total and LDL cholesterol levels and are important medications for reducing the risk of coronary artery disease. The aim of this study was therefore to assess the results of biochemical tests characterizing the condition of 172 patients before and after administration of statins. For this purpose, several chemometric tools, namely principal component analysis, cluster analysis, discriminant analysis, logistic regression, KNN classification, ROC analysis, descriptive statistics and ANOVA, were used. Mutual relations of 11 biochemical laboratory tests, the patient's age and gender were investigated in detail. The results obtained enable the extent of the statin treatment to be evaluated in each individual case. They may also help in monitoring the dynamic progression of the disease. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets

    PubMed Central

    Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T; Else, J; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.

    2012-01-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300

  14. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets.

    PubMed

    Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S

    2012-04-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo , user name: imsdemo7@gmail.com and password: imsdemo.

  15. Logical Modeling and Dynamical Analysis of Cellular Networks

    PubMed Central

    Abou-Jaoudé, Wassim; Traynard, Pauline; Monteiro, Pedro T.; Saez-Rodriguez, Julio; Helikar, Tomáš; Thieffry, Denis; Chaouiya, Claudine

    2016-01-01

    The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle. PMID:27303434
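
    To make the notion of model attractors concrete, the following minimal Python sketch enumerates the fixed-point attractors of a tiny, hypothetical three-node Boolean network by exhaustive search; it is not taken from the reviewed tools or from the published T-helper or cell-cycle models.

      # Minimal sketch: exhaustive search for fixed-point attractors of a tiny
      # Boolean network. The three-node network and its rules are hypothetical.
      from itertools import product

      def update(state):
          a, b, c = state
          return (
              int(b and not c),   # A is activated by B and inhibited by C
              int(a),             # B simply follows A
              int(a or b),        # C is activated by A or B
          )

      fixed_points = [s for s in product([0, 1], repeat=3) if update(s) == s]
      print("Fixed-point attractors:", fixed_points)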

  16. Multisatellite constellation configuration selection for multiregional highly elliptical orbit constellations

    NASA Technical Reports Server (NTRS)

    Matossian, Mark G.

    1994-01-01

    The Archimedes Project is a joint effort of the European Space Agency (ESA) and the National Space Development Agency of Japan (NASDA). The primary goal of the Archimedes project is to perform a technical feasibility analysis and preliminary design of a highly inclined multisatellite constellation for direct broadcast and mobile communications services for Europe, Japan and much of North America. This report addresses one aspect of this project, specifically an analysis of continuous satellite coverage using multiregional highly elliptical orbits (M-HEO's). The analysis methodology and ensuing software tool, named SPIFF, were developed specifically for this project by the author during the summer of 1992 under the STA/NSF Summer Institute in Japan Program at Tsukuba Space Center.

  17. Providing Cryptographic Security and Evidentiary Chain-of-Custody with the Advanced Forensic Format, Library, and Tools

    DTIC Science & Technology

    2008-08-19

    1 hash of the page page%d sha256 The segment for the SHA256 hash of the page Bad Sector Management: badsectors The number of sectors in the image...written, AFFLIB can automatically compute the page’s MD5, SHA-1, and/or SHA256 hash and write an associated segment containing the hash value. The...are written into segments themselves, with the segment name being name/ sha256 where name is the original segment name sha256 is the hash algorithm used

  18. Failure mode and effect analysis in blood transfusion: a proactive tool to reduce risks.

    PubMed

    Lu, Yao; Teng, Fang; Zhou, Jie; Wen, Aiqing; Bi, Yutian

    2013-12-01

    The aim of blood transfusion risk management is to improve the quality of blood products and to assure patient safety. We utilize failure mode and effect analysis (FMEA), a tool employed for evaluating risks and identifying preventive measures to reduce the risks in blood transfusion. The failure modes and effects occurring throughout the whole process of blood transfusion were studied. Each failure mode was evaluated using three scores: severity of effect (S), likelihood of occurrence (O), and probability of detection (D). Risk priority numbers (RPNs) were calculated by multiplying the S, O, and D scores. The plan-do-check-act cycle was also used for continuous improvement. Analysis has showed that failure modes with the highest RPNs, and therefore the greatest risk, were insufficient preoperative assessment of the blood product requirement (RPN, 245), preparation time before infusion of more than 30 minutes (RPN, 240), blood transfusion reaction occurring during the transfusion process (RPN, 224), blood plasma abuse (RPN, 180), and insufficient and/or incorrect clinical information on request form (RPN, 126). After implementation of preventative measures and reassessment, a reduction in RPN was detected with each risk. The failure mode with the second highest RPN, namely, preparation time before infusion of more than 30 minutes, was shown in detail to prove the efficiency of this tool. FMEA evaluation model is a useful tool in proactively analyzing and reducing the risks associated with the blood transfusion procedure. © 2013 American Association of Blood Banks.
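
    The risk priority number calculation used above (RPN = S x O x D) can be illustrated with a short Python sketch; the S, O and D scores below are assumed values chosen only so that their products reproduce the RPNs quoted in the abstract.

      # Minimal sketch: ranking failure modes by risk priority number (RPN = S x O x D).
      # The S, O and D scores are illustrative assumptions; only the resulting RPNs
      # (245, 240, 224, 180, 126) are taken from the abstract.
      failure_modes = {
          "insufficient preoperative assessment": (7, 7, 5),   # S, O, D -> RPN 245
          "preparation time > 30 minutes":        (6, 8, 5),   # -> RPN 240
          "transfusion reaction during process":  (8, 7, 4),   # -> RPN 224
          "blood plasma abuse":                   (6, 6, 5),   # -> RPN 180
          "insufficient/incorrect request form":  (7, 6, 3),   # -> RPN 126
      }

      rpn = {mode: s * o * d for mode, (s, o, d) in failure_modes.items()}
      for mode, value in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
          print(f"{value:4d}  {mode}")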

  19. What's Next in Complex Networks? Capturing the Concept of Attacking Play in Invasive Team Sports.

    PubMed

    Ramos, João; Lopes, Rui J; Araújo, Duarte

    2018-01-01

    The evolution of performance analysis within sports sciences is tied to technology development and practitioner demands. However, how individual and collective patterns self-organize and interact in invasive team sports remains elusive. Social network analysis has been recently proposed to resolve some aspects of this problem, and has proven successful in capturing collective features resulting from the interactions between team members as well as a powerful communication tool. Despite these advances, some fundamental team sports concepts such as an attacking play have not been properly captured by the more common applications of social network analysis to team sports performance. In this article, we propose a novel approach to team sports performance centered on sport concepts, namely that of an attacking play. Network theory and tools including temporal and bipartite or multilayered networks were used to capture this concept. We put forward eight questions directly related to team performance to discuss how common pitfalls in the use of network tools for capturing sports concepts can be avoided. Some answers are advanced in an attempt to be more precise in the description of team dynamics and to uncover other metrics directly applied to sport concepts, such as the structure and dynamics of attacking plays. Finally, we propose that, at this stage of knowledge, it may be advantageous to build up from fundamental sport concepts toward complex network theory and tools, and not the other way around.
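
    As a hedged sketch of the bipartite approach mentioned above (not the authors' implementation), the following Python snippet uses networkx to link hypothetical players to the attacking plays they took part in and then projects the result onto a player-player network.

      # Minimal sketch: a bipartite network linking players to the attacking plays
      # they took part in, projected onto a player-player network. Player and play
      # identifiers are hypothetical.
      import networkx as nx
      from networkx.algorithms import bipartite

      B = nx.Graph()
      players = ["P1", "P2", "P3", "P4"]
      plays = ["play_1", "play_2"]
      B.add_nodes_from(players, bipartite=0)
      B.add_nodes_from(plays, bipartite=1)
      B.add_edges_from([("P1", "play_1"), ("P2", "play_1"), ("P3", "play_1"),
                        ("P2", "play_2"), ("P4", "play_2")])

      # Project onto players: two players are linked if they shared an attacking play.
      P = bipartite.weighted_projected_graph(B, players)
      print(list(P.edges(data=True)))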

  20. A smartphone-based platform to test the performance of wireless mobile networks and preliminary findings

    NASA Astrophysics Data System (ADS)

    Geng, Xinli; Xu, Hao; Qin, Xiaowei

    2016-10-01

    During the last several years, the amount of wireless network traffic data increased fast and relative technologies evolved rapidly. In order to improve the performance and Quality of Experience (QoE) of wireless network services, the analysis of field network data and existing delivery mechanisms comes to be a promising research topic. In order to achieve this goal, a smartphone based platform named Monitor and Diagnosis of Mobile Applications (MDMA) was developed to collect field data. Based on this tool, the web browsing service of High Speed Downlink Packet Access (HSDPA) network was tested. The top 200 popular websites in China were selected and loaded on smartphone for thousands times automatically. Communication packets between the smartphone and the cell station were captured for various scenarios (e.g. residential area, urban roads, bus station etc.) in the selected city. A cross-layer database was constructed to support the off-line analysis. Based on the results of client-side experiments and analysis, the usability of proposed portable tool was verified. The preliminary findings and results for existing web browsing service were also presented.

  1. bcROCsurface: an R package for correcting verification bias in estimation of the ROC surface and its volume for continuous diagnostic tests.

    PubMed

    To Duc, Khanh

    2017-11-18

    Receiver operating characteristic (ROC) surface analysis is usually employed to assess the accuracy of a medical diagnostic test when there are three ordered disease statuses (e.g. non-diseased, intermediate, diseased). In practice, verification bias can occur due to missingness of the true disease status and can lead to a distorted conclusion on diagnostic accuracy. In such situations, bias-corrected inference tools are required. This paper introduces an R package, named bcROCsurface, which provides utility functions for verification bias-corrected ROC surface analysis. A Shiny web application for the verification bias correction in ROC surface estimation has also been developed. bcROCsurface may become an important tool for the statistical evaluation of three-class diagnostic markers in the presence of verification bias. The R package, readme and example data are available on CRAN. The web interface enables users less familiar with R to evaluate the accuracy of diagnostic tests, and can be found at http://khanhtoduc.shinyapps.io/bcROCsurface_shiny/.

  2. Joint analysis of epistemic and aleatory uncertainty in stability analysis for geo-hazard assessments

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy; Verdel, Thierry

    2017-04-01

    Uncertainty analysis is an unavoidable task in the stability analysis of any geotechnical system. Such analysis usually relies on the safety factor SF (if SF is below some specified threshold, failure is possible). The objective of the stability analysis is then to estimate the failure probability P for SF to be below the specified threshold. When dealing with uncertainties, two facets should be considered, as outlined by several authors in the domain of geotechnics, namely "aleatoric uncertainty" (also named "randomness" or "intrinsic variability") and "epistemic uncertainty" (i.e. when facing "vague, incomplete or imprecise information" such as limited databases and observations or "imperfect" modelling). The benefits of separating both facets of uncertainty can be seen from a risk management perspective because: - Aleatoric uncertainty, being a property of the system under study, cannot be reduced. However, practical actions can be taken to circumvent the potentially dangerous effects of such variability; - Epistemic uncertainty, being due to the incomplete/imprecise nature of available information, can be reduced by, e.g., increasing the number of tests (lab or in situ survey), improving the measurement methods or evaluating the calculation procedure with model tests, and confronting more information sources (expert opinions, data from literature, etc.). Uncertainty treatment in stability analysis is usually restricted to the probabilistic framework to represent both facets of uncertainty. Yet, in the domain of geo-hazard assessments (like landslides, mine pillar collapse, rockfalls, etc.), the validity of this approach can be debatable. In the present communication, we propose to review the major criticisms available in the literature against the systematic use of probability in situations with a high degree of uncertainty. On this basis, the feasibility of using a more flexible uncertainty representation tool, namely possibility distributions (e.g., Baudrit et al., 2007), is then investigated for geo-hazard assessments. A graphical tool is then developed to explore: 1. the contribution of both types of uncertainty, aleatoric and epistemic; 2. the regions of the imprecise or random parameters which contribute the most to the imprecision on the failure probability P. The method is applied to two case studies (a mine pillar and a steep slope stability analysis, Rohmer and Verdel, 2014) to investigate the necessity for extra data acquisition on parameters whose imprecision can hardly be modelled by probabilities due to the scarcity of the available information (respectively the extraction ratio and the cliff geometry). References Baudrit, C., Couso, I., & Dubois, D. (2007). Joint propagation of probability and possibility in risk analysis: Towards a formal framework. International Journal of Approximate Reasoning, 45(1), 82-105. Rohmer, J., & Verdel, T. (2014). Joint exploration of regional importance of possibilistic and probabilistic uncertainty in stability analysis. Computers and Geotechnics, 61, 308-315.
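
    For the purely probabilistic baseline discussed above, the failure probability P(SF < threshold) can be estimated by Monte Carlo simulation, as in the following Python sketch; the lognormal distributions and the simple resistance/load model are illustrative assumptions, not the possibilistic treatment proposed in the communication.

      # Minimal sketch: Monte Carlo estimate of P(SF < 1) under a purely
      # probabilistic description of the uncertain inputs. Distributions and the
      # simple SF model (resistance / load) are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      resistance = rng.lognormal(mean=1.0, sigma=0.2, size=n)   # aleatory variability
      load = rng.lognormal(mean=0.7, sigma=0.3, size=n)

      sf = resistance / load
      p_failure = np.mean(sf < 1.0)
      print(f"Estimated P(SF < 1) = {p_failure:.4f}")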

  3. Circulating Cell Free Tumor DNA Detection as a Routine Tool for Lung Cancer Patient Management

    PubMed Central

    Vendrell, Julie A.; Mau-Them, Frédéric Tran; Béganton, Benoît; Godreuil, Sylvain; Coopman, Peter; Solassol, Jérôme

    2017-01-01

    Circulating tumoral DNA (ctDNA), commonly named “liquid biopsy”, has emerged as a new promising noninvasive tool to detect biomarker in several cancers including lung cancer. Applications involving molecular analysis of ctDNA in lung cancer have increased and encompass diagnosis, response to treatment, acquired resistance and prognosis prediction, while bypassing the problem of tumor heterogeneity. ctDNA may then help perform dynamic genetic surveillance in the era of precision medicine through indirect tumoral genomic information determination. The aims of this review were to examine the recent technical developments that allowed the detection of genetic alterations of ctDNA in lung cancer. Furthermore, we explored clinical applications in patients with lung cancer including treatment efficiency monitoring, acquired therapy resistance mechanisms and prognosis value. PMID:28146051

  4. Multisource feedback: 360-degree assessment of professional skills of clinical directors.

    PubMed

    Palmer, Robert; Rayner, Hugh; Wall, David

    2007-08-01

    For measuring behaviour of National Health Service (NHS) staff, 360-degree assessment is a valuable tool. The important role of a clinical director as a medical leader is increasingly recognized, and attributes of a good clinical director can be defined. Set against these attributes, a 360-degree assessment tool has been designed. The job description for clinical directors has been used to develop a questionnaire sent to senior hospital staff. The views of staff within the hospital are similar irrespective of gender, post held or length of time in post. Analysis has shown that three independent factors can be distilled, namely operational management, interpersonal skills and creative/strategic thinking. A simple validated questionnaire has been developed and successfully introduced for the 360-degree assessment of clinical directors.

  5. Social Network Analysis Reveals the Negative Effects of Attention-Deficit/Hyperactivity Disorder (ADHD) Symptoms on Friend-Based Student Networks.

    PubMed

    Kim, Jun Won; Kim, Bung-Nyun; Kim, Johanna Inhyang; Lee, Young Sik; Min, Kyung Joon; Kim, Hyun-Jin; Lee, Jaewon

    2015-01-01

    Social network analysis has emerged as a promising tool in modern social psychology. This method can be used to examine friend-based social relationships in terms of network theory, with nodes representing individual students and ties representing relationships between students (e.g., friendships and kinships). Using social network analysis, we investigated whether greater severity of ADHD symptoms is correlated with weaker peer relationships among elementary school students. A total of 562 sixth-graders from two elementary schools (300 males) provided the names of their best friends (maximum 10 names). Their teachers rated each student's ADHD symptoms using an ADHD rating scale. The results showed that 10.2% of the students were at high risk for ADHD. Significant group differences were observed between the high-risk students and other students in two of the three network parameters (degree, centrality and closeness) used to assess friendship quality, with the high-risk group showing significantly lower values of degree and closeness compared to the other students. Moreover, negative correlations were found between the ADHD rating and two social network analysis parameters. Our findings suggest that the severity of ADHD symptoms is strongly correlated with the quality of social and interpersonal relationships in students with ADHD symptoms.
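
    As a hedged illustration of the network parameters used here (not the study's code or data), the following Python sketch builds a small friendship-nomination network with networkx and computes in-degree and closeness centrality for hypothetical students.

      # Minimal sketch: degree and closeness centrality of a friendship-nomination
      # network. The students and nominations are hypothetical placeholders.
      import networkx as nx

      G = nx.DiGraph()
      G.add_edges_from([("S1", "S2"), ("S2", "S1"), ("S2", "S3"),
                        ("S3", "S1"), ("S4", "S2")])   # "best friend" nominations

      in_degree = dict(G.in_degree())                  # how often a student is named
      closeness = nx.closeness_centrality(G)           # based on incoming distances
      for s in sorted(G.nodes()):
          print(s, in_degree[s], round(closeness[s], 2))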

  6. Social Network Analysis Reveals the Negative Effects of Attention-Deficit/Hyperactivity Disorder (ADHD) Symptoms on Friend-Based Student Networks

    PubMed Central

    Kim, Jun Won; Kim, Bung-Nyun; Kim, Johanna Inhyang; Lee, Young Sik; Min, Kyung Joon; Kim, Hyun-Jin; Lee, Jaewon

    2015-01-01

    Introduction Social network analysis has emerged as a promising tool in modern social psychology. This method can be used to examine friend-based social relationships in terms of network theory, with nodes representing individual students and ties representing relationships between students (e.g., friendships and kinships). Using social network analysis, we investigated whether greater severity of ADHD symptoms is correlated with weaker peer relationships among elementary school students. Methods A total of 562 sixth-graders from two elementary schools (300 males) provided the names of their best friends (maximum 10 names). Their teachers rated each student’s ADHD symptoms using an ADHD rating scale. Results The results showed that 10.2% of the students were at high risk for ADHD. Significant group differences were observed between the high-risk students and other students in two of the three network parameters (degree, centrality and closeness) used to assess friendship quality, with the high-risk group showing significantly lower values of degree and closeness compared to the other students. Moreover, negative correlations were found between the ADHD rating and two social network analysis parameters. Conclusion Our findings suggest that the severity of ADHD symptoms is strongly correlated with the quality of social and interpersonal relationships in students with ADHD symptoms. PMID:26562777

  7. Cost-effectiveness analysis: adding value to assessment of animal health welfare and production.

    PubMed

    Babo Martins, S; Rushton, J

    2014-12-01

    Cost-effectiveness analysis (CEA) has been extensively used in economic assessments in fields related to animal health, namely in human health where it provides a decision-making framework for choices about the allocation of healthcare resources. Conversely, in animal health, cost-benefit analysis has been the preferred tool for economic analysis. In this paper, the use of CEA in related areas and the role of this technique in assessments of animal health, welfare and production are reviewed. Cost-effectiveness analysis can add further value to these assessments, particularly in programmes targeting animal welfare or animal diseases with an impact on human health, where outcomes are best valued in natural effects rather than in monetary units. Importantly, CEA can be performed during programme implementation stages to assess alternative courses of action in real time.

  8. CFGP: a web-based, comparative fungal genomics platform.

    PubMed

    Park, Jongsun; Park, Bongsoo; Jung, Kyongyong; Jang, Suwang; Yu, Kwangyul; Choi, Jaeyoung; Kong, Sunghyung; Park, Jaejin; Kim, Seryun; Kim, Hyojeong; Kim, Soonok; Kim, Jihyun F; Blair, Jaime E; Lee, Kwangwon; Kang, Seogchan; Lee, Yong-Hwan

    2008-01-01

    Since the completion of the Saccharomyces cerevisiae genome sequencing project in 1996, the genomes of over 80 fungal species have been sequenced or are currently being sequenced. Resulting data provide opportunities for studying and comparing fungal biology and evolution at the genome level. To support such studies, the Comparative Fungal Genomics Platform (CFGP; http://cfgp.snu.ac.kr), a web-based multifunctional informatics workbench, was developed. The CFGP comprises three layers, including the basal layer, middleware and the user interface. The data warehouse in the basal layer contains standardized genome sequences of 65 fungal species. The middleware processes queries via six analysis tools, including BLAST, ClustalW, InterProScan, SignalP 3.0, PSORT II and a newly developed tool named BLASTMatrix. The BLASTMatrix permits the identification and visualization of genes homologous to a query across multiple species. The Data-driven User Interface (DUI) of the CFGP was built on a new concept of pre-collecting data and post-executing analysis instead of the 'fill-in-the-form-and-press-SUBMIT' user interfaces utilized by most bioinformatics sites. A tool termed Favorite, which supports the management of encapsulated sequence data and provides a personalized data repository to users, is another novel feature in the DUI.

  9. Lean manufacturing analysis to reduce waste on production process of fan products

    NASA Astrophysics Data System (ADS)

    Siregar, I.; Nasution, A. A.; Andayani, U.; Sari, R. M.; Syahputri, K.; Anizar

    2018-02-01

    This research is based on a case study at an electrical company. One of the products studied is the fan, whose production process contains time that is not value-added, including inefficient removal of material for the raw materials and the molded fan components. This study aims to reduce waste, or non-value-added activities, and shorten the total lead time by using Value Stream Mapping. The lean manufacturing methods used to analyze and reduce the non-value-added activities are the value stream mapping analysis tools, process activity mapping with 5W1H, and the 5 whys tool. The research showed that non-value-added activities account for 647.94 minutes of the total lead time of 725.68 minutes in the fan production process. The process cycle efficiency of the fan production process is therefore still very low, at 11%. Estimates after the proposed improvements show the total lead time decreasing to 340.9 minutes and the process cycle efficiency increasing to 24%, indicating that the production process has improved.
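
    The process cycle efficiency figure quoted above follows directly from the ratio of value-added time to total lead time, as the short Python sketch below reproduces.

      # A short check of the process-cycle-efficiency arithmetic quoted in the abstract:
      # PCE = value-added time / total lead time.
      def process_cycle_efficiency(total_lead_time, non_value_added):
          value_added = total_lead_time - non_value_added
          return value_added / total_lead_time

      current = process_cycle_efficiency(725.68, 647.94)
      print(f"Current PCE: {current:.0%}")   # prints 11%, matching the abstract

      # The study estimates that after improvement the total lead time falls to
      # 340.9 minutes and the process cycle efficiency rises to about 24%.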

  10. Data and Tools | Research Site Name | NREL

    Science.gov Websites

  11. Patient's Guide to Recovery After Deep Vein Thrombosis or Pulmonary Embolism

    MedlinePlus

  12. Design Tools for Assessing Manufacturing Environmental Impact.

    DTIC Science & Technology

    1997-11-26

    the material report alone. In order to more easily design, update and verify the output report, many of the cells which contained the information... needed for the material balance calculations were named. The cell name was then used in the calculations. Where possible, the same names that were used in... Material balance information was used extensively to ensure all the equations were correct and were put into the appropriate cells. A summary of the

  13. Fingerprint Ridge Density as a Potential Forensic Anthropological Tool for Sex Identification.

    PubMed

    Dhall, Jasmine Kaur; Kapoor, Anup Kumar

    2016-03-01

    In cases of partial or poor print recovery and lack of database/suspect print, fingerprint evidence is generally neglected. In light of such constraints, this study was designed to examine whether ridge density can aid in narrowing down the investigation for sex identification. The study was conducted on the right-hand index digit of 245 males and 246 females belonging to the Punjabis of Delhi region. Five ridge density count areas, namely upper radial, radial, ulnar, upper ulnar, and proximal, were selected and designated. Probability of sex origin was calculated, and stepwise discriminant function analysis was performed to determine the discriminating ability of the selected areas. Females were observed with a significantly higher ridge density than males in all the five areas. Discriminant function analysis and logistic regression exhibited 96.8% and 97.4% accuracy, respectively, in sex identification. Hence, fingerprint ridge density is a potential tool for sex identification, even from partial prints. © 2015 American Academy of Forensic Sciences.
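
    As a hedged sketch of the classification step (not the study's data or code), the following Python snippet applies linear discriminant analysis and logistic regression to synthetic ridge-density counts for five areas per print.

      # Minimal sketch: discriminant analysis and logistic regression on fingerprint
      # ridge-density counts (five areas per print). The data are synthetic
      # placeholders, not the study sample.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      males = rng.normal(12, 1.5, size=(245, 5))     # lower ridge density (assumed)
      females = rng.normal(14, 1.5, size=(246, 5))   # higher ridge density (assumed)
      X = np.vstack([males, females])
      y = np.array([0] * 245 + [1] * 246)

      for name, model in [("LDA", LinearDiscriminantAnalysis()),
                          ("Logistic regression", LogisticRegression(max_iter=1000))]:
          acc = cross_val_score(model, X, y, cv=5).mean()
          print(f"{name}: {acc:.1%} cross-validated accuracy")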

  14. The application of data mining techniques to oral cancer prognosis.

    PubMed

    Tseng, Wan-Ting; Chiang, Wei-Fan; Liu, Shyun-Yeu; Roan, Jinsheng; Lin, Chun-Nan

    2015-05-01

    This study adopted an integrated procedure that combines the clustering and classification features of data mining technology to determine the differences between the symptoms shown in past cases where patients died from or survived oral cancer. Two data mining tools, namely decision tree and artificial neural network, were used to analyze the historical cases of oral cancer, and their performance was compared with that of logistic regression, the popular statistical analysis tool. Both decision tree and artificial neural network models were superior to the traditional statistical model. For clinicians, however, the trees created by the decision tree models are easier to interpret than the artificial neural network models. Cluster analysis also revealed that stage 4 patients who also possess the following four characteristics have an extremely low survival rate: pN is N2b, level of RLNM is level I-III, AJCC-T is T4, and the cell mutation status (G) is moderate.
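
    The following Python sketch illustrates the decision-tree side of such a procedure on synthetic data; the feature names mirror those mentioned in the abstract, but the values and the printed rules are placeholders, not the study's model.

      # Minimal sketch: a decision-tree classifier for survival prognosis, printed as
      # readable rules. Feature names echo the abstract; the data are synthetic.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(2)
      X = rng.integers(0, 4, size=(200, 4))                            # encoded pN, RLNM, AJCC-T, G
      y = (X.sum(axis=1) + rng.integers(0, 3, 200) > 7).astype(int)    # synthetic outcome label

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=["pN", "RLNM_level", "AJCC_T", "grade_G"]))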

  15. DNA barcoding coupled to HRM analysis as a new and simple tool for the authentication of Gadidae fish species.

    PubMed

    Fernandes, Telmo J R; Costa, Joana; Oliveira, M Beatriz P P; Mafra, Isabel

    2017-09-01

    This work aimed to exploit the use of DNA mini-barcodes combined with high resolution melting (HRM) for the authentication of gadoid species: Atlantic cod (Gadus morhua), Pacific cod (Gadus macrocephalus), Alaska pollock (Theragra chalcogramma) and saithe (Pollachius virens). Two DNA barcode regions, namely cytochrome c oxidase subunit I (COI) and cytochrome b (cytb), were analysed in silico to identify genetic variability among the four species and used, subsequently, to develop a real-time PCR method coupled with HRM analysis. The cytb mini-barcode enabled best discrimination of the target species with a high level of confidence (99.3%). The approach was applied successfully to identify gadoid species in 30 fish-containing foods, 30% of which were not as declared on the label. Herein, a novel approach for rapid, simple and cost-effective discrimination/clustering, as a tool to authenticate Gadidae fish species, according to their genetic relationship, is proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Positioning matrix of economic efficiency and complexity: a case study in a university hospital.

    PubMed

    Ippolito, Adelaide; Viggiani, Vincenzo

    2014-01-01

    At the end of 2010, the Federico II University Hospital in Naples, Italy, initiated a series of discussions aimed at designing and applying a positioning matrix to its departments. This analysis was developed to create a tool able to extract meaningful information both to increase knowledge about individual departments and to inform the choices of general management during strategic planning. The name given to this tool was the positioning matrix of economic efficiency and complexity. In the matrix, the x-axis measures the ratio between revenues and costs, whereas the y-axis measures the index of complexity, thus showing "profitability" while bearing in mind the complexity of activities. By using the positioning matrix, it was possible to conduct a critical analysis of the characteristics of the Federico II University Hospital and to extract useful information for general management to use during strategic planning at the end of 2010 when defining medium-term objectives. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Analytical simulation and PROFAT II: a new methodology and a computer automated tool for fault tree analysis in chemical process industries.

    PubMed

    Khan, F I; Abbasi, S A

    2000-07-10

    Fault tree analysis (FTA) is based on constructing a hypothetical tree of base events (initiating events) branching into numerous other sub-events, propagating the fault and eventually leading to the top event (accident). It has been a powerful technique used traditionally in identifying hazards in nuclear installations and power industries. As the systematic articulation of the fault tree is associated with assigning probabilities to each fault, the exercise is also sometimes called probabilistic risk assessment. But powerful as this technique is, it is also very cumbersome and costly, limiting its area of application. We have developed a new algorithm based on analytical simulation (named as AS-II), which makes the application of FTA simpler, quicker, and cheaper; thus opening up the possibility of its wider use in risk assessment in chemical process industries. Based on the methodology we have developed a computer-automated tool. The details are presented in this paper.
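
    The probabilistic core of fault tree analysis can be illustrated with a short Python sketch that propagates base-event probabilities through AND/OR gates under an independence assumption; the events and probabilities are hypothetical, and this is not the AS-II algorithm itself.

      # Minimal sketch: propagating base-event probabilities through AND/OR gates of
      # a small hypothetical fault tree, assuming independent events.
      from math import prod

      def and_gate(*p):                 # all inputs must fail
          return prod(p)

      def or_gate(*p):                  # at least one input fails
          return 1 - prod(1 - x for x in p)

      valve_fails, sensor_fails, operator_error = 1e-3, 5e-4, 1e-2
      cooling_lost = or_gate(valve_fails, sensor_fails)      # intermediate event
      top_event = and_gate(cooling_lost, operator_error)     # hypothetical top event
      print(f"P(top event) = {top_event:.2e}")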

  18. Defining Geodetic Reference Frame using Matlab®: PlatEMotion 2.0

    NASA Astrophysics Data System (ADS)

    Cannavò, Flavio; Palano, Mimmo

    2016-03-01

    We describe the main features of the developed software tool, namely PlatE-Motion 2.0 (PEM2), which allows inferring the Euler pole parameters by inverting the observed velocities at a set of sites located on a rigid block (inverse problem). PEM2 also allows calculating the expected velocity for any point on the Earth, given an Euler pole (direct problem). PEM2 is the updated version of a previous software tool initially developed for easy-to-use file exchange with the GAMIT/GLOBK software package. The software tool is developed in the Matlab® framework and, like the previous version, includes a set of MATLAB functions (m-files), GUIs (fig-files), map data files (mat-files) and a user's manual as well as some example input files. New changes in PEM2 include (1) bug fixes, (2) improvements in the code, (3) improvements in statistical analysis, and (4) new input/output file formats. In addition, PEM2 can now be run under the majority of operating systems. The tool is open source and freely available for the scientific community.
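
    The direct problem solved by PEM2 rests on the standard rigid-plate relation v = ω × r, where ω is the Euler (rotation) vector and r the site position vector. The following Python sketch applies that relation to a hypothetical pole and site; it is a generic illustration, not the Matlab® tool itself.

      # Minimal sketch of the direct problem: predicted site velocity v = omega x r.
      # The Euler pole, rotation rate and site coordinates below are hypothetical.
      import numpy as np

      R_EARTH = 6371e3                                   # mean Earth radius, m

      def unit_vector(lat_deg, lon_deg):
          lat, lon = np.radians([lat_deg, lon_deg])
          return np.array([np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)])

      # Euler pole: latitude, longitude and angular rate (deg/Myr), converted to rad/yr.
      omega = unit_vector(55.0, -100.0) * np.radians(0.25) / 1e6
      site = unit_vector(37.5, 15.0) * R_EARTH           # site position vector (m)

      v = np.cross(omega, site)                          # velocity in m/yr (ECEF frame)
      print("Predicted velocity (mm/yr, ECEF):", np.round(v * 1e3, 2))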

  19. Contemporary criticisms of the received wilderness idea

    Treesearch

    J. Baird Callicott

    2000-01-01

    Names are important. The name “wilderness” is fraught with historical baggage obfuscating the most important role of wilderness areas for contemporary conservation. The received wilderness idea has been and remains a tool of androcentrism, racism, colonialism, and genocide. It privileges virile and primitive recreation, because the...

  20. Noun and knowledge retrieval for biological and non-biological entities following right occipitotemporal lesions.

    PubMed

    Bruffaerts, Rose; De Weer, An-Sofie; De Grauwe, Sophie; Thys, Miek; Dries, Eva; Thijs, Vincent; Sunaert, Stefan; Vandenbulcke, Mathieu; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik

    2014-09-01

    We investigated the critical contribution of right ventral occipitotemporal cortex to knowledge of visual and functional-associative attributes of biological and non-biological entities and how this relates to category-specificity during confrontation naming. In a consecutive series of 7 patients with lesions confined to right ventral occipitotemporal cortex, we conducted an extensive assessment of oral generation of visual-sensory and functional-associative features in response to the names of biological and nonbiological entities. Subjects also performed a confrontation naming task for these categories. Our main novel finding related to a unique case with a small lesion confined to right medial fusiform gyrus who showed disproportionate naming impairment for nonbiological versus biological entities, specifically for tools. Generation of visual and functional-associative features was preserved for biological and non-biological entities. In two other cases, who had a relatively small posterior lesion restricted to primary visual and posterior fusiform cortex, retrieval of visual attributes was disproportionately impaired compared to functional-associative attributes, in particular for biological entities. However, these cases did not show a category-specific naming deficit. Two final cases with the largest lesions showed a classical dissociation between biological versus nonbiological entities during naming, with normal feature generation performance. This is the first lesion-based evidence of a critical contribution of the right medial fusiform cortex to tool naming. Second, dissociations along the dimension of attribute type during feature generation do not co-occur with category-specificity during naming in the current patient sample. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Hekate: Software Suite for the Mass Spectrometric Analysis and Three-Dimensional Visualization of Cross-Linked Protein Samples

    PubMed Central

    2013-01-01

    Chemical cross-linking of proteins combined with mass spectrometry provides an attractive and novel method for the analysis of native protein structures and protein complexes. Analysis of the data however is complex. Only a small number of cross-linked peptides are produced during sample preparation and must be identified against a background of more abundant native peptides. To facilitate the search and identification of cross-linked peptides, we have developed a novel software suite, named Hekate. Hekate is a suite of tools that address the challenges involved in analyzing protein cross-linking experiments when combined with mass spectrometry. The software is an integrated pipeline for the automation of the data analysis workflow and provides a novel scoring system based on principles of linear peptide analysis. In addition, it provides a tool for the visualization of identified cross-links using three-dimensional models, which is particularly useful when combining chemical cross-linking with other structural techniques. Hekate was validated by the comparative analysis of cytochrome c (bovine heart) against previously reported data.1 Further validation was carried out on known structural elements of DNA polymerase III, the catalytic α-subunit of the Escherichia coli DNA replisome along with new insight into the previously uncharacterized C-terminal domain of the protein. PMID:24010795

  2. [Development of novel laboratory technology--Chairmen's introductory remarks].

    PubMed

    Maekawa, Masato; Ando, Yukio

    2012-07-01

    The theme of the 58th annual meeting is, "Mission and Challenge of Laboratory Medicine". This symposium is named, "Development of Novel Laboratory Technology" and is held under the joint sponsorship of the Japanese Society of Clinical Chemistry and the Japanese Electrophoresis Society. Both societies have superior skills at developing methodology and technology. The tools used in the lectures are a carbon nanotube sensor, immunochromatography, direct measurement using polyanions and detergents, epigenomic analysis and fluorescent two-dimensional electrophoresis. All of the lectures will be very helpful and interesting.

  3. OSCAR4: a flexible architecture for chemical text-mining.

    PubMed

    Jessop, David M; Adams, Sam E; Willighagen, Egon L; Hawizy, Lezan; Murray-Rust, Peter

    2011-10-14

    The Open-Source Chemistry Analysis Routines (OSCAR) software, a toolkit for the recognition of named entities and data in chemistry publications, has been developed since 2002. Recent work has resulted in the separation of the core OSCAR functionality and its release as the OSCAR4 library. This library features a modular API (based on reduction of surface coupling) that permits client programmers to easily incorporate it into external applications. OSCAR4 offers a domain-independent architecture upon which chemistry specific text-mining tools can be built, and its development and usage are discussed.

  4. The DiaCog: A Prototype Tool for Visualizing Online Dialog Games' Interactions

    ERIC Educational Resources Information Center

    Yengin, Ilker; Lazarevic, Bojan

    2014-01-01

    This paper proposes and explains the design of a prototype learning tool named the DiaCog. The DiaCog visualizes dialog interactions within an online dialog game by using dynamically created cognitive maps. As a purposefully designed tool for enhancing learning effectiveness the DiaCog might be applicable to dialogs at discussion boards within a…

  5. Current Lewis Turbomachinery Research: Building on our Legacy of Excellence

    NASA Technical Reports Server (NTRS)

    Povinelli, Louis A.

    1997-01-01

    This Wu Chang-Hua lecture is concerned with the development of analysis and computational capability for turbomachinery flows which is based on detailed flow field physics. A brief review of the work of Professor Wu is presented as well as a summary of the current NASA aeropropulsion programs. Two major areas of research are described in order to determine our predictive capabilities using modern day computational tools evolved from the work of Professor Wu. In one of these areas, namely transonic rotor flow, it is demonstrated that a high level of accuracy is obtainable provided sufficient geometric detail is simulated. In the second case, namely turbine heat transfer, our capability is lacking for rotating blade rows and experimental correlations will provide needed information in the near term. It is believed that continuing progress will allow us to realize the full computational potential and its impact on design time and cost.

  6. X-ray texture analysis of paper coating pigments and the correlation with chemical composition analysis

    NASA Astrophysics Data System (ADS)

    Roine, J.; Tenho, M.; Murtomaa, M.; Lehto, V.-P.; Kansanaho, R.

    2007-10-01

    The present research examines the applicability of x-ray texture analysis for investigating the properties of paper coatings. The preferred orientations of kaolin, talc, ground calcium carbonate, and precipitated calcium carbonate particles used in four different paper coatings were determined qualitatively based on the measured crystal orientation data. The extent of the orientation, namely, the degree of the texture of each pigment, was characterized quantitatively using a single parameter. As a result, the effect of paper calendering is clearly seen as an increase in the degree of texture of the coating pigments. The effect of calendering on the preferred orientation of kaolin was also evident in an independent energy dispersive spectrometer analysis on the micrometer scale and an electron spectroscopy for chemical analysis on the nanometer scale. Thus, the present work shows x-ray texture analysis to be a potential research tool for characterizing the properties of paper coating layers.

  7. Sherlock Holmes and the proteome--a detective story.

    PubMed

    Righetti, Pier Giorgio; Boschetti, Egisto

    2007-02-01

    The performance of a hexapeptide ligand library in capturing the 'hidden proteome' is illustrated and evaluated. This library, insolubilized on an organic polymer and available under the trade name 'Equalizer Bead Technology', acts by capturing all components of a given proteome, by concentrating rare and very rare proteins, and simultaneously diluting the abundant ones. This results in a proteome of 'normalized' relative abundances, amenable to analysis by MS and any other analytical tool. Examples are given of analysis of human urine and serum, as well as cell and tissue lysates, such as Escherichia coli and Saccharomyces cerevisiae extracts. Another important application is impurity tracking and polishing of recombinant DNA products, especially biopharmaceuticals meant for human consumption.

  8. Stakeholder perspectives on the use of pig meat inspection as a health and welfare diagnostic tool in the Republic of Ireland and Northern Ireland; a SWOT analysis.

    PubMed

    Devitt, C; Boyle, L; Teixeira, D L; O'Connell, N E; Hawe, M; Hanlon, A

    2016-01-01

    A SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis is a strategic management tool applied to policy planning and decision-making. This short report presents the results of a SWOT analysis, carried out with n  = 16 stakeholders i) involved in the pig industry in the Republic of Ireland and Northern Ireland, and ii) in general animal welfare and food safety policy areas. As part of a larger study called PIGWELFIND, the analysis sought to explore the potential development of pig meat inspection as an animal welfare and diagnostic tool. The final SWOT framework comprised two strengths, three opportunities, six weaknesses, and five threats. Issues around relationships and communication between producers and their veterinary practitioner, processors and producers were common to both the strengths and weakness clusters. Practical challenges within the processing plant were also named. Overall, the SWOT framework complements results reported in Devitt et al. (Ir Vet J 69:2, 2016) regarding problematic issues within the current system of information feedback on meat inspection especially within the Republic of Ireland, and the wider challenges of communication and problems of distrust. The results of the SWOT analysis support the conclusions from Devitt et al. (Ir Vet J 69:2, 2016), that trust between all stakeholders across the supply chain will be essential for the development of an effective environment in which to realise the full diagnostic potential of MI data. Further stakeholder engagement could seek to apply the findings of the SWOT analysis to a policy Delphi methodology, as used elsewhere.

  9. Mental Imagery Scale: a new measurement tool to assess structural features of mental representations

    NASA Astrophysics Data System (ADS)

    D'Ercole, Martina; Castelli, Paolo; Giannini, Anna Maria; Sbrilli, Antonella

    2010-05-01

    Mental imagery is a quasi-perceptual experience which resembles perceptual experience, but occurring without (appropriate) external stimuli. It is a form of mental representation and is often considered centrally involved in visuo-spatial reasoning and inventive and creative thought. Although imagery ability is assumed to be functionally independent of verbal systems, it is still considered to interact with verbal representations, enabling objects to be named and names to evoke images. In literature, most measurement tools for evaluating imagery capacity are self-report instruments focusing on differences in individuals. In the present work, we applied a Mental Imagery Scale (MIS) to mental images derived from verbal descriptions in order to assess the structural features of such mental representations. This is a key theme for those disciplines which need to turn objects and representations into words and vice versa, such as art or architectural didactics. To this aim, an MIS questionnaire was administered to 262 participants. The questionnaire, originally consisting of a 33-item 5-step Likert scale, was reduced to 28 items covering six areas: (1) Image Formation Speed, (2) Permanence/Stability, (3) Dimensions, (4) Level of Detail/Grain, (5) Distance and (6) Depth of Field or Perspective. Factor analysis confirmed our six-factor hypothesis underlying the 28 items.

  10. The Global Modeling and Assimilation Office (GMAO) 4d-Var and its Adjoint-based Tools

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Tremolet, Yannick

    2008-01-01

    The fifth generation of the Goddard Earth Observing System (GEOS-5) Data Assimilation System (DAS) is a 3d-var system that uses the Grid-point Statistical Interpolation (GSI) system developed in collaboration with NCEP, and a general circulation model developed at Goddard, that includes the finite-volume hydrodynamics of GEOS-4 wrapped in the Earth System Modeling Framework and physical packages tuned to provide a reliable hydrological cycle for the integration of the Modern Era Retrospective-analysis for Research and Applications (MERRA). This MERRA system is essentially complete and the next generation GEOS is under intense development. A prototype next generation system is now complete and has been producing preliminary results. This prototype system replaces the GSI-based Incremental Analysis Update procedure with a GSI-based 4d-var which uses the adjoint of the finite-volume hydrodynamics of GEOS-4 together with a vertical diffusing scheme for simplified physics. As part of this development we have kept the GEOS-5 IAU procedure as an option and have added the capability to experiment with a First Guess at the Appropriate Time (FGAT) procedure, thus allowing for at least three modes of running the data assimilation experiments. The prototype system is a large extension of GEOS-5 as it also includes various adjoint-based tools, namely, a forecast sensitivity tool, a singular vector tool, and an observation impact tool, that combines the model sensitivity tool with a GSI-based adjoint tool. These features bring the global data assimilation effort at Goddard up to date with technologies used in data assimilation systems at major meteorological centers elsewhere. Various aspects of the next generation GEOS will be discussed during the presentation at the Workshop, and preliminary results will illustrate the discussion.

  11. ISOT_Calc: A versatile tool for parameter estimation in sorption isotherms

    NASA Astrophysics Data System (ADS)

    Beltrán, José L.; Pignatello, Joseph J.; Teixidó, Marc

    2016-09-01

    Geochemists and soil chemists commonly use parametrized sorption data to assess transport and impact of pollutants in the environment. However, this evaluation is often hampered by a lack of detailed sorption data analysis, which implies further non-accurate transport modeling. To this end, we present a novel software tool to precisely analyze and interpret sorption isotherm data. Our developed tool, coded in Visual Basic for Applications (VBA), operates embedded within the Microsoft Excel™ environment. It consists of a user-defined function named ISOT_Calc, followed by a supplementary optimization Excel macro (Ref_GN_LM). The ISOT_Calc function estimates the solute equilibrium concentration in the aqueous and solid phases (Ce and q, respectively). Hence, it represents a very flexible way in the optimization of the sorption isotherm parameters, as it can be carried out over the residuals of q, Ce, or both simultaneously (i.e., orthogonal distance regression). The developed function includes the most usual sorption isotherm models, as predefined equations, as well as the possibility to easily introduce custom-defined ones. Regarding the Ref_GN_LM macro, it allows the parameter optimization by using a Levenberg-Marquardt modified Gauss-Newton iterative procedure. In order to evaluate the performance of the presented tool, both function and optimization macro have been applied to different sorption data examples described in the literature. Results showed that the optimization of the isotherm parameters was successfully achieved in all cases, indicating the robustness and reliability of the developed tool. Thus, the presented software tool, available to researchers and students for free, has proven to be a user-friendly and an interesting alternative to conventional fitting tools used in sorption data analysis.
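
    As a hedged Python analogue of the kind of fitting ISOT_Calc performs (not the Excel/VBA tool itself), the sketch below fits a Freundlich isotherm q = Kf·Ce^n to synthetic sorption data by ordinary non-linear least squares over the residuals of q; the tool described above additionally supports optimization over Ce residuals or both (orthogonal distance regression).

      # Minimal sketch (a Python analogue, not the Excel/VBA tool): fitting a
      # Freundlich isotherm q = Kf * Ce**n by non-linear least squares.
      # The data points are synthetic placeholders.
      import numpy as np
      from scipy.optimize import curve_fit

      def freundlich(ce, kf, n):
          return kf * ce**n

      ce = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])        # equilibrium concentration
      q = np.array([0.8, 2.1, 3.2, 4.9, 8.5, 12.4])         # sorbed amount

      (kf, n), pcov = curve_fit(freundlich, ce, q, p0=[1.0, 0.5])
      perr = np.sqrt(np.diag(pcov))                         # 1-sigma parameter errors
      print(f"Kf = {kf:.2f} +/- {perr[0]:.2f},  n = {n:.2f} +/- {perr[1]:.2f}")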

  12. A forensic identification case and DPid - can it be a useful tool?

    PubMed

    Queiroz, Cristhiane Leão de; Bostock, Ellen Marie; Santos, Carlos Ferreira; Guimarães, Marco Aurélio; Silva, Ricardo Henrique Alves da

    2017-01-01

    The aim of this study was to present DPid as an important tool with potential application for solving cases involving dental prostheses, such as the forensic case reported here, in which a skull, a denture and dental records were received for analysis. Human identification is still challenging in various circumstances, and Dental Prosthetics Identification (DPid) stores the patient's name and prosthesis information and provides access through a code embedded in the dental prosthesis or on an identification card. All of this information is digitally stored on servers accessible only by dentists, laboratory technicians and patients, each with their own level of secure access. DPid provides a complete single-source list of all dental prosthesis features (materials and components) under complete and secure documentation used for clinical follow-up and for human identification. Had the DPid tool been available in this forensic case, the case could have been solved without requiring the DNA exam that ultimately confirmed the dental comparison of antemortem and postmortem records and concluded the case as a positive identification.

  13. Visualization of multiple influences on ocellar flight control in giant honeybees with the data-mining tool Viscovery SOMine.

    PubMed

    Kastberger, G; Kranner, G

    2000-02-01

    Viscovery SOMine is a software tool for advanced analysis and monitoring of numerical data sets. It was developed for professional use in business, industry, and science and to support dependency analysis, deviation detection, unsupervised clustering, nonlinear regression, data association, pattern recognition, and animated monitoring. Based on the concept of self-organizing maps (SOMs), it employs a robust variant of unsupervised neural networks--namely, Kohonen's Batch-SOM, which is further enhanced with a new scaling technique for speeding up the learning process. This tool provides a powerful means by which to analyze complex data sets without prior statistical knowledge. The data representation contained in the trained SOM is systematically converted to be used in a spectrum of visualization techniques, such as evaluating dependencies between components, investigating geometric properties of the data distribution, searching for clusters, or monitoring new data. We have used this software tool to analyze and visualize multiple influences of the ocellar system on free-flight behavior in giant honeybees. Occlusion of ocelli will affect orienting reactivities in relation to flight target, level of disturbance, and position of the bee in the flight chamber; it will induce phototaxis and make orienting imprecise and dependent on motivational settings. Ocelli permit the adjustment of orienting strategies to environmental demands by enforcing abilities such as centering or flight kinetics and by providing independent control of posture and flight course.
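
    To illustrate the self-organizing map principle underlying the tool (not Viscovery's enhanced Batch-SOM), the following Python sketch implements a plain online Kohonen update loop on random placeholder data.

      # Minimal sketch of Kohonen's self-organizing map: an online update loop in
      # numpy. This illustrates the principle only; it is not the Batch-SOM variant
      # used by the tool. The input data are random placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.random((500, 3))                                       # 500 samples, 3 features
      grid = np.array([(i, j) for i in range(10) for j in range(10)])   # 10x10 map
      weights = rng.random((100, 3))

      for t, x in enumerate(rng.permutation(data)):
          lr = 0.5 * np.exp(-t / 500)                        # decaying learning rate
          sigma = 3.0 * np.exp(-t / 500)                     # decaying neighbourhood radius
          bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
          dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)      # grid distance to the BMU
          h = np.exp(-dist2 / (2 * sigma ** 2))              # neighbourhood function
          weights += lr * h[:, None] * (x - weights)

      print("Trained weight range:", weights.min().round(2), "-", weights.max().round(2))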

  14. Study of the Effect of Lubricant Emulsion Percentage and Tool Material on Surface Roughness in Machining of EN-AC 48000 Alloy

    NASA Astrophysics Data System (ADS)

    Soltani, E.; Shahali, H.; Zarepour, H.

    2011-01-01

    In this paper, the effect of machining parameters, namely, lubricant emulsion percentage and tool material on surface roughness has been studied in machining process of EN-AC 48000 aluminum alloy. EN-AC 48000 aluminum alloy is an important alloy in industries. Machining of this alloy is of vital importance due to built-up edge and tool wear. A L9 Taguchi standard orthogonal array has been applied as experimental design to investigate the effect of the factors and their interaction. Nine machining tests have been carried out with three random replications resulting in 27 experiments. Three type of cutting tools including coated carbide (CD1810), uncoated carbide (H10), and polycrystalline diamond (CD10) have been used in this research. Emulsion percentage of lubricant is selected at three levels including 3%, 5% and 10%. Statistical analysis has been employed to study the effect of factors and their interactions using ANOVA method. Moreover, the optimal factors level has been achieved through signal to noise ratio (S/N) analysis. Also, a regression model has been provided to predict the surface roughness. Finally, the results of the confirmation tests have been presented to verify the adequacy of the predictive model. In this research, surface quality was improved by 9% using lubricant and statistical optimization method.
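
    Taguchi analysis of surface roughness typically uses the smaller-is-better signal-to-noise ratio, S/N = -10·log10(mean(y²)); the Python sketch below computes it for hypothetical replicate roughness values and illustrative trial labels, illustrating the S/N step mentioned above.

      # Minimal sketch: the "smaller-is-better" signal-to-noise ratio used in
      # Taguchi analysis, S/N = -10 * log10(mean(y^2)). The replicate roughness
      # values and trial labels are hypothetical.
      import numpy as np

      def sn_smaller_is_better(replicates):
          y = np.asarray(replicates, dtype=float)
          return -10 * np.log10(np.mean(y**2))

      trials = {
          "3% emulsion, PCD tool":       [0.42, 0.45, 0.40],
          "5% emulsion, coated tool":    [0.61, 0.58, 0.64],
          "10% emulsion, uncoated tool": [0.88, 0.92, 0.85],
      }
      for name, ra in trials.items():
          print(f"{name}: S/N = {sn_smaller_is_better(ra):.2f} dB")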

  15. INDOOR AIR QUALITY AND INHALATION EXPOSURE - SIMULATION TOOL KIT

    EPA Science Inventory

    A Microsoft Windows-based indoor air quality (IAQ) simulation software package is presented. Named Simulation Tool Kit for Indoor Air Quality and Inhalation Exposure, or IAQX for short, this package complements and supplements existing IAQ simulation programs and is desi...

  16. An R package for the integrated analysis of metabolomics and spectral data.

    PubMed

    Costa, Christopher; Maraschin, Marcelo; Rocha, Miguel

    2016-06-01

    Recently, there has been a growing interest in the field of metabolomics, materialized by a remarkable growth in experimental techniques, available data and related biological applications. Indeed, techniques such as nuclear magnetic resonance, gas or liquid chromatography, mass spectrometry, infrared and UV-visible spectroscopies have provided extensive datasets that can help in tasks as biological and biomedical discovery, biotechnology and drug development. However, as it happens with other omics data, the analysis of metabolomics datasets provides multiple challenges, both in terms of methodologies and in the development of appropriate computational tools. Indeed, none of the available software tools addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful, yet simple to use environment. The package, already available in CRAN, is accompanied by a web site where users can deposit datasets, scripts and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Stroop effects in Alzheimer's disease: selective attention speed of processing, or color-naming? A meta-analysis.

    PubMed

    Ben-David, Boaz M; Tewari, Anita; Shakuf, Vered; Van Lieshout, Pascal H H M

    2014-01-01

    Selective attention, an essential part of daily activity, is often impaired in people with Alzheimer's disease (AD). Usually, it is measured by the color-word Stroop test. However, there is no universal agreement whether performance on the Stroop task changes significantly in AD patients; or if so, whether an increase in Stroop effects reflects a decrease in selective attention, a slowing in generalized speed of processing (SOP), or is the result of degraded color-vision. The current study investigated the impact of AD on Stroop performance and its potential sources in a meta-analysis and mathematical modeling of 18 studies, comparing 637 AD patients with 977 healthy age-matched participants. We found a significant increase in Stroop effects for AD patients, across studies. This AD-related change was associated with a slowing in SOP. However, after correcting for a bias in the distribution of latencies, SOP could only explain a moderate portion of the total variance (25%). Moreover, we found strong evidence for an AD-related increase in the latency difference between naming the font-color and reading color-neutral stimuli (r2 = 0.98). This increase in the dimensional imbalance between color-naming and word-reading was found to explain a significant portion of the AD-related increase in Stroop effects (r2 = 0.87), hinting on a possible sensory source. In conclusion, our analysis highlights the importance of controlling for sensory degradation and SOP when testing cognitive performance and, specifically, selective attention in AD patients. We also suggest possible measures and tools to better test for selective attention in AD.

  18. PIPI: PTM-Invariant Peptide Identification Using Coding Method.

    PubMed

    Yu, Fengchao; Li, Ning; Yu, Weichuan

    2016-12-02

    In computational proteomics, the identification of peptides with an unlimited number of post-translational modification (PTM) types is a challenging task. The computational cost associated with database search increases exponentially with respect to the number of modified amino acids and linearly with respect to the number of potential PTM types at each amino acid. The problem becomes intractable very quickly if we want to enumerate all possible PTM patterns. To address this issue, one group of methods named restricted tools (including Mascot, Comet, and MS-GF+) only allow a small number of PTM types in database search process. Alternatively, the other group of methods named unrestricted tools (including MS-Alignment, ProteinProspector, and MODa) avoids enumerating PTM patterns with an alignment-based approach to localizing and characterizing modified amino acids. However, because of the large search space and PTM localization issue, the sensitivity of these unrestricted tools is low. This paper proposes a novel method named PIPI to achieve PTM-invariant peptide identification. PIPI belongs to the category of unrestricted tools. It first codes peptide sequences into Boolean vectors and codes experimental spectra into real-valued vectors. For each coded spectrum, it then searches the coded sequence database to find the top scored peptide sequences as candidates. After that, PIPI uses dynamic programming to localize and characterize modified amino acids in each candidate. We used simulation experiments and real data experiments to evaluate the performance in comparison with restricted tools (i.e., Mascot, Comet, and MS-GF+) and unrestricted tools (i.e., Mascot with error tolerant search, MS-Alignment, ProteinProspector, and MODa). Comparison with restricted tools shows that PIPI has a close sensitivity and running speed. Comparison with unrestricted tools shows that PIPI has the highest sensitivity except for Mascot with error tolerant search and ProteinProspector. These two tools simplify the task by only considering up to one modified amino acid in each peptide, which results in a higher sensitivity but has difficulty in dealing with multiple modified amino acids. The simulation experiments also show that PIPI has the lowest false discovery proportion, the highest PTM characterization accuracy, and the shortest running time among the unrestricted tools.
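
    One plausible toy reading of the coding idea (not PIPI's actual scheme) is sketched below in Python: a peptide is coded as a Boolean vector over m/z bins occupied by its theoretical prefix-fragment masses, a spectrum as a real-valued vector of binned intensities, and candidates are scored by a dot product.

      # Toy illustration only, not PIPI's actual coding: peptides become Boolean
      # vectors over m/z bins of prefix-fragment masses (b-ion-like, ignoring
      # proton/water corrections), spectra become real-valued binned intensity
      # vectors, and candidates are scored by a dot product.
      import numpy as np

      MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "K": 128.09496}
      BIN, MAX_MZ = 1.0, 600.0

      def peptide_vector(seq):
          prefix = np.cumsum([MONO[a] for a in seq])          # prefix-fragment masses
          vec = np.zeros(int(MAX_MZ / BIN))
          for m in prefix[:-1]:
              vec[int(m / BIN)] = 1.0                         # Boolean occupancy of a bin
          return vec

      def spectrum_vector(peaks):
          vec = np.zeros(int(MAX_MZ / BIN))
          for mz, intensity in peaks:
              vec[int(mz / BIN)] += intensity
          return vec

      spectrum = spectrum_vector([(57.0, 10.0), (128.1, 8.0), (215.1, 5.0)])
      for candidate in ["GAK", "GSK", "PAK"]:
          score = float(peptide_vector(candidate) @ spectrum)
          print(candidate, score)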

  19. Sizing Determination Final Report

    DTIC Science & Technology

    1988-02-01

    [Abstract not available: the indexed excerpt consists of a subject data form and a garbled table of anthropometric head measurements (e.g., minimum frontal arc, bitragion minimum frontal arc) taken with tape and marker tools.]

  20. Motor-Iconicity of Sign Language Does Not Alter the Neural Systems Underlying Tool and Action Naming

    ERIC Educational Resources Information Center

    Emmorey, Karen; Grabowski, Thomas; McCullough, Stephen; Damasio, Hannah; Ponto, Laurie; Hichwa, Richard; Bellugi, Ursula

    2004-01-01

    Positron emission tomography was used to investigate whether the motor-iconic basis of certain forms in American Sign Language (ASL) partially alters the neural systems engaged during lexical retrieval. Most ASL nouns denoting tools and ASL verbs referring to tool-based actions are produced with a handshape representing the human hand holding a…

  1. A web-based information system for management and analysis of patient data after refractive eye surgery.

    PubMed

    Zuberbuhler, Bruno; Galloway, Peter; Reddy, Aravind; Saldana, Manuel; Gale, Richard

    2007-12-01

    The aim was to develop a software tool for refractive surgeons using a standard user-friendly web-based interface, providing the user with a secure environment to protect large volumes of patient data. The software application was named "Internet-based refractive analysis" (IBRA), and was programmed in PHP, HTML, and JavaScript, attached to the open-source MySQL database. IBRA facilitated internationally accepted presentation methods including the stability chart, the predictability chart and the safety chart; it was able to perform vector analysis for the course of a single patient or for group data. With the integrated nomogram calculation, treatment could be customised to reduce the postoperative refractive error. Multicenter functions permitted quality-control comparisons between different surgeons and laser units.

  2. Global catalogue of microorganisms (gcm): a comprehensive database and information retrieval, analysis, and visualization system for microbial resources

    PubMed Central

    2013-01-01

    Background Throughout the long history of industrial and academic research, many microbes have been isolated, characterized and preserved (whenever possible) in culture collections. With the steady accumulation in observational data of biodiversity as well as microbial sequencing data, bio-resource centers have to function as data and information repositories to serve academia, industry, and regulators on behalf of and for the general public. Hence, the World Data Centre for Microorganisms (WDCM) started to take its responsibility for constructing an effective information environment that would promote and sustain microbial research data activities, and bridge the gaps currently present within and outside the microbiology communities. Description Strain catalogue information was collected from collections by online submission. We developed tools for automatic extraction of strain numbers and species names from various sources, including GenBank, PubMed, and SwissProt. These new tools connect strain catalogue information with the corresponding nucleotide and protein sequences, as well as to genome sequences and references citing a particular strain. All information has been processed and compiled in order to create a comprehensive database of microbial resources, and was named Global Catalogue of Microorganisms (GCM). The current version of GCM contains information on over 273,933 strains, which includes 43,436 bacterial, fungal, and archaeal species from 52 collections in 25 countries and regions. A number of online analysis and statistical tools have been integrated, together with advanced search functions, which should greatly facilitate the exploration of the content of GCM. Conclusion A comprehensive dynamic database of microbial resources has been created, which unveils the resources preserved in culture collections especially for those whose informatics infrastructures are still under development, which should foster cumulative research, facilitating the activities of microbiologists world-wide, who work in both public and industrial research centres. This database is available from http://gcm.wfcc.info. PMID:24377417
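
    As a rough illustration of the extraction step, the sketch below pulls candidate culture-collection strain numbers and binomial species names out of free text with regular expressions. It is not the GCM pipeline; the collection acronyms, the patterns, and the example record are assumptions chosen for the demonstration.

```python
"""A minimal sketch of pulling candidate strain numbers and binomial species
names out of free text (e.g. GenBank or PubMed records) with regexes."""
import re

# Acronyms of a few well-known culture collections (illustrative subset).
COLLECTIONS = r"(?:ATCC|DSM|DSMZ|JCM|NCTC|CBS|NRRL|CGMCC)"
STRAIN_RE = re.compile(rf"\b{COLLECTIONS}\s?-?\s?\d{{2,6}}\b")

# Very rough binomial pattern: capitalised genus plus lower-case epithet.
SPECIES_RE = re.compile(r"\b[A-Z][a-z]+ [a-z]{3,}\b")

def extract(text: str) -> dict[str, list[str]]:
    """Return candidate strain numbers and species names found in the text."""
    return {
        "strains": STRAIN_RE.findall(text),
        "species": SPECIES_RE.findall(text),
    }

if __name__ == "__main__":
    record = ("Escherichia coli ATCC 25922 and Lactobacillus plantarum "
              "DSM 20174 were used as reference strains.")
    print(extract(record))
```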

  3. MicroScope—an integrated microbial resource for the curation and comparative analysis of genomic and metabolic data

    PubMed Central

    Vallenet, David; Belda, Eugeni; Calteau, Alexandra; Cruveiller, Stéphane; Engelen, Stefan; Lajus, Aurélie; Le Fèvre, François; Longin, Cyrille; Mornico, Damien; Roche, David; Rouy, Zoé; Salvignol, Gregory; Scarpelli, Claude; Thil Smith, Adam Alexander; Weiman, Marion; Médigue, Claudine

    2013-01-01

    MicroScope is an integrated platform dedicated to both the methodical updating of microbial genome annotation and to comparative analysis. The resource provides data from completed and ongoing genome projects (automatic and expert annotations), together with data sources from post-genomic experiments (i.e. transcriptomics, mutant collections) allowing users to perfect and improve the understanding of gene functions. MicroScope (http://www.genoscope.cns.fr/agc/microscope) combines tools and graphical interfaces to analyse genomes and to perform the manual curation of gene annotations in a comparative context. Since its first publication in January 2006, the system (previously named MaGe for Magnifying Genomes) has been continuously extended both in terms of data content and analysis tools. The last update of MicroScope was published in 2009 in the Database journal. Today, the resource contains data for >1600 microbial genomes, of which ∼300 are manually curated and maintained by biologists (1200 personal accounts today). Expert annotations are continuously gathered in the MicroScope database (∼50 000 a year), contributing to the improvement of the quality of microbial genome annotations. Improved data browsing and searching tools have been added, original tools useful in the context of expert annotation have been developed and integrated, and the website has been significantly redesigned to be more user-friendly. Furthermore, in the context of the European project Microme (Framework Program 7 Collaborative Project), MicroScope is becoming a resource providing for the curation and analysis of both genomic and metabolic data. An increasing number of projects are related to the study of environmental bacterial (meta)genomes that are able to metabolize a large variety of chemical compounds that may be of high industrial interest. PMID:23193269

  4. Galaxy-M: a Galaxy workflow for processing and analyzing direct infusion and liquid chromatography mass spectrometry-based metabolomics data.

    PubMed

    Davidson, Robert L; Weber, Ralf J M; Liu, Haoyu; Sharma-Oates, Archana; Viant, Mark R

    2016-01-01

    Metabolomics is increasingly recognized as an invaluable tool in the biological, medical and environmental sciences yet lags behind the methodological maturity of other omics fields. To achieve its full potential, including the integration of multiple omics modalities, the accessibility, standardization and reproducibility of computational metabolomics tools must be improved significantly. Here we present our end-to-end mass spectrometry metabolomics workflow in the widely used platform, Galaxy. Named Galaxy-M, our workflow has been developed for both direct infusion mass spectrometry (DIMS) and liquid chromatography mass spectrometry (LC-MS) metabolomics. The range of tools presented spans from processing of raw data, e.g. peak picking and alignment, through data cleansing, e.g. missing value imputation, to preparation for statistical analysis, e.g. normalization and scaling, and principal components analysis (PCA) with associated statistical evaluation. We demonstrate the ease of using these Galaxy workflows via the analysis of DIMS and LC-MS datasets, and provide PCA scores and associated statistics to help other users to ensure that they can accurately repeat the processing and analysis of these two datasets. Galaxy and data are all provided pre-installed in a virtual machine (VM) that can be downloaded from the GigaDB repository. Additionally, source code, executables and installation instructions are available from GitHub. The Galaxy platform has enabled us to produce an easily accessible and reproducible computational metabolomics workflow. More tools could be added by the community to expand its functionality. We recommend that Galaxy-M workflow files are included within the supplementary information of publications, enabling metabolomics studies to achieve greater reproducibility.
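
    The kind of processing chain Galaxy-M wraps can be approximated in a few lines of plain Python. The sketch below runs an imputation, normalisation, scaling, and PCA sequence with NumPy and scikit-learn on a synthetic intensity matrix; the matrix, the half-minimum imputation rule, and the total-signal normalisation are illustrative assumptions, not the tool's exact defaults.

```python
"""A minimal metabolomics-style processing sketch:
imputation -> normalisation -> scaling -> PCA."""
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Assumed toy intensity matrix: 20 samples x 50 features with a few NaNs.
X = rng.lognormal(mean=5.0, sigma=1.0, size=(20, 50))
X[rng.random(X.shape) < 0.05] = np.nan

# 1. Missing-value imputation: replace NaNs with half the feature minimum.
col_min = np.nanmin(X, axis=0)
X = np.where(np.isnan(X), col_min / 2.0, X)

# 2. Normalisation: divide each sample by its total signal.
X = X / X.sum(axis=1, keepdims=True)

# 3. Log transform and unit-variance (auto) scaling per feature.
X = np.log(X)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# 4. PCA for an overview of sample structure.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("first sample scores:", scores[0])
```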

  5. Towards an integral computer environment supporting system operations analysis and conceptual design

    NASA Technical Reports Server (NTRS)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1994-01-01

    VITROCISET has developed in house a prototype tool named System Dynamic Analysis Environment (SDAE) to support system engineering activities in the initial definition phase of a complex space system. The SDAE goal is to provide powerful means for the definition, analysis, and trade-off of operations and design concepts for the space and ground elements involved in a mission. For this purpose SDAE implements a dedicated modeling methodology based on the integration of different modern (static and dynamic) analysis and simulation techniques. The resulting 'system model' is capable of representing all the operational, functional, and behavioral aspects of the system elements which are part of a mission. The execution of customized model simulations enables: the validation of selected concepts with respect to mission requirements; the in-depth investigation of mission-specific operational and/or architectural aspects; and the early assessment of performances required by the system elements to cope with mission constraints and objectives. Due to its characteristics, SDAE is particularly tailored for nonconventional or highly complex systems, which require a great analysis effort in their early definition stages. SDAE runs under PC-Windows and is currently used by the VITROCISET system engineering group. This paper describes the SDAE main features, showing some tool output examples.

  6. Pantomiming tool use with an imaginary tool in hand as compared to demonstration with tool in hand specifically modulates the left middle and superior temporal gyri.

    PubMed

    Lausberg, Hedda; Kazzer, Philipp; Heekeren, Hauke R; Wartenburger, Isabell

    2015-10-01

    Neuropsychological lesion studies evidence the necessity to differentiate between various forms of tool-related actions such as real tool use, tool use demonstration with tool in hand and without physical target object, and pantomime without tool in hand. However, thus far, neuroimaging studies have focused primarily on investigating tool use pantomimes. The present fMRI study investigates pantomime without tool in hand as compared to tool use demonstration with tool in hand in order to explore patterns of cerebral signal modulation associated with acting with imaginary tools in hand. Fifteen participants performed with either hand (i) tool use pantomime with an imaginary tool in hand in response to visual tool presentation and (ii) tool use demonstration with tool in hand in response to visual-tactile tool presentation. In both conditions, no physical target object was present. The conjunction analysis of right- and left-hand executions of tool use pantomime relative to tool use demonstration yielded significant activity in the left middle and superior temporal lobe. In contrast, demonstration relative to pantomime revealed large bihemispherically distributed homologous areas of activity. Thus far, fMRI studies have demonstrated the relevance of the left middle and superior temporal gyri in viewing, naming, and matching tools and related actions and contexts. Since in our study all these factors were equally (ir)relevant both in the tool use pantomime and the tool use demonstration conditions, the present findings enhance the knowledge about the function of these brain regions in tool-related cognitive processes. The two contrasted conditions differ only in that the pantomime condition requires the individual to act with an imaginary tool in hand. Therefore, we suggest that the left middle and superior temporal gyri are specifically involved in integrating the projected mental image of a tool in the execution of a tool-specific movement concept. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Semantic Web Compatible Names and Descriptions for Organisms

    NASA Astrophysics Data System (ADS)

    Wang, H.; Wilson, N.; McGuinness, D. L.

    2012-12-01

    Modern scientific names are critical for understanding the biological literature and provide a valuable way to understand evolutionary relationships. To validly publish a name, a description is required to separate the described group of organisms from those described by other names at the same level of the taxonomic hierarchy. The frequent revision of descriptions due to new evolutionary evidence has led to situations where a single given scientific name may over time have multiple descriptions associated with it and a given published description may apply to multiple scientific names. Because of these many-to-many relationships between scientific names and descriptions, the usage of scientific names as a proxy for descriptions is inevitably ambiguous. Another issue lies in the fact that the precise application of scientific names often requires careful microscopic work, or increasingly, genetic sequencing, as scientific names are focused on the evolutionary relatedness between and within named groups such as species, genera, families, etc. This is problematic for many audiences, especially field biologists, who often do not have access to the instruments and tools required to make identifications on a microscopic or genetic basis. To better connect scientific names to descriptions and find a more convenient way to support computer-assisted identification, we proposed the Semantic Vernacular System, a novel naming system that creates named, machine-interpretable descriptions for groups of organisms, and is compatible with the Semantic Web. Unlike the evolutionary relationship based scientific naming system, it emphasizes the observable features of organisms. By independently naming the descriptions composed of sets of observational features, as well as maintaining connections to scientific names, it preserves the observational data used to identify organisms. The system is designed to support a peer-review mechanism for creating new names, and uses a controlled vocabulary encoded in the Web Ontology Language to represent the observational features. A prototype of the system is currently under development in collaboration with the Mushroom Observer website. It allows users to propose new names and descriptions for fungi, provide feedback on those proposals, and ultimately have them formally approved. It relies on SPARQL queries and semantic reasoning for data management. This effort will offer the mycology community a knowledge base of fungal observational features and a tool for identifying fungal observations. It will also serve as an operational specification of how the Semantic Vernacular System can be used in practice in one scientific community (in this case mycology).
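
    A minimal sketch of the underlying representation is shown below, assuming a made-up namespace and property names rather than the system's actual schema: a few named descriptions and their observable features are stored as RDF triples with rdflib and then retrieved with a SPARQL query.

```python
"""A small illustrative sketch: named descriptions with observable features
as RDF triples, queried with SPARQL via rdflib. All URIs are invented."""
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/svs/")
g = Graph()

# Two hypothetical vernacular descriptions with observable features.
g.add((EX.AmanitaLikeWhiteCap, EX.hasFeature, Literal("white cap")))
g.add((EX.AmanitaLikeWhiteCap, EX.hasFeature, Literal("ring on stem")))
g.add((EX.YellowPoredBolete, EX.hasFeature, Literal("yellow pores")))
g.add((EX.YellowPoredBolete, EX.hasFeature, Literal("brown cap")))

# Find every named description that lists "white cap" among its features.
query = """
PREFIX ex: <http://example.org/svs/>
SELECT ?description WHERE {
    ?description ex:hasFeature "white cap" .
}
"""
for row in g.query(query):
    print(row.description)
```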

  8. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808
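
    A minimal MNE-Python sketch of the first part of such a workflow (filtering, epoching, averaging) on the package's bundled sample dataset is shown below; the filter band, the event selection, and the epoch window are arbitrary illustration choices, and the dataset is downloaded on first use.

```python
"""A short MNE-Python preprocessing sketch on the bundled sample recording."""
import os.path as op
import mne

# Load the bundled sample recording (MEG + EEG, auditory/visual task).
data_path = mne.datasets.sample.data_path()  # downloads the data on first use
raw_fname = op.join(str(data_path), "MEG", "sample", "sample_audvis_raw.fif")
raw = mne.io.read_raw_fif(raw_fname, preload=True)

# Basic preprocessing: band-pass filter the continuous data.
raw.filter(l_freq=1.0, h_freq=40.0)

# Epoch around stimulus triggers and average to an evoked response.
events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(raw, events, event_id={"auditory/left": 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0), preload=True)
evoked = epochs.average()
print(evoked)
```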

  9. Design of oligonucleotides for microarrays and perspectives for design of multi-transcriptome arrays.

    PubMed

    Nielsen, Henrik Bjørn; Wernersson, Rasmus; Knudsen, Steen

    2003-07-01

    Optimal design of oligonucleotides for microarrays involves tedious and laborious work evaluating potential oligonucleotides relative to a series of parameters. The currently available tools for this purpose are limited in their flexibility and do not present the oligonucleotide designer with an overview of these parameters. We present here a flexible tool named OligoWiz for designing oligonucleotides for multiple purposes. OligoWiz presents a set of parameter scores in a graphical interface to facilitate an overview for the user. Additional custom parameter scores can easily be added to the program to extend the default parameters: homology, DeltaTm, low-complexity, position and GATC-only. Furthermore we present an analysis of the limitations in designing oligonucleotide sets that can detect transcripts from multiple organisms. OligoWiz is available at www.cbs.dtu.dk/services/OligoWiz/.
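
    The idea of combining per-oligo parameter scores can be sketched as follows. The formulas and weights below are illustrative stand-ins, not OligoWiz's actual scoring functions: each candidate gets scores in [0, 1] for Tm closeness, sequence complexity, unambiguous bases, and position, which are merged into one weighted score.

```python
"""A toy per-candidate oligo scoring scheme with a weighted combination."""

def tm_score(oligo: str, target_tm: float = 65.0) -> float:
    """Score closeness to a target melting temperature (rough GC% estimate)."""
    gc = sum(oligo.count(b) for b in "GC")
    tm = 64.9 + 41.0 * (gc - 16.4) / len(oligo)   # crude Tm approximation
    return max(0.0, 1.0 - abs(tm - target_tm) / 20.0)

def complexity_score(oligo: str) -> float:
    """Penalise low-complexity sequence: fraction of distinct trinucleotides."""
    kmers = {oligo[i:i + 3] for i in range(len(oligo) - 2)}
    return len(kmers) / (len(oligo) - 2)

def gatc_only_score(oligo: str) -> float:
    """1.0 if the oligo contains only unambiguous G/A/T/C bases, else 0.0."""
    return 1.0 if set(oligo) <= set("GATC") else 0.0

def position_score(start: int, transcript_len: int) -> float:
    """Favour oligos closer to the 3' end of the transcript."""
    return start / transcript_len

def combined_score(oligo: str, start: int, transcript_len: int) -> float:
    """Weighted sum of the individual parameter scores (weights assumed)."""
    weights = {"tm": 0.4, "complexity": 0.3, "gatc": 0.2, "position": 0.1}
    return (weights["tm"] * tm_score(oligo)
            + weights["complexity"] * complexity_score(oligo)
            + weights["gatc"] * gatc_only_score(oligo)
            + weights["position"] * position_score(start, transcript_len))

if __name__ == "__main__":
    candidate = "ATGCGTACGTTAGCCGATCGATTACGGCATCGATCGGATCGTACGATCGA"
    print(round(combined_score(candidate, start=900, transcript_len=1200), 3))
```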

  10. Utilization of wheel dop based on ergonomic aspects

    NASA Astrophysics Data System (ADS)

    Widiasih, Wiwin; Murnawan, Hery; Setiawan, Danny

    2017-06-01

    Time is an important part of daily life, and people need tools to measure it, namely clocks and watches. Because nearly everyone needs such a product, it presents a business opportunity for manufacturers. However, relying on existing demand alone is not enough to establish a business; innovation is also necessary. Innovation is difficult, but not impossible, to achieve, and creating an innovative product can be a strategy to win a competitive market. This study aimed to create an innovative product based on ergonomic aspects by utilizing a wheel dop. The methodology consisted of a preliminary study, product planning and development, and product analysis. The resulting product utilizes a wheel dop and was designed according to ergonomic principles.

  11. GeoTools: An android phone application in geology

    NASA Astrophysics Data System (ADS)

    Weng, Yi-Hua; Sun, Fu-Shing; Grigsby, Jeffry D.

    2012-07-01

    GeoTools is an Android application that can carry out several tasks essential in geological field studies. By employing the accelerometer in the Android phone, the application turns the handset into a pocket transit compass by which users can measure directions, strike and dip of a bedding plane, or trend and plunge of a fold. The application integrates functionalities of photo taking, videotaping, audio recording, and note writing with GPS coordinates to track the location at which each datum was taken. A time-stamped file name is shared by the various types of data taken at the same location. Data collected at different locations are named in a chronological sequence. At the end of each set of operations, GeoTools also automatically generates an XML file to summarize the characteristics of data being collected corresponding to a specific location. In this way, GeoTools allows geologists to use a multimedia approach to document their field observations with a clear data organization scheme in one handy gadget.
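
    The geometry behind the dip measurement can be sketched independently of the app: with the phone resting on a bedding plane, gravity in device coordinates is tilted away from the device z-axis by the dip angle. The Python sketch below illustrates only this calculation; a true strike azimuth additionally requires the magnetometer, which is omitted here, and the example reading is invented.

```python
"""Back-of-the-envelope dip calculation from an accelerometer reading."""
import math

def dip_from_accelerometer(ax: float, ay: float, az: float) -> float:
    """Dip angle in degrees from a raw accelerometer vector (m/s^2)."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between gravity and the device z-axis (the plane's normal).
    return math.degrees(math.acos(abs(az) / g))

def dip_direction_device_frame(ax: float, ay: float) -> float:
    """Down-dip direction in the device's x-y plane, degrees from the +y axis."""
    return math.degrees(math.atan2(ax, ay)) % 360.0

if __name__ == "__main__":
    # Example reading: phone tilted ~20 degrees about its x-axis.
    ax, ay, az = 0.0, 9.81 * math.sin(math.radians(20)), 9.81 * math.cos(math.radians(20))
    print(round(dip_from_accelerometer(ax, ay, az), 1))   # ~20.0 degrees
    print(round(dip_direction_device_frame(ax, ay), 1))   # 0.0 -> along +y
```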

  12. A Novel Way to Relate Ontology Classes

    PubMed Central

    Choksi, Ami T.; Jinwala, Devesh C.

    2015-01-01

    The existing ontologies in the semantic web typically have anonymous union and intersection classes. The anonymous classes are limited in scope and may not be part of the whole inference process. Tools such as Pellet, Jena, and Protégé interpret collection classes as (a) equivalent/subclasses of a union class and (b) superclasses of an intersection class. As a result, there is a possibility that the tools will produce error-prone inference results for relations, namely, sub-, union, intersection, and equivalent relations, and those dependent on these relations, namely, complement. Verifying whether a class is the complement of another involves the use of sub- and equivalent relations. Motivated by the same, we (i) refine the test data set of the conference ontology by adding named, union, and intersection classes and (ii) propose a match algorithm to (a) calculate corrected subclass lists, (b) correctly relate intersection and union classes with their collection classes, and (c) match union, intersection, sub-, complement, and equivalent classes in a proper sequence, to avoid error-prone match results. We compare the results of our algorithms with those of a candidate reasoner, namely, the Pellet reasoner. To the best of our knowledge, ours is a unique attempt in establishing a novel way to relate ontology classes. PMID:25984560

  13. Comparing soil moisture anomalies from multiple independent sources over different regions across the globe

    NASA Astrophysics Data System (ADS)

    Cammalleri, Carmelo; Vogt, Jürgen V.; Bisselink, Bernard; de Roo, Ad

    2017-12-01

    Agricultural drought events can affect large regions across the world, implying the need for a suitable global tool for an accurate monitoring of this phenomenon. Soil moisture anomalies are considered a good metric to capture the occurrence of agricultural drought events, and they have become an important component of several operational drought monitoring systems. In the framework of the JRC Global Drought Observatory (GDO, http://edo.jrc.ec.europa.eu/gdo/), the suitability of three datasets as possible representations of root zone soil moisture anomalies has been evaluated: (1) the soil moisture from the Lisflood distributed hydrological model (namely LIS), (2) the remotely sensed Land Surface Temperature data from the MODIS satellite (namely LST), and (3) the ESA Climate Change Initiative combined passive/active microwave skin soil moisture dataset (namely CCI). Due to the independency of these three datasets, the triple collocation (TC) technique has been applied, aiming at quantifying the likely error associated with each dataset in comparison to the unknown true status of the system. TC analysis was performed on five macro-regions (namely North America, Europe, India, southern Africa and Australia) detected as suitable for the experiment, providing insight into the mutual relationship between these datasets as well as an assessment of the accuracy of each method. Even if no definitive statement on the spatial distribution of errors can be provided, a clear outcome of the TC analysis is the good performance of the remote sensing datasets, especially CCI, over dry regions such as Australia and southern Africa, whereas the outputs of LIS seem to be more reliable over areas that are well monitored through meteorological ground station networks, such as North America and Europe. In a global drought monitoring system, the results of the error analysis are used to design a weighted-average ensemble system that exploits the advantages of each dataset.
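
    The triple collocation estimate itself is compact enough to sketch. Assuming linear calibration and mutually uncorrelated errors, the error variance of each product (in its own units) follows from the pairwise covariances; the synthetic soil moisture anomalies and error levels below are invented to show that the recipe recovers them.

```python
"""Covariance-based triple collocation on synthetic anomaly time series."""
import numpy as np

rng = np.random.default_rng(1)
n = 5000
truth = rng.normal(0.0, 1.0, n)              # unknown true anomaly signal

# Three "products" observing the truth with independent random errors.
x = truth + rng.normal(0.0, 0.30, n)         # e.g. hydrological model output
y = 0.8 * truth + rng.normal(0.0, 0.40, n)   # e.g. LST-based proxy (scaled)
z = 1.2 * truth + rng.normal(0.0, 0.25, n)   # e.g. microwave product (scaled)

C = np.cov(np.vstack([x, y, z]))

# Error variance of each product, expressed in that product's own units.
err_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]

print("estimated error std devs:", np.sqrt([err_x, err_y, err_z]).round(3))
# Expected roughly [0.30, 0.40, 0.25], matching the simulated error levels.
```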

  14. Tools for language: patterned iconicity in sign language nouns and verbs.

    PubMed

    Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon

    2015-01-01

    When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.

  15. Unsupervised Biomedical Named Entity Recognition: Experiments with Clinical and Biological Texts

    PubMed Central

    Zhang, Shaodian; Elhadad, Noémie

    2013-01-01

    Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work. PMID:23954592
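
    The candidate-extraction stage can be caricatured in a few lines. In the sketch below, a crude stopword-based splitter stands in for the noun phrase chunker, and an inverse-document-frequency threshold filters out phrases that are too common to be interesting entities; the stopword list, tiny corpus, and threshold are arbitrary.

```python
"""A stripped-down candidate-extraction sketch: phrase splitting + IDF filter."""
import math
import re

STOPWORDS = {"the", "a", "an", "of", "in", "with", "was", "were", "and",
             "to", "for", "on", "is", "are", "patient", "patients"}

def candidate_phrases(text: str) -> list[str]:
    """Maximal runs of non-stopword tokens (a stand-in for NP chunking)."""
    tokens = re.findall(r"[A-Za-z][A-Za-z\-]*", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if tok in STOPWORDS:
            if current:
                phrases.append(" ".join(current))
                current = []
        else:
            current.append(tok)
    if current:
        phrases.append(" ".join(current))
    return phrases

def idf(phrase: str, documents: list[str]) -> float:
    """Inverse document frequency of a phrase over a reference collection."""
    df = sum(phrase in doc.lower() for doc in documents)
    return math.log(len(documents) / (1 + df))

if __name__ == "__main__":
    note = "The patient was treated with intravenous vancomycin for MRSA."
    corpus = ["the patient was admitted", "treated with aspirin",
              "the patient was discharged", "follow-up visit scheduled"]
    for phrase in candidate_phrases(note):
        if idf(phrase, corpus) > 0.5:           # keep the rarer phrases only
            print(phrase)
```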

  16. FoodMicrobionet: A database for the visualisation and exploration of food bacterial communities based on network analysis.

    PubMed

    Parente, Eugenio; Cocolin, Luca; De Filippis, Francesca; Zotta, Teresa; Ferrocino, Ilario; O'Sullivan, Orla; Neviani, Erasmo; De Angelis, Maria; Cotter, Paul D; Ercolini, Danilo

    2016-02-16

    Amplicon targeted high-throughput sequencing has become a popular tool for the culture-independent analysis of microbial communities. Although the data obtained with this approach are portable and the number of sequences available in public databases is increasing, no tool has been developed yet for the analysis and presentation of data obtained in different studies. This work describes an approach for the development of a database for the rapid exploration and analysis of data on food microbial communities. Data from seventeen studies investigating the structure of bacterial communities in dairy, meat, sourdough and fermented vegetable products, obtained by 16S rRNA gene targeted high-throughput sequencing, were collated and analysed using Gephi, a network analysis software. The resulting database, which we named FoodMicrobionet, was used to analyse nodes and network properties and to build an interactive web-based visualisation. The latter allows the visual exploration of the relationships between Operational Taxonomic Units (OTUs) and samples and the identification of core- and sample-specific bacterial communities. It also provides additional search tools and hyperlinks for the rapid selection of food groups and OTUs and for rapid access to external resources (NCBI taxonomy, digital versions of the original articles). Microbial interaction network analysis was carried out using CoNet on datasets extracted from FoodMicrobionet: the complexity of interaction networks was much lower than that found for other bacterial communities (human microbiome, soil and other environments). This may reflect both a bias in the dataset (which was dominated by fermented foods and starter cultures) and the lower complexity of food bacterial communities. Although some technical challenges exist, and are discussed here, the net result is a valuable tool for the exploration of food bacterial communities by the scientific community and food industry. Copyright © 2015. Published by Elsevier B.V.
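
    The sample-OTU representation can be sketched as a small bipartite graph. The snippet below builds such a network with networkx from an invented relative-abundance table, using a 1% abundance threshold; the table, threshold, and the "core OTU" definition are assumptions for illustration, not FoodMicrobionet's actual construction rules.

```python
"""Build a tiny bipartite sample-OTU network from a toy abundance table."""
import networkx as nx

# Toy relative-abundance table: sample -> {OTU: fraction of reads}.
abundances = {
    "cheese_A":  {"Lactococcus": 0.70, "Leuconostoc": 0.25, "Pseudomonas": 0.05},
    "salami_B":  {"Lactobacillus": 0.60, "Staphylococcus": 0.35},
    "sourdough": {"Lactobacillus": 0.80, "Acetobacter": 0.15},
}

G = nx.Graph()
for sample, otus in abundances.items():
    G.add_node(sample, bipartite="sample")
    for otu, frac in otus.items():
        if frac >= 0.01:                      # drop very rare OTUs
            G.add_node(otu, bipartite="OTU")
            G.add_edge(sample, otu, weight=frac)

# OTUs shared by more than one sample form the "core" of this tiny network.
core = [n for n, d in G.degree() if G.nodes[n].get("bipartite") == "OTU" and d > 1]
print("core OTUs:", core)        # -> ['Lactobacillus']
```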

  17. ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments

    PubMed Central

    Chèneby, Jeanne; Gheorghe, Marius; Artufel, Marie

    2018-01-01

    Abstract With this latest release of ReMap (http://remap.cisreg.eu), we present a unique collection of regulatory regions in human, as a result of a large-scale integrative analysis of ChIP-seq experiments for hundreds of transcriptional regulators (TRs) such as transcription factors, transcriptional co-activators and chromatin regulators. In 2015, we introduced the ReMap database to capture the genome regulatory space by integrating public ChIP-seq datasets, covering 237 TRs across 13 million (M) peaks. In this release, we have extended this catalog to constitute a unique collection of regulatory regions. Specifically, we have collected, analyzed and retained after quality control a total of 2829 ChIP-seq datasets available from public sources, covering a total of 485 TRs with a catalog of 80M peaks. Additionally, the updated database includes new search features for TR names as well as aliases, including cell line names and the ability to navigate the data directly within genome browsers via public track hubs. Finally, full access to this catalog is available online together with a TR binding enrichment analysis tool. ReMap 2018 provides a significant update of the ReMap database, providing an in depth view of the complexity of the regulatory landscape in human. PMID:29126285

  18. Stroop effects in persons with traumatic brain injury: selective attention, speed of processing, or color-naming? A meta-analysis.

    PubMed

    Ben-David, Boaz M; Nguyen, Linh L T; van Lieshout, Pascal H H M

    2011-03-01

    The color word Stroop test is the most common tool used to assess selective attention in persons with traumatic brain injury (TBI). A larger Stroop effect for TBI patients, as compared to controls, is generally interpreted as reflecting a decrease in selective attention. Alternatively, it has been suggested that this increase in Stroop effects is influenced by group differences in generalized speed of processing (SOP). The current study describes an overview and meta-analysis of 10 studies, where persons with TBI (N = 324) were compared to matched controls (N = 501) on the Stroop task. The findings confirmed that Stroop interference was significantly larger for TBI groups (p = .008). However, these differences may be strongly biased by TBI-related slowdown in generalized SOP (r² = .81 in a Brinley analysis). We also found that TBI-related changes in sensory processing may affect group differences. Mainly, a TBI-related increase in the latency difference between reading and naming the font color of a color-neutral word (r² = .96) was linked to Stroop effects. Our results suggest that, in using Stroop, it seems prudent to control for both sensory factors and SOP to differentiate potential changes in selective attention from other changes following TBI.
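
    The Brinley analysis referred to above is essentially a regression of the clinical group's latencies on the controls' latencies across conditions or studies. The sketch below shows the computation on invented reaction times; the slope is read as a generalized slowing factor and r² as the share of the group difference it accounts for.

```python
"""A minimal Brinley-style regression on invented group mean latencies."""
import numpy as np

# Invented mean reaction times (ms) per condition: controls vs patient group.
control_rt = np.array([520.0, 610.0, 700.0, 830.0, 910.0])
patient_rt = np.array([640.0, 760.0, 900.0, 1080.0, 1190.0])

# Least-squares line: patient_rt ~= slope * control_rt + intercept.
slope, intercept = np.polyfit(control_rt, patient_rt, deg=1)
predicted = slope * control_rt + intercept
r_squared = 1.0 - np.sum((patient_rt - predicted) ** 2) / np.sum(
    (patient_rt - patient_rt.mean()) ** 2)

print(f"slope={slope:.2f}, intercept={intercept:.1f}, r^2={r_squared:.3f}")
```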

  19. Nomenclature101.com: A Free, Student-Driven Organic Chemistry Nomenclature Learning Tool

    ERIC Educational Resources Information Center

    Flynn, Alison B.; Caron, Jeanette; Laroche, Jamey; Daviau-Duguay, Melissa; Marcoux, Caroline; Richard, Gise`le

    2014-01-01

    Fundamental to a student's understanding of organic chemistry is the ability to interpret and use its language, including molecules' names and other key terms. A learning gap exists in that students often struggle with organic nomenclature. Although many resources describe the rules for naming molecules, there is a paucity of resources…

  20. An Interactive Attention Board: Improving the Attention of Individuals with Autism and Mental Retardation

    ERIC Educational Resources Information Center

    Sahin, Yasar Guneri; Cimen, Fatih Mehmet

    2011-01-01

    This paper presents a tool named "Interactive Attention Board" (IAB) and an associated software named "Interactive Attention Boards Software" (IABS) for individuals with Mental Retardation and Autism. The proposed system is based on several theories such as perception and learning theories, and it is intended to improve hand-eye coordination and…

  1. Name that Gene: A Meaningful Computer-Based Genetics Classroom Activity that Incorporates Tools Used by Real Research Scientists

    ERIC Educational Resources Information Center

    Wefer, Stephen H.

    2003-01-01

    "Name That Gene" is a simple classroom activity that incorporates bioinformatics (available biological information) into the classroom using "Basic Logical Alignment Search Tool (BLAST)." An excellent classroom activity involving bioinformatics and "BLAST" has been previously explored using sequences from bacteria, but it is tailored for college…

  2. Computer-Mediated Training Tools to Enhance Joint Task Force Cognitive Leadership Skills

    DTIC Science & Technology

    2007-04-01

    [Abstract not available: the indexed excerpt consists of report documentation form fields and table-of-contents fragments (a gaming platform, Decisive Action for Training, performance metrics, and an Automated Performance Measurement System).]

  3. TBI Endpoints Development

    DTIC Science & Technology

    2015-10-01

    [Abstract not available: the indexed excerpt consists mostly of report documentation form fields. A recoverable fragment mentions DDT and Medical Device Development Tool (MDDT) programs with case study presentations and question-and-answer opportunities, and Expert Working Groups.]

  4. Pilot Study for Standardizing Rapid Automatized Naming and Rapid Alternating Stimulus Tests in Arabic

    ERIC Educational Resources Information Center

    Abu-Hamour, Bashir

    2013-01-01

    This study examined the acceptability, reliability, and validity of the Arabic translated version of the Rapid Automatized Naming and Rapid Alternating Stimulus Tests (RAN/RAS; Wolf & Denckla, 2005) for Jordanian students. RAN/RAS tests are a vital assessment tool to distinguish good readers from poor readers. These tests have been…

  5. Development of Software Tools for ADA Compliance Data Collection, Management, and Inquiry

    DOT National Transportation Integrated Search

    2014-07-01

    In this NUTC research project, the UNR research team developed an iOS application (named NDOT ADA Data) to efficiently and intuitively collect ADA inventory data with iPhones or iPads. This tool was developed to facilitate NDOT ADA data collect...

  6. 77 FR 33227 - Assessment Questionnaire-IP Sector Specific Agency Risk Self Assessment Tool (IP-SSARSAT)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-05

    ... DEPARTMENT OF HOMELAND SECURITY [Docket No. DHS-2011-0069] Assessment Questionnaire--IP Sector Specific Agency Risk Self Assessment Tool (IP-SSARSAT) AGENCY: National Protection and Programs Directorate...), Office of Infrastructure Protection (IP), Sector Outreach and Programs Division (SOPD), previously named...

  7. Unique Sensor Plane Maps Invisible Toxins for First Responders

    ScienceCinema

    Kroutil, Robert; Thomas, Mark; Aten, Keith

    2018-05-30

    A unique airborne emergency response tool, ASPECT is a Los Alamos/U.S. Environmental Protection Agency project that can put chemical and radiological mapping tools in the air over an accident scene. The name ASPECT is an acronym for Airborne Spectral Photometric Environmental Collection Technology.

  8. Benchmarking desktop and mobile handwriting across COTS devices: The e-BioSign biometric database

    PubMed Central

    Tolosana, Ruben; Vera-Rodriguez, Ruben; Fierrez, Julian; Morales, Aythami; Ortega-Garcia, Javier

    2017-01-01

    This paper describes the design, acquisition process and baseline evaluation of the new e-BioSign database, which includes dynamic signature and handwriting information. Data is acquired from 5 different COTS devices: three Wacom devices (STU-500, STU-530 and DTU-1031) specifically designed to capture dynamic signatures and handwriting, and two general purpose tablets (Samsung Galaxy Note 10.1 and Samsung ATIV 7). For the two Samsung tablets, data is collected using both a pen stylus and the finger in order to study the performance of signature verification in a mobile scenario. Data was collected in two sessions for 65 subjects, and includes dynamic information of the signature, the full name, and alphanumeric sequences. Skilled forgeries were also performed for signatures and full names. We also report a benchmark evaluation based on e-BioSign for person verification under three different real scenarios: 1) intra-device, 2) inter-device, and 3) mixed writing-tool. We have evaluated the proposed benchmark using the main existing approaches for signature verification: feature- and time functions-based. As a result, new insights into the problem of signature biometrics in sensor-interoperable scenarios have been obtained, namely: the importance of specific methods for dealing with device interoperability, and the necessity of a deeper analysis on signatures acquired using the finger as the writing tool. This e-BioSign public database allows the research community to: 1) further analyse and develop signature verification systems in realistic scenarios, and 2) investigate towards a better understanding of the nature of the human handwriting when captured using electronic COTS devices in realistic conditions. PMID:28475590

  9. Benchmarking desktop and mobile handwriting across COTS devices: The e-BioSign biometric database.

    PubMed

    Tolosana, Ruben; Vera-Rodriguez, Ruben; Fierrez, Julian; Morales, Aythami; Ortega-Garcia, Javier

    2017-01-01

    This paper describes the design, acquisition process and baseline evaluation of the new e-BioSign database, which includes dynamic signature and handwriting information. Data is acquired from 5 different COTS devices: three Wacom devices (STU-500, STU-530 and DTU-1031) specifically designed to capture dynamic signatures and handwriting, and two general purpose tablets (Samsung Galaxy Note 10.1 and Samsung ATIV 7). For the two Samsung tablets, data is collected using both a pen stylus and the finger in order to study the performance of signature verification in a mobile scenario. Data was collected in two sessions for 65 subjects, and includes dynamic information of the signature, the full name, and alphanumeric sequences. Skilled forgeries were also performed for signatures and full names. We also report a benchmark evaluation based on e-BioSign for person verification under three different real scenarios: 1) intra-device, 2) inter-device, and 3) mixed writing-tool. We have evaluated the proposed benchmark using the main existing approaches for signature verification: feature- and time functions-based. As a result, new insights into the problem of signature biometrics in sensor-interoperable scenarios have been obtained, namely: the importance of specific methods for dealing with device interoperability, and the necessity of a deeper analysis on signatures acquired using the finger as the writing tool. This e-BioSign public database allows the research community to: 1) further analyse and develop signature verification systems in realistic scenarios, and 2) investigate towards a better understanding of the nature of the human handwriting when captured using electronic COTS devices in realistic conditions.

  10. Brief communication: Getting Greenland's glaciers right - a new data set of all official Greenlandic glacier names

    NASA Astrophysics Data System (ADS)

    Bjørk, A. A.; Kruse, L. M.; Michaelsen, P. B.

    2015-12-01

    Place names in Greenland can be difficult to get right, as they are a mix of Greenlandic, Danish, and other foreign languages. In addition, orthographies have changed over time. With this new data set, we give the researcher working with Greenlandic glaciers the proper tool to find the correct name for glaciers and ice caps in Greenland and to locate glaciers described in the historic literature with the old Greenlandic orthography. The data set contains information on the names of 733 glaciers, 285 originating from the Greenland Ice Sheet (GrIS) and 448 from local glaciers and ice caps (LGICs).

  11. What population studies can do for business.

    PubMed

    Hugo, G

    1991-05-01

    This paper examines how specific skills essential to demography, the scientific study of human populations, can be useful in private and public sector planning. Over the past 2 decades, Australia's population has undergone profound transformations -- a shift to below replacement level fertility and a change in ethnic composition, to name a few. And these changes have reshaped the markets for goods, services, and labor. Because demography seeks to analyze and explain changes in the size, composition, and spatial distribution of people, this discipline requires certain skills that can be particularly valuable to both private and public sector planning. These skills include: 1) a sound knowledge of why and how populations change over time; 2) a wide range of concepts (the "cohort," for example) which allow demographers to analyze the dynamics of change in a population; 3) statistical techniques; and 4) life tables techniques. Having named the specific skills of demographers, the author identifies the areas of business and public administration where these skills can be most useful, areas that include the following: strategic long-term planning, marketing, market segmentation, small area analysis, household and family level analysis, projections and estimates, human resources analysis, and international population trends. Finally, the author discusses the implications of applied population analysis on the training of demographers in Australia, emphasizing the role of the Australian Population Association in improving the status of demography as an important planning tool.

  12. On a learning curve for shared decision making: Interviews with clinicians using the knee osteoarthritis Option Grid.

    PubMed

    Elwyn, Glyn; Rasmussen, Julie; Kinsey, Katharine; Firth, Jill; Marrin, Katy; Edwards, Adrian; Wood, Fiona

    2018-02-01

    Tools used in clinical encounters to illustrate to patients the risks and benefits of treatment options have been shown to increase shared decision making. However, we do not have good information about how these tools are viewed by clinicians and how clinicians think patients would react to their use. Our aim was to examine clinicians' views about the possible and actual use of tools designed to support patients and clinicians to collaborate and deliberate about treatment options, namely, Option Grid decision aids. We conducted a thematic analysis of qualitative interviews embedded in the intervention phase of a trial of an Option Grid decision aid for osteoarthritis of the knee. Interviews were conducted with 6 participating clinicians before they used the tool and again after clinicians had used the tool with 6 patients. In the first interview, clinicians voiced concerns that the tool would lead to an increase in encounter duration, patient resistance regarding involvement in decision making, and potential information overload. At the second interview, after minimal training, the clinicians reported that the tool had changed their usual way of communicating, and it was generally acceptable and helpful to integrate it into practice. After experiencing the use of Option Grids, clinicians became more willing to use the tools in their clinical encounters with patients. How best to introduce Option Grids to clinicians and adopt their use into practice will need careful consideration of context, workflow, and clinical pathways. © 2016 John Wiley & Sons, Ltd.

  13. Increasing the Operational Value of Event Messages

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Smith, Dan

    2003-01-01

    Assessing the health of a space mission has traditionally been performed using telemetry analysis tools. Parameter values are compared to known operational limits and are plotted over various time periods. This presentation begins with the notion that there is an incredible amount of untapped information contained within the mission's event message logs. Through creative advancements in message handling tools, the event message logs can be used to better assess spacecraft and ground system status and to highlight and report on conditions not readily apparent when messages are evaluated one-at-a-time during a real-time pass. Work in this area is being funded as part of a larger NASA effort at the Goddard Space Flight Center to create a component-based, middleware-based, standards-based, general-purpose ground system architecture referred to as GMSEC - the GSFC Mission Services Evolution Center. The new capabilities and operational concepts for event display, event data analyses and data mining are being developed by Lockheed Martin and the new subsystem has been named GREAT - the GMSEC Reusable Event Analysis Toolkit. Planned for use on existing and future missions, GREAT has the potential to increase operational efficiency in areas of problem detection and analysis, general status reporting, and real-time situational awareness.

  14. On-Line Loss of Control Detection Using Wavelets

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.

    2005-01-01

    Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to more rapidly detect changes in the vehicle dynamics. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to trade off the variance of the estimate against detection time. Wavelets are also used as a front-end to the eigensystem reconstruction algorithm. Stability metrics are estimated from the frequency response and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.
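
    The time-window property described above (shorter analysis windows at higher frequencies) is what makes wavelets attractive for on-line change detection. The sketch below is a generic illustration, not the LOCATS implementation: a Morlet-style wavelet picks up a frequency jump in a synthetic signal, with the test signal, sample rate, and wavelet width chosen arbitrarily.

```python
"""Detect a change in a signal's dynamics with a Morlet-style wavelet."""
import numpy as np

fs = 200.0                                   # sample rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# Signal whose dominant frequency jumps from 2 Hz to 8 Hz at t = 5 s.
signal = np.where(t < 5, np.sin(2 * np.pi * 2 * t), np.sin(2 * np.pi * 8 * t))

def morlet_power(x, freq, fs, n_cycles=6):
    """Wavelet power at one frequency; window length ~ n_cycles / freq."""
    dur = n_cycles / freq
    tw = np.arange(-dur / 2, dur / 2, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-(tw ** 2) / (2 * (dur / 6) ** 2))
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

power_8hz = morlet_power(signal, 8.0, fs)
detection_index = np.argmax(power_8hz > 0.5 * power_8hz.max())
print(f"8 Hz activity first detected near t = {t[detection_index]:.2f} s")
```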

  15. Reusable Social Networking Capabilities for an Earth Science Collaboratory

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Da Silva, D.; Leptoukh, G. G.; Ramachandran, R.

    2011-12-01

    A vast untapped resource of data, tools, information and knowledge lies within the Earth science community. This is due to the fact that it is difficult to share the full spectrum of these entities, particularly their full context. As a result, most knowledge exchange is through person-to-person contact at meetings, email and journal articles, each of which can support only a limited level of detail. We propose the creation of an Earth Science Collaboratory (ESC): a framework that would enable sharing of data, tools, workflows, results and the contextual knowledge about these information entities. The Drupal platform is well positioned to provide the key social networking capabilities to the ESC. As a proof of concept of a rich collaboration mechanism, we have developed a Drupal-based mechanism for graphically annotating and commenting on results images from analysis workflows in the online Giovanni analysis system for remote sensing data. The annotations can be tagged and shared with others in the community. These capabilities are further supplemented by a Research Notebook capability reused from another online analysis system named Talkoot. The goal is a reusable set of modules that can integrate with variety of other applications either within Drupal web frameworks or at a machine level.

  16. Computer-aided modelling and analysis of PV systems: a comparative study.

    PubMed

    Koukouvaos, Charalambos; Kandris, Dionisis; Samarakou, Maria

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into having a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with such problems, appropriate software tools have been developed either as standalone products or parts of general purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful to the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims on a first level at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely, Simulink and LabVIEW, with regard to their application in photovoltaic systems.

  17. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    PubMed Central

    Koukouvaos, Charalambos

    2014-01-01

    Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into having a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with such problems, appropriate software tools have been developed either as standalone products or parts of general purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful to the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims on a first level at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely, Simulink and LabVIEW, with regard to their application in photovoltaic systems. PMID:24772007

  18. Varietal discrimination of hop pellets by near and mid infrared spectroscopy.

    PubMed

    Machado, Julio C; Faria, Miguel A; Ferreira, Isabel M P L V O; Páscoa, Ricardo N M J; Lopes, João A

    2018-04-01

    Hop is one of the most important ingredients of beer production and several varieties are commercialized. Therefore, it is important to find an eco-friendly, real-time, low-cost technique to distinguish and discriminate hop varieties. This paper describes the development of a method based on vibrational spectroscopy techniques, namely near- and mid-infrared spectroscopy, for the discrimination of 33 commercial hop varieties. A total of 165 samples (five for each hop variety) were analysed by both techniques. Principal component analysis, hierarchical cluster analysis and partial least squares discrimination analysis were the chemometric tools used to positively discriminate the hop varieties. After optimizing the spectral regions and pre-processing methods, correct hop variety discrimination rates of 94.2% and 96.6% were obtained for near- and mid-infrared spectroscopy, respectively. The results obtained demonstrate the suitability of these vibrational spectroscopy techniques to discriminate different hop varieties and consequently their potential to be used as an authenticity tool. Compared with the reference procedures normally used for hop variety discrimination, these techniques are quicker, cost-effective, non-destructive and eco-friendly. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Scalable Performance Environments for Parallel Systems

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Olson, Robert D.; Aydt, Ruth A.; Madhyastha, Tara M.; Birkett, Thomas; Jensen, David W.; Nazief, Bobby A. A.; Totty, Brian K.

    1991-01-01

    As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.

  20. DECOMP: a PDB decomposition tool on the web.

    PubMed

    Ordog, Rafael; Szabadka, Zoltán; Grolmusz, Vince

    2009-07-27

    The Protein Data Bank (PDB) contains high-quality structural data for computational structural biology investigations. We have earlier described a fast tool (the decomp_pdb tool) for identifying and marking missing atoms and residues in PDB files. The tool also automatically decomposes PDB entries into separate files describing ligands and polypeptide chains. Here, we describe a web interface named DECOMP for the tool. Our program correctly identifies multi-monomer ligands, and the server also offers the preprocessed ligand-protein decomposition of the complete PDB for downloading (up to 5 GB in size). Availability: http://decomp.pitgroup.org.
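
    The decomposition idea can be sketched with a toy parser. The snippet below splits an embedded mini PDB record into per-chain polypeptide residues and non-water ligand records, and flags gaps in residue numbering as potentially missing residues; it is a simplification of what a tool like decomp_pdb must do (no altloc, insertion-code, or multi-model handling), and the embedded record is invented.

```python
"""Toy PDB decomposition: chains vs. non-water ligands plus residue gaps."""
from collections import defaultdict

MINI_PDB = """\
ATOM      1  CA  ALA A   1      11.104  13.207   2.100  1.00 20.00           C
ATOM      2  CA  GLY A   2      12.560  14.010   3.300  1.00 21.00           C
ATOM      3  CA  SER A   4      15.200  15.500   4.700  1.00 22.00           C
HETATM    4  C1  NAG A 201      20.000  10.000   5.000  1.00 30.00           C
HETATM    5  O   HOH A 301      25.000  12.000   6.000  1.00 40.00           O
"""

chains = defaultdict(list)      # chain id -> list of (residue number, line)
ligands = []                    # non-water HETATM lines

for line in MINI_PDB.splitlines():
    record, chain_id, resname = line[:6].strip(), line[21], line[17:20].strip()
    resnum = int(line[22:26])
    if record == "ATOM":
        chains[chain_id].append((resnum, line))
    elif record == "HETATM" and resname != "HOH":   # skip waters
        ligands.append(line)

for chain_id, residues in chains.items():
    numbers = sorted({n for n, _ in residues})
    gaps = [n for prev, n in zip(numbers, numbers[1:]) if n - prev > 1]
    print(f"chain {chain_id}: {len(numbers)} residues, gaps before {gaps}")

print(f"ligand records (non-water): {len(ligands)}")
```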

  1. CFGP: a web-based, comparative fungal genomics platform

    PubMed Central

    Park, Jongsun; Park, Bongsoo; Jung, Kyongyong; Jang, Suwang; Yu, Kwangyul; Choi, Jaeyoung; Kong, Sunghyung; Park, Jaejin; Kim, Seryun; Kim, Hyojeong; Kim, Soonok; Kim, Jihyun F.; Blair, Jaime E.; Lee, Kwangwon; Kang, Seogchan; Lee, Yong-Hwan

    2008-01-01

    Since the completion of the Saccharomyces cerevisiae genome sequencing project in 1996, the genomes of over 80 fungal species have been sequenced or are currently being sequenced. Resulting data provide opportunities for studying and comparing fungal biology and evolution at the genome level. To support such studies, the Comparative Fungal Genomics Platform (CFGP; http://cfgp.snu.ac.kr), a web-based multifunctional informatics workbench, was developed. The CFGP comprises three layers, including the basal layer, middleware and the user interface. The data warehouse in the basal layer contains standardized genome sequences of 65 fungal species. The middleware processes queries via six analysis tools, including BLAST, ClustalW, InterProScan, SignalP 3.0, PSORT II and a newly developed tool named BLASTMatrix. The BLASTMatrix permits the identification and visualization of genes homologous to a query across multiple species. The Data-driven User Interface (DUI) of the CFGP was built on a new concept of pre-collecting data and post-executing analysis instead of the ‘fill-in-the-form-and-press-SUBMIT’ user interfaces utilized by most bioinformatics sites. A tool termed Favorite, which supports the management of encapsulated sequence data and provides a personalized data repository to users, is another novel feature in the DUI. PMID:17947331

  2. GEOGLE: context mining tool for the correlation between gene expression and the phenotypic distinction.

    PubMed

    Yu, Yao; Tu, Kang; Zheng, Siyuan; Li, Yun; Ding, Guohui; Ping, Jie; Hao, Pei; Li, Yixue

    2009-08-25

    In the post-genomic era, the development of high-throughput gene expression detection technology provides huge amounts of experimental data, which challenges the traditional pipelines for data processing and analysis in scientific research. In our work, we integrated gene expression information from the Gene Expression Omnibus (GEO), biomedical ontology from Medical Subject Headings (MeSH) and signaling pathway knowledge from sigPathway entries to develop a context mining tool for gene expression analysis, GEOGLE. GEOGLE offers a rapid and convenient way of searching relevant experimental datasets, pathways and biological terms according to multiple types of queries, including biomedical vocabularies, GDS IDs, gene IDs, pathway names and signature lists. Moreover, GEOGLE summarizes the signature genes from a subset of GDSes and estimates the correlation between gene expression and the phenotypic distinction with an integrated p value. This approach, which performs global searches of expression data, may expand the traditional way of collecting heterogeneous gene expression experiment data. GEOGLE is a novel tool that provides researchers with a quantitative way to understand the correlation between gene expression and phenotypic distinction through meta-analysis of gene expression datasets from different experiments, as well as the biological meaning behind it. The web site and user guide of GEOGLE are available at: http://omics.biosino.org:14000/kweb/workflow.jsp?id=00020.
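
    The abstract does not state how GEOGLE computes its integrated p value; Fisher's method is one common way to combine per-dataset p values and is shown below purely as an assumed example.

    ```python
    # Sketch of one common way to integrate p values across datasets (Fisher's method).
    # The abstract does not state GEOGLE's exact formula, so this is only an assumption.
    from scipy.stats import combine_pvalues

    # Hypothetical per-dataset p values for one gene's association with the phenotype.
    pvals = [0.04, 0.20, 0.008, 0.11]
    statistic, p_integrated = combine_pvalues(pvals, method="fisher")
    print(f"chi2 = {statistic:.2f}, integrated p = {p_integrated:.4f}")
    ```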

  3. Analyzing the impacts of final demand changes on total output using input-output approach: The case of Japanese ICT sectors

    NASA Astrophysics Data System (ADS)

    Zuhdi, Ubaidillah

    2014-03-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in the future. To achieve this, the study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model. Three final demand changes are used in this study, namely changes in (1) export, (2) import, and (3) outside households consumption. The study focuses on the "pure change" condition, in which final demand changes appear only in the analyzed sectors. The results show that the export and outside households consumption changes give a positive impact, while the opposite impact is seen for the import change.
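
    The demand-pull IO quantity model computes the change in total output from a change in final demand through the Leontief inverse, delta_x = (I - A)^(-1) delta_f. The sketch below uses a toy three-sector coefficient matrix, not the Japanese IO table used in the study.

    ```python
    # Demand-pull input-output quantity model: delta_x = (I - A)^-1 * delta_f,
    # where A is the technical-coefficient matrix. The 3-sector numbers are toy values.
    import numpy as np

    A = np.array([[0.10, 0.05, 0.02],   # intermediate-input coefficients
                  [0.20, 0.15, 0.10],
                  [0.05, 0.08, 0.12]])
    leontief_inverse = np.linalg.inv(np.eye(3) - A)

    delta_f = np.array([10.0, 0.0, 0.0])      # change in final demand (e.g. an export shock)
    delta_x = leontief_inverse @ delta_f      # resulting change in total output
    print(delta_x)
    ```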

  4. Use of dirichlet distributions and orthogonal projection techniques for the fluctuation analysis of steady-state multivariate birth-death systems

    NASA Astrophysics Data System (ADS)

    Palombi, Filippo; Toti, Simona

    2015-05-01

    Approximate weak solutions of the Fokker-Planck equation represent a useful tool for analyzing the equilibrium fluctuations of birth-death systems, as they provide quantitative knowledge lying in between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. Then, we show how the method works in two examples, namely the binary and multi-state voter models with zealots.
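
    For orientation, a generic Ritz-Galerkin projection of a stationary Fokker-Planck equation is sketched below in LaTeX; the paper's specific polynomial drift and diffusion coefficients, simplex domain and basis functions are not reproduced, and vanishing boundary terms are assumed.

    ```latex
    % Generic Ritz-Galerkin sketch for a stationary Fokker-Planck equation
    % 0 = -\partial_i[A_i(x)p(x)] + \tfrac12\,\partial_i\partial_j[B_{ij}(x)p(x)]
    % (summation implied, boundary terms assumed to vanish after integration by parts).
    \begin{align}
      p(x) &\approx \sum_{k=1}^{N} c_k\,\psi_k(x), \\
      0 &= \int_{\Omega}\Big[\,A_i(x)\,p(x)\,\partial_i\psi_l(x)
            + \tfrac12\,B_{ij}(x)\,p(x)\,\partial_i\partial_j\psi_l(x)\Big]\,dx,
            \qquad l = 1,\dots,N, \\
      0 &= \sum_{k=1}^{N} K_{lk}\,c_k,
      \qquad
      K_{lk} = \int_{\Omega}\Big[\,A_i\,\psi_k\,\partial_i\psi_l
            + \tfrac12\,B_{ij}\,\psi_k\,\partial_i\partial_j\psi_l\Big]\,dx,
    \end{align}
    % solved together with the normalization \int_{\Omega} p(x)\,dx = 1.
    ```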

  5. Strange non-chaotic attractors in a state controlled-cellular neural network-based quasiperiodically forced MLC circuit

    NASA Astrophysics Data System (ADS)

    Ezhilarasu, P. Megavarna; Inbavalli, M.; Murali, K.; Thamilmaran, K.

    2018-07-01

    In this paper, we report the dynamical transitions to strange non-chaotic attractors in a quasiperiodically forced state controlled-cellular neural network (SC-CNN)-based MLC circuit via two different mechanisms, namely the Heagy-Hammel route and the gradual fractalisation route. These transitions were observed through numerical simulations and hardware experiments and confirmed using statistical tools, such as maximal Lyapunov exponent spectrum and its variance and singular continuous spectral analysis. We find that there is a remarkable agreement of the results from both numerical simulations as well as from hardware experiments.

  6. The gravity apple tree

    NASA Astrophysics Data System (ADS)

    Espinosa Aldama, Mariana

    2015-04-01

    The gravity apple tree is a genealogical tree of the gravitation theories developed during the past century. The graphic representation is full of information such as guides to heuristic principles, names of main proponents, dates and references for original articles (see the Supplementary Data for the graphic representation). This visual presentation and its particular classification allow a quick synthetic view of a plurality of theories, many of them well validated in the Solar System domain. Its diachronic structure organizes information in the shape of a tree, following similarities through a formal concept analysis. It can be used for educational purposes or as a tool for philosophical discussion.

  7. Exploring Digisonde Ionogram Data with SAO-X and DIDBase

    NASA Astrophysics Data System (ADS)

    Khmyrov, Grigori M.; Galkin, Ivan A.; Kozlov, Alexander V.; Reinisch, Bodo W.; McElroy, Jonathan; Dozois, Claude

    2008-02-01

    A comprehensive suite of software tools for ionogram data analysis and archiving has been developed at UMLCAR to support the exploration of raw and processed data from the worldwide network of digisondes in a low-latency, user-friendly environment. Paired with the remotely accessible Digital Ionogram Data Base (DIDBase), the SAO Explorer software serves as an example of how an academic institution conscientiously manages its resident data archive while local experts continue to work on design of new and improved data products, all in the name of free public access to the full roster of acquired ionospheric sounding data.

  8. OSCAR4: a flexible architecture for chemical text-mining

    PubMed Central

    2011-01-01

    The Open-Source Chemistry Analysis Routines (OSCAR) software, a toolkit for the recognition of named entities and data in chemistry publications, has been developed since 2002. Recent work has resulted in the separation of the core OSCAR functionality and its release as the OSCAR4 library. This library features a modular API (based on reduction of surface coupling) that permits client programmers to easily incorporate it into external applications. OSCAR4 offers a domain-independent architecture upon which chemistry specific text-mining tools can be built, and its development and usage are discussed. PMID:21999457

  9. A new tool to evaluate postgraduate training posts: the Job Evaluation Survey Tool (JEST).

    PubMed

    Wall, David; Goodyear, Helen; Singh, Baldev; Whitehouse, Andrew; Hughes, Elizabeth; Howes, Jonathan

    2014-10-02

    Three reports in 2013 about healthcare and patient safety in the UK, namely Berwick, Francis and Keogh, have highlighted the need for junior doctors' views about their training experience to be heard. In the UK, the General Medical Council (GMC) quality-assures medical training programmes and requires postgraduate deaneries to undertake quality management and monitoring of all training posts in their area. The aim of this study was to develop a simple trainee questionnaire for the evaluation of postgraduate training posts based on the GMC UK standards, and to examine its reliability and validity, including comparison with a well-established and internationally validated tool, the Postgraduate Hospital Educational Environment Measure (PHEEM). The Job Evaluation Survey Tool (JEST), a fifteen-item job evaluation questionnaire, was drawn up in 2006, piloted with Foundation doctors (2007), field-tested with specialist paediatric registrars (2008) and used over a three-year period (2008-11) by Foundation doctors. Statistical analyses including descriptives, reliability, correlation and factor analysis were undertaken, and the JEST was compared with the PHEEM. The JEST had a reliability of 0.91 in the pilot study of 76 Foundation doctors, 0.88 in field testing with 173 paediatric specialist registrars and 0.91 over three years of general use in foundation training, with 3367 doctors completing the JEST. The correlation of the JEST with the PHEEM was 0.80 (p < 0.001). Factor analysis showed two factors, a teaching factor and a social and lifestyle one. The JEST has proved to be a simple, valid and reliable tool for the monitoring and evaluation of postgraduate hospital training posts.
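
    The reliability figures quoted are Cronbach's alpha values; a small sketch of that computation on a synthetic response matrix follows (the data are not the JEST responses).

    ```python
    # Cronbach's alpha for a k-item questionnaire:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    # The response matrix below is synthetic, not real JEST data.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    scores = rng.integers(1, 6, size=(76, 15))   # 76 respondents x 15 items, Likert 1-5
    print(f"alpha = {cronbach_alpha(scores.astype(float)):.2f}")
    ```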

  10. Planting, Growing, Caring.

    ERIC Educational Resources Information Center

    Carrick, James

    Six units of instruction are provided in this manual designed for deaf students enrolled in an ornamental horticulture program. Unit 1 contains eight lessons (pictures and names) on tool and equipment identification (e.g., cutting and pruning tools, lawn and garden equipment, and power equipment). Unit 2 provides ten lessons on the care of tools…

  11. Steady-State Multiplicity Features of Chemically Reacting Systems.

    ERIC Educational Resources Information Center

    Luss, Dan

    1986-01-01

    Analyzes steady-state multiplicity in chemical reactors, focusing on the use of two mathematical tools, namely, the catastrophe theory and the singularity theory with a distinguished parameter. These tools can be used to determine the maximum number of possible solutions and the different types of bifurcation diagrams. (JN)

  12. The Crossword Puzzle as a Teaching Tool.

    ERIC Educational Resources Information Center

    Crossman, Edward K.

    1983-01-01

    In courses such as the history of psychology, it is necessary to learn a variety of relationships, events, and sequences, in addition to the task of having to pair certain key concepts with related names, e.g., phrenology--Hall. One tool useful in this type of learning is the crossword puzzle. (RM)

  13. XS: a FASTQ read simulator.

    PubMed

    Pratas, Diogo; Pinho, Armando J; Rodrigues, João M O S

    2014-01-16

    Emerging next-generation sequencing (NGS) is bringing, besides huge amounts of data, an avalanche of new specialized tools (for analysis, compression and alignment, among others) and large public and private network infrastructures. Therefore, a direct need is arising for specific simulation tools for testing and benchmarking, such as a flexible and portable FASTQ read simulator that does not need a reference sequence yet is correctly prepared to produce approximately the same characteristics as real data. We present XS, a FASTQ read simulation tool that is flexible, portable (does not need a reference sequence) and tunable in terms of sequence complexity. It has several running modes, depending on the time and memory available, and is aimed at testing computing infrastructures, namely cloud computing for large-scale projects, and at testing FASTQ compression algorithms. Moreover, XS offers the possibility of simulating the three main FASTQ components individually (headers, DNA sequences and quality scores). XS provides an efficient and convenient method for fast simulation of FASTQ files, such as those from Ion Torrent (currently not covered by other simulators), Roche-454, Illumina and ABI-SOLiD sequencing machines. This tool is publicly available at http://bioinformatics.ua.pt/software/xs/.
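
    To make the three FASTQ components concrete, the snippet below writes minimal synthetic records; it is not XS itself, and the read length, quality range and naming are arbitrary choices.

    ```python
    # Minimal illustration of the three FASTQ components (header, sequence, quality string).
    # This is not XS itself; read length, quality range and naming are arbitrary choices.
    import random

    def random_fastq_record(read_id: int, length: int = 100) -> str:
        seq = "".join(random.choice("ACGT") for _ in range(length))
        # Phred+33 encoding: quality score q maps to ASCII character chr(33 + q).
        qual = "".join(chr(33 + random.randint(20, 40)) for _ in range(length))
        return f"@sim_read_{read_id}\n{seq}\n+\n{qual}\n"

    random.seed(0)
    with open("simulated.fastq", "w") as fh:
        for i in range(1000):
            fh.write(random_fastq_record(i))
    ```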

  14. aLicante sUrgical Community Emergencies New Tool for the enUmeration of Morbidities: a simplified auditing tool for community-acquired gastrointestinal surgical emergencies.

    PubMed

    Villodre, Celia; Rebasa, Pere; Estrada, José Luís; Zaragoza, Carmen; Zapater, Pedro; Mena, Luís; Lluís, Félix

    2016-11-01

    In a previous study, we found that Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (POSSUM) overpredicts morbidity risk in emergency gastrointestinal surgery. Our aim was to find a POSSUM equation adjustment. A prospective observational study was performed on 2,361 patients presenting with a community-acquired gastrointestinal surgical emergency. The first 1,000 surgeries constituted the development cohort, the second 1,000 events were the first validation intramural cohort, and the remaining 361 cases belonged to a second validation extramural cohort. (1) A modified POSSUM equation was obtained. (2) Logistic regression was used to yield a statistically significant equation that included age, hemoglobin, white cell count, sodium and operative severity. (3) A chi-square automatic interaction detector decision tree analysis yielded a statistically significant equation with 4 variables, namely cardiac failure, sodium, operative severity, and peritoneal soiling. A modified POSSUM equation and a simplified scoring system (aLicante sUrgical Community Emergencies New Tool for the enUmeration of Morbidities [LUCENTUM]) are described. Both tools significantly improve prediction of surgical morbidity in community-acquired gastrointestinal surgical emergencies. Copyright © 2016 Elsevier Inc. All rights reserved.
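
    A generic sketch of fitting a logistic-regression morbidity model on the predictors named above follows; the data are synthetic and the fitted coefficients are not those of the POSSUM adjustment or the LUCENTUM score.

    ```python
    # Generic sketch of a logistic-regression morbidity model on the predictors named in the
    # abstract (age, hemoglobin, white cell count, sodium, operative severity).
    # The data are synthetic and the coefficients are NOT those of the LUCENTUM score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 1000
    X = np.column_stack([
        rng.normal(65, 15, n),     # age (years)
        rng.normal(13, 2, n),      # hemoglobin (g/dL)
        rng.normal(11, 4, n),      # white cell count (10^9/L)
        rng.normal(138, 4, n),     # sodium (mmol/L)
        rng.integers(1, 5, n),     # operative severity (ordinal 1-4)
    ])
    y = rng.integers(0, 2, n)      # observed morbidity (0/1), synthetic

    model = LogisticRegression(max_iter=1000).fit(X, y)
    risk = model.predict_proba(X[:5])[:, 1]   # predicted morbidity probabilities
    print(np.round(risk, 3))
    ```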

  15. Prostate Upgrading Team Project — EDRN Public Portal

    Cancer.gov

    Aim 1: We will develop a risk assessment tool using commonly collected clinical information from a series of contemporary radical prostatectomies to predict the risk of prostate cancer upgrading to high-grade cancer at radical prostatectomy. These data will be combined, as part of our Early Detection Research Network (EDRN) GU Working Group, into a risk assessment tool; this tool will be named the EDRN Prostatectomy Upgrading Calculator (EPUC).

  16. Categorization for Faces and Tools-Two Classes of Objects Shaped by Different Experience-Differs in Processing Timing, Brain Areas Involved, and Repetition Effects.

    PubMed

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A

    2017-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never both simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they saw whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se or later-stage mentalizing-related processes. We extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140-170 ms. Tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210-220 ms window and was localized to the intraparietal sulcus of the left hemisphere. Brain activity common to both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face- and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following stimulus repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with the visual processing of faces and tools during the categorization stage differs in processing timing, brain areas involved, and in its dynamics underlying stimulus learning.

  17. The National Shipbuilding Research Program. Application of Industrial Engineering Techniques to Reduce Workers’ Compensation and Environmental Costs

    DTIC Science & Technology

    1999-10-01

    [Report documentation page fragment; no abstract available. Recoverable details: performing organization Naval Surface Warfare Center CD, Code 2230-Design Integration Tools, Bldg 192, Room 128, 9500 MacArthur Blvd, Bethesda, MD 20817-5700. Listed project tasks: implementation of Task 2.4; Task 7.0, Conduct Workshops; Task 8.0, Final Report.]

  18. Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool

    DTIC Science & Technology

    2016-02-26

  19. jFuzz: A Concolic Whitebox Fuzzer for Java

    NASA Technical Reports Server (NTRS)

    Jayaraman, Karthick; Harvison, David; Ganesh, Vijay; Kiezun, Adam

    2009-01-01

    We present jFuzz, an automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer built on the NASA Java PathFinder, an explicit-state Java model checker and framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented several well-known optimizations and name-independent caching, which aggressively normalizes the constraints to reduce the number of calls to the constraint solver. We present preliminary results obtained with these optimizations, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder.
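
    The idea behind name-independent caching can be illustrated with a toy canonicalization step: constraints that differ only in variable names map to the same cache key. This Python sketch is an illustration of the concept, not jFuzz's Java implementation.

    ```python
    # Toy sketch of name-independent constraint caching: constraints that differ only in
    # variable names are canonicalized to the same key, so the solver is called once.
    import re

    _cache: dict[str, bool] = {}

    def canonicalize(constraint: str) -> str:
        """Rename variables to v0, v1, ... in order of first appearance."""
        mapping: dict[str, str] = {}
        def rename(match: re.Match) -> str:
            name = match.group(0)
            if name not in mapping:
                mapping[name] = f"v{len(mapping)}"
            return mapping[name]
        return re.sub(r"[a-zA-Z_]\w*", rename, constraint)

    def call_solver(constraint: str) -> bool:
        return True                              # stand-in for a real constraint solver

    def is_satisfiable(constraint: str) -> bool:
        key = canonicalize(constraint)
        if key not in _cache:
            _cache[key] = call_solver(constraint)  # expensive solver call, done once per key
        return _cache[key]

    print(canonicalize("x > 5 && y < x"))        # "v0 > 5 && v1 < v0"
    print(canonicalize("a > 5 && b < a"))        # same canonical form -> cache hit
    ```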

  20. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
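
    For context, a plain SOM trained on document vectors and projected to a 2-D grid can be sketched with the third-party MiniSom package; this is not the authors' GHSOM/RCP code, and the document vectors are random stand-ins.

    ```python
    # Sketch of training a plain SOM on document vectors and projecting them to the 2-D grid.
    # Uses the third-party MiniSom package, not the authors' GHSOM/RCP implementation;
    # the document-vector matrix is a random stand-in.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(3)
    doc_vectors = rng.random((500, 50))          # 500 documents encoded as 50-D vectors

    som = MiniSom(10, 10, input_len=50, sigma=1.5, learning_rate=0.5, random_seed=3)
    som.random_weights_init(doc_vectors)
    som.train_random(doc_vectors, num_iteration=5000)

    # Project each document to its best-matching unit on the 10 x 10 output map.
    positions = np.array([som.winner(v) for v in doc_vectors])
    print(positions[:5])
    ```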

  1. Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced by their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make the design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools to tackle the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.

  2. WOrk-Related Questionnaire for UPper extremity disorders (WORQ-UP): Factor Analysis and Internal Consistency.

    PubMed

    Aerts, Bas R; Kuijer, P Paul; Beumer, Annechien; Eygendaal, Denise; Frings-Dresen, Monique H

    2018-04-17

    To test a 17-item questionnaire, the WOrk-Related Questionnaire for UPper extremity disorders (WORQ-UP), for dimensionality of the items (factor analysis) and internal consistency. Cross-sectional study. Outpatient clinic. A consecutive sample of patients (N=150) consisting of all new referral patients (either from a general physician or another hospital) who visited the orthopedic outpatient clinic because of an upper extremity musculoskeletal disorder. Not applicable. Number and dimensionality of the factors in the WORQ-UP. Four factors with eigenvalues (EVs) >1.0 were found. The factors were named exertion, dexterity, tools & equipment, and mobility. The EVs of the factors were, respectively, 5.78, 2.38, 1.81, and 1.24. The factors together explained 65.9% of the variance. The Cronbach alpha values for these factors were, respectively, .88, .74, .87, and .66. The 17 items of the WORQ-UP load onto 4 factors (exertion, dexterity, tools & equipment, and mobility) with good internal consistency. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  3. Service-based analysis of biological pathways

    PubMed Central

    Zheng, George; Bouguettaya, Athman

    2009-01-01

    Background Computer-based pathway discovery is concerned with two important objectives: pathway identification and analysis. Conventional mining and modeling approaches aimed at pathway discovery are often effective at achieving either objective, but not both. Such limitations can be effectively tackled by leveraging a Web service-based modeling and mining approach. Results Inspired by molecular recognition and drug discovery processes, we developed a Web service mining tool, named PathExplorer, to discover potentially interesting biological pathways linking service models of biological processes. The tool uses an innovative approach to identify useful pathways based on graph-based hints and service-based simulation verifying users' hypotheses. Conclusion Web service modeling of biological processes allows the easy access and invocation of these processes on the Web. Web service mining techniques described in this paper enable the discovery of biological pathways linking these process service models. Algorithms presented in this paper for automatically highlighting interesting subgraphs within an identified pathway network enable the user to formulate hypotheses, which can be tested using our simulation algorithm, also described in this paper. PMID:19796403

  4. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    PubMed

    Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2013-01-01

    While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is a considerable interest and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001) and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001). The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming) or memory tasks (Logical Memory Test). Test re-test reliability was observed to be significant (r = 0.726; p = 0.02). Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries of cognitive and motor function in the elderly is discussed.

  5. Teaching receptive naming of Chinese characters to children with autism by incorporating echolalia.

    PubMed

    Leung, J P; Wu, K I

    1997-01-01

    The facilitative effect of incorporating echolalia on teaching receptive naming of Chinese characters to children with autism was assessed. In Experiment 1, echoing the requested character name prior to the receptive naming task facilitated matching a character to its name. In addition, task performance was consistently maintained only when echolalia preceded the receptive manual response. Positive results from generalization tests suggested that learned responses occurred across various novel conditions. In Experiment 2, we examined the relation between task difficulty and speed of acquisition. All 3 participants achieved 100% correct responding in training, but learning less discriminable characters took more trials than learning more discriminable characters. These results provide support for incorporating echolalia as an educational tool within language instruction for some children with autism.

  6. Teaching receptive naming of Chinese characters to children with autism by incorporating echolalia.

    PubMed Central

    Leung, J P; Wu, K I

    1997-01-01

    The facilitative effect of incorporating echolalia on teaching receptive naming of Chinese characters to children with autism was assessed. In Experiment 1, echoing the requested character name prior to the receptive naming task facilitated matching a character to its name. In addition, task performance was consistently maintained only when echolalia preceded the receptive manual response. Positive results from generalization tests suggested that learned responses occurred across various novel conditions. In Experiment 2, we examined the relation between task difficulty and speed of acquisition. All 3 participants achieved 100% correct responding in training, but learning less discriminable characters took more trials than learning more discriminable characters. These results provide support for incorporating echolalia as an educational tool within language instruction for some children with autism. PMID:9157099

  7. Evolvix BEST Names for semantic reproducibility across code2brain interfaces

    PubMed Central

    Scheuer, Katherine S.; Keel, Seth A.; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C.; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G.; Moog, Cecilia L.; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist‐Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda‐Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L.; Freiberg, Erika; Waters, Noah P.; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M.; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2016-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general‐purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long‐term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder‐brains to reader‐brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. PMID:27918836

  8. Tools to improve planning, implementation, monitoring, and evaluation of complementary feeding programmes.

    PubMed

    Untoro, Juliawati; Childs, Rachel; Bose, Indira; Winichagoon, Pattanee; Rudert, Christiane; Hall, Andrew; de Pee, Saskia

    2017-10-01

    Adequate nutrient intake is a prerequisite for achieving good nutrition status. Suboptimal complementary feeding practices are a main risk factor for stunting. The need for systematic and user-friendly tools to guide the planning, implementation, monitoring, and evaluation of dietary interventions for children aged 6-23 months has been recognized. This paper describes five tools, namely, ProPAN, Optifood, Cost of the Diet, Fill the Nutrient Gap, and Monitoring Results for Equity System that can be used in different combinations to improve situation analysis, planning, implementation, monitoring, or evaluation approaches for complementary feeding in a particular context. ProPAN helps with development of strategies and activities designed to change the behaviours of the target population. Optifood provides guidance for developing food-based recommendations. The Cost of the Diet can provide insight on economic barriers to accessing a nutritious and balanced diet. The Fill the Nutrient Gap facilitates formulation of context-specific policies and programmatic approaches to improve nutrient intake, through a multistakeholder process that uses insights from linear programming and secondary data. The Monitoring Results for Equity System helps with analysis of gaps, constraints, and determinants of complementary feeding interventions and adoption of recommended practices especially in the most vulnerable and deprived populations. These tools, and support for their use, are readily available and can be used either alone and/or complementarily throughout the programme cycle to improve infant and young child-feeding programmes at subnational and national levels. © 2017 John Wiley & Sons Ltd.

  9. StructRNAfinder: an automated pipeline and web server for RNA families prediction.

    PubMed

    Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius

    2018-02-17

    The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must utilize multiple tools, which require the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family, according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder to be used in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me. The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data obtained by each tool and database employed along the workflow; the several files generated are displayed in user-friendly reports, useful for downstream analyses and data exploration.

  10. Managing EEE part standardisation and procurement

    NASA Astrophysics Data System (ADS)

    Serieys, C.; Bensoussan, A.; Petitmangin, A.; Rigaud, M.; Barbaresco, P.; Lyan, C.

    2002-12-01

    This paper presents the development activities in space component selection and procurement dealing with a new database tool implemented at Alcatel Space using the TransForm software configurator developed by Techform S.A. Based on TransForm, Access Ingenierie has developed a software product named OLG@DOS which facilitates part nomenclature analyses for new equipment design and manufacturing in terms of an ACCESS database implementation. Hi-Rel EEE part-type technical, production and quality information is collected and compiled using a production database issued from the production tools implemented for equipment definition, description and production, based on Manufacturing Resource Planning (MRP II Control Open) and Parametric Design Manager (PDM Work Manager). The analysis of any new equipment nomenclature may be conducted through this means for standardisation purposes, cost-containment programmes and procurement management activities, as well as for the preparation of component reviews such as Part Approval Document and Declared Part List validation.

  11. Evaluating the compatibility of multi-functional and intensive urban land uses

    NASA Astrophysics Data System (ADS)

    Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.

    2007-12-01

    This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. In this process, a new model was developed through the combination of a suite of existing methods and tools: a geographical information system, Delphi methods and spatial decision support tools, namely multi-criteria evaluation analysis, the analytical hierarchy process and the ordered weighted average method. The developed model has the potential to calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).
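
    One of the named tools, the analytical hierarchy process, derives criterion weights from a pairwise-comparison matrix via its principal eigenvector; a small sketch with illustrative judgments follows.

    ```python
    # Analytical hierarchy process (AHP) weight derivation: the criterion weights are the
    # normalized principal eigenvector of a pairwise-comparison matrix.
    # The 3 x 3 judgments below are illustrative, not the study's actual values.
    import numpy as np

    pairwise = np.array([[1.0, 3.0, 5.0],
                         [1/3, 1.0, 2.0],
                         [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, principal].real)
    weights /= weights.sum()

    # Consistency index; CR = CI / RI, with RI = 0.58 for n = 3 (CR < 0.1 is acceptable).
    n = pairwise.shape[0]
    ci = (eigvals.real[principal] - n) / (n - 1)
    print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
    ```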

  12. Coated carbide drill performance under soluble coconut oil lubricant and nanoparticle enhanced MQL in drilling AISI P20

    NASA Astrophysics Data System (ADS)

    Jamil, N. A. M.; Azmi, A. I.; Fairuz, M. A.

    2016-02-01

    This research experimentally investigates the performance of a TiAlN-coated carbide drill bit in drilling AISI P20 with two different kinds of lubricants, namely soluble coconut oil (SCO) and nanoparticle-enhanced coconut oil (NECO), under a minimum quantity lubrication system. The tool life and tool wear mechanism were studied at cutting speeds of 50, 100 and 150 m/min with a constant feed of 0.01 mm/rev. Since the flank wear land was not regular along the cutting edge, the average flank wear (VB) was measured at several points using image analysis software. The drills were inspected using a scanning electron microscope to further elucidate the wear mechanism. The results indicate that drilling with the nanoparticle-enhanced lubricant was better at resisting wear and improved the drill life to some extent.

  13. Analyzing Saturn's Magnetospheric Data After Cassini - Improving and Future-Proofing Cassini / MAPS Tools and Data

    NASA Astrophysics Data System (ADS)

    Brown, L. E.; Faden, J.; Vandegriff, J. D.; Kurth, W. S.; Mitchell, D. G.

    2017-12-01

    We present a plan to provide enhanced longevity to analysis software and science data used throughout the Cassini mission for viewing Magnetosphere and Plasma Science (MAPS) data. While a final archive is being prepared for Cassini, the tools that read from this archive will eventually become moribund as real world hardware and software systems evolve. We will add an access layer over existing and planned Cassini data products that will allow multiple tools to access many public MAPS datasets. The access layer is called the Heliophysics Application Programmer's Interface (HAPI), and this is a mechanism being adopted at many data centers across Heliophysics and planetary science for the serving of time series data. Two existing tools are also being enhanced to read from HAPI servers, namely Autoplot from the University of Iowa and MIDL (Mission Independent Data Layer) from The Johns Hopkins Applied Physics Lab. Thus both tools will be able to access data from RPWS, MAG, CAPS, and MIMI. In addition to being able to access data from each other's institutions, these tools will be able to read from all the new datasets expected to come online using the HAPI standard in the near future. The PDS also plans to use HAPI for all the holdings at the Planetary and Plasma Interactions (PPI) node. A basic presentation of the new HAPI data server mechanism is presented, as is an early demonstration of the modified tools.
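
    A sketch of reading time-series data through a HAPI server with the Python hapiclient package is shown below; the server URL, dataset id and parameter name are placeholders rather than actual Cassini MAPS identifiers, and the call signature is assumed from the package's documented usage.

    ```python
    # Sketch of reading time-series data through a HAPI server with the Python hapiclient
    # package. Server URL, dataset id and parameter name are placeholders, not actual
    # Cassini MAPS identifiers.
    from hapiclient import hapi

    server     = "https://example.org/hapi"        # placeholder HAPI endpoint
    dataset    = "CASSINI_MAG_EXAMPLE"             # placeholder dataset id
    parameters = "B_total"                         # placeholder parameter name
    start      = "2017-09-01T00:00:00Z"
    stop       = "2017-09-02T00:00:00Z"

    data, meta = hapi(server, dataset, parameters, start, stop)
    print(meta["parameters"])                      # parameter metadata from the /info response
    print(data[:5])                                # structured array of time-tagged values
    ```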

  14. Analysis of Palm Oil Production, Export, and Government Consumption to Gross Domestic Product of Five Districts in West Kalimantan by Panel Regression

    NASA Astrophysics Data System (ADS)

    Sulistianingsih, E.; Kiftiah, M.; Rosadi, D.; Wahyuni, H.

    2017-04-01

    Gross Domestic Product (GDP) is an indicator of economic growth in a region. GDP is panel data, which consist of cross-section and time series data. Meanwhile, panel regression is a tool which can be utilised to analyse panel data. There are three models in panel regression, namely the Common Effect Model (CEM), the Fixed Effect Model (FEM) and the Random Effect Model (REM). The model is chosen based on the results of the Chow test, the Hausman test and the Lagrange multiplier test. This research uses panel regression to analyse the effects of palm oil production, export and government consumption on the GDP of five districts in West Kalimantan, namely Sanggau, Sintang, Sambas, Ketapang and Bengkayang. Based on the results of the analyses, it is concluded that the REM, whose adjusted coefficient of determination is 0.823, is the best model in this case. Also, according to the results, only export and government consumption influence the GDP of the districts.
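
    The three panel-regression models can be fitted, for example, with the third-party linearmodels package; the sketch below uses a synthetic data frame, not the study's district-level series.

    ```python
    # Sketch of the three panel-regression models mentioned (common, fixed and random effects)
    # using the third-party linearmodels package. The data frame is synthetic; the study's
    # actual district-level series are not reproduced here.
    import numpy as np
    import pandas as pd
    from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

    rng = np.random.default_rng(4)
    districts = ["Sanggau", "Sintang", "Sambas", "Ketapang", "Bengkayang"]
    years = range(2006, 2016)
    index = pd.MultiIndex.from_product([districts, years], names=["district", "year"])
    df = pd.DataFrame({
        "gdp":         rng.normal(100, 20, len(index)),
        "production":  rng.normal(50, 10, len(index)),
        "export":      rng.normal(30, 8, len(index)),
        "consumption": rng.normal(20, 5, len(index)),
    }, index=index)

    exog = pd.concat([pd.Series(1.0, index=df.index, name="const"),
                      df[["production", "export", "consumption"]]], axis=1)

    cem = PooledOLS(df["gdp"], exog).fit()                          # common effect model
    fem = PanelOLS(df["gdp"], exog, entity_effects=True).fit()      # fixed effect model
    rem = RandomEffects(df["gdp"], exog).fit()                      # random effect model
    print(rem.rsquared)
    ```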

  15. [Usefulness and limitations of rapid automatized naming to predict reading difficulties after school entry in preschool children].

    PubMed

    Kaneko, Masato; Uno, Akira; Haruhara, Noriko; Awaya, Noriko

    2012-01-01

    We investigated the usefulness and limitations of Rapid Automatized Naming (RAN) results in 6-year-old Japanese preschool children for estimating whether reading difficulties will be encountered after school entry. We administered a RAN task to 1,001 preschool children. After they had entered school, we performed yearly follow-up surveys to assess their reading performance when these children were in the first, second, third and fourth grades. We also examined Hiragana non-words and Kanji words at each time point to detect the children who were having difficulty reading Hiragana and Kanji. Receiver operating characteristic analysis showed that the RAN result in 6-year-old preschool children was predictive of Kanji reading difficulty in the lower grades of elementary school, with an area under the curve of 0.86 in the second grade and 0.84 in the third grade. These results suggest that the RAN task is useful as a screening tool.
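
    A sketch of the kind of ROC analysis used to evaluate a screening score such as RAN time against later reading difficulty follows; the scores and outcomes are synthetic.

    ```python
    # Sketch of a receiver operating characteristic (ROC) analysis for a screening score
    # such as RAN time against later reading difficulty. The data are synthetic.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(5)
    n = 1001
    reading_difficulty = rng.integers(0, 2, n)            # 1 = later reading difficulty
    # Slower naming (higher RAN time) loosely associated with difficulty in this toy data.
    ran_time = rng.normal(40, 8, n) + 6 * reading_difficulty

    auc = roc_auc_score(reading_difficulty, ran_time)
    fpr, tpr, thresholds = roc_curve(reading_difficulty, ran_time)
    # Choose the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    best = np.argmax(tpr - fpr)
    print(f"AUC = {auc:.2f}, suggested cut-off = {thresholds[best]:.1f} s")
    ```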

  16. ALSSAT Development Status

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y. Jannivine; Brown, Cheryl B.; Jeng, Frank F.; Anderson, Molly; Ewert, Michael K.

    2009-01-01

    The development of the Advanced Life Support (ALS) Sizing Analysis Tool (ALSSAT) using Microsoft® Excel was initiated by the Crew and Thermal Systems Division (CTSD) of Johnson Space Center (JSC) in 1997 to support the ALS and Exploration Offices in Environmental Control and Life Support System (ECLSS) design and studies. It aids the user in performing detailed sizing of the ECLSS for different combinations of the Exploration Life Support (ELS) regenerative system technologies. This analysis tool assists the user in performing ECLSS preliminary design and trade studies as well as system optimization efficiently and economically. The latest ALSSAT-related publication, in ICES 2004, detailed ALSSAT's development status, including the completion of all six ELS Subsystems (ELSS), namely the Air Management Subsystem, the Biomass Subsystem, the Food Management Subsystem, the Solid Waste Management Subsystem, the Water Management Subsystem, and the Thermal Control Subsystem, and of two external interfaces, the Extravehicular Activity and Human Accommodations interfaces. Since 2004, many more regenerative technologies in the ELSS have been implemented in ALSSAT. ALSSAT has also been used for the ELS Research and Technology Development Metric calculation for FY02 through FY06. It was also used to conduct the Lunar Outpost Metric calculation for FY08 and was integrated as part of a Habitat Model developed at Langley Research Center to support the Constellation program. This paper will give an update on the analysis tool's current development status as well as present the analytical results of one of the trade studies that was performed.

  17. A New Analysis Tool Assessment for Rotordynamic Modeling of Gas Foil Bearings

    NASA Technical Reports Server (NTRS)

    Howard, Samuel A.; SanAndres, Luis

    2010-01-01

    Gas foil bearings offer several advantages over traditional bearing types that make them attractive for use in high-speed turbomachinery. They can operate at very high temperatures, require no lubrication supply (oil pumps, seals, etc.), exhibit very long life with no maintenance, and once operating airborne, have very low power loss. The use of gas foil bearings in high-speed turbomachinery has been accelerating in recent years, although the pace has been slow. One of the contributing factors to the slow growth has been a lack of analysis tools, benchmarked to measurements, to predict gas foil bearing behavior in rotating machinery. To address this shortcoming, NASA Glenn Research Center (GRC) has supported the development of analytical tools to predict gas foil bearing performance. One of the codes has the capability to predict rotordynamic coefficients, power loss, film thickness, structural deformation, and more. The current paper presents an assessment of the predictive capability of the code, named XLGFBTH (Texas A&M University). A test rig at GRC is used as a simulated case study to compare rotordynamic analysis using output from the code to actual rotor response as measured in the test rig. The test rig rotor is supported on two gas foil journal bearings manufactured at GRC, with all pertinent geometry disclosed. The resulting comparison shows that the rotordynamic coefficients calculated using XLGFBTH represent the dynamics of the system reasonably well, especially as they pertain to predicting critical speeds.

  18. The College Readiness Data Catalog Tool: User Guide. REL 2014-042

    ERIC Educational Resources Information Center

    Rodriguez, Sheila M.; Estacion, Angela

    2014-01-01

    As the name indicates, the College Readiness Data Catalog Tool focuses on identifying data that can indicate a student's college readiness. While college readiness indicators may also signal career readiness, many states, districts, and other entities, including the U.S. Virgin Islands (USVI), do not systematically collect career readiness…

  19. Development of the Ethical Evaluation Questionnaire: A Machiavellian, Utilitarian, and Religious Viewpoint

    ERIC Educational Resources Information Center

    Gokce, Asiye Toker

    2017-01-01

    This study aimed to develop a valid and reliable measurement tool to enhance ethical evaluation literature. The tool consists of two subscales named "Bases of ethical evaluation," and "Grounds of ethical evaluation." In order to determine the factor structure of the scales, both exploratory and confirmatory factor analyses were…

  20. Using Mathematica to Teach Process Units: A Distillation Case Study

    ERIC Educational Resources Information Center

    Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.

    2005-01-01

    The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…

  1. Tools to Ease Your Internet Adventures: Part I.

    ERIC Educational Resources Information Center

    Descy, Don E.

    1993-01-01

    This first of a two-part series highlights three tools that improve accessibility to Internet resources: (1) Alex, a database that accesses files in FTP (file transfer protocol) sites; (2) Archie, software that searches for file names with a user's search term; and (3) Gopher, a menu-driven program to access Internet sites. (LRW)

  2. The Effect of Guided Self-Reflection on Teachers' Technology Use

    ERIC Educational Resources Information Center

    Farber, Susan

    2010-01-01

    The purpose of this study was to pilot an instructional planning tool grounded in guided self-reflection on the instructional planning practices and instructional behaviors of a small sample of teachers. I designed the instructional planning tool, which was named the Informed Technology Integration Guide (ITIG). Participants used the instructional…

  3. Gimli: open source and high-performance biomedical name recognition

    PubMed Central

    2013-01-01

    Background Automatic recognition of biomedical names is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. In recent years, various solutions have been implemented to tackle this problem. However, limitations regarding system characteristics, customization and usability still hinder their wider application outside text mining research. Results We present Gimli, an open-source, state-of-the-art tool for automatic recognition of biomedical names. Gimli includes an extended set of implemented and user-selectable features, such as orthographic, morphological, linguistic-based, conjunctions and dictionary-based. A simple and fast method to combine different trained models is also provided. Gimli achieves an F-measure of 87.17% on GENETAG and 72.23% on JNLPBA corpus, significantly outperforming existing open-source solutions. Conclusions Gimli is an off-the-shelf, ready to use tool for named-entity recognition, providing trained and optimized models for recognition of biomedical entities from scientific text. It can be used as a command line tool, offering full functionality, including training of new models and customization of the feature set and model parameters through a configuration file. Advanced users can integrate Gimli in their text mining workflows through the provided library, and extend or adapt its functionalities. Based on the underlying system characteristics and functionality, both for final users and developers, and on the reported performance results, we believe that Gimli is a state-of-the-art solution for biomedical NER, contributing to faster and better research in the field. Gimli is freely available at http://bioinformatics.ua.pt/gimli. PMID:23413997

  4. NeuronRead, an open source semi-automated tool for morphometric analysis of phase contrast and fluorescence neuronal images.

    PubMed

    Dias, Roberto A; Gonçalves, Bruno P; da Rocha, Joana F; da Cruz E Silva, Odete A B; da Silva, Augusto M F; Vieira, Sandra I

    2017-12-01

    Neurons are specialized cells of the Central Nervous System whose function is intricately related to the neuritic network they develop to transmit information. Morphological evaluation of this network and other neuronal structures is required to establish relationships between neuronal morphology and function, and may allow monitoring physiological and pathophysiologic alterations. Fluorescence-based microphotographs are the most widely used in cellular bioimaging, but phase contrast (PhC) microphotographs are easier to obtain, more affordable, and do not require invasive, complicated and disruptive techniques. Despite the various freeware tools available for fluorescence-based images analysis, few exist that can tackle the more elusive and harder-to-analyze PhC images. To surpass this, an interactive semi-automated image processing workflow was developed to easily extract relevant information (e.g. total neuritic length, average cell body area) from both PhC and fluorescence neuronal images. This workflow, named 'NeuronRead', was developed in the form of an ImageJ macro. Its robustness and adaptability were tested and validated on rat cortical primary neurons under control and differentiation inhibitory conditions. Validation included a comparison to manual determinations and to a golden standard freeware tool for fluorescence image analysis. NeuronRead was subsequently applied to PhC images of neurons at distinct differentiation days and exposed or not to DAPT, a pharmacological inhibitor of the γ-secretase enzyme, which cleaves the well-known Alzheimer's amyloid precursor protein (APP) and the Notch receptor. Data obtained confirms a neuritogenic regulatory role for γ-secretase products and validates NeuronRead as a time- and cost-effective useful monitoring tool. Copyright © 2017. Published by Elsevier Inc.

  5. Parametric analysis of plastic strain and force distribution in single pass metal spinning

    NASA Astrophysics Data System (ADS)

    Choudhary, Shashank; Tejesh, Chiruvolu Mohan; Regalla, Srinivasa Prakash; Suresh, Kurra

    2013-12-01

    Metal spinning, also known as spin forming, is one of the sheet metal working processes by which an axisymmetric part can be formed from a flat sheet metal blank. Parts are produced by pressing a blunt-edged tool or roller onto the blank, which in turn is mounted on a rotating mandrel. This paper discusses setting up a 3-D finite element simulation of single-pass metal spinning in LS-Dyna. Four parameters were considered, namely blank thickness, roller nose radius, feed ratio and mandrel speed, and the variation in forces and plastic strain was analysed using the full-factorial design of experiments (DOE) method of simulation experiments. For some of these DOE runs, physical experiments on extra deep drawing (EDD) sheet metal were carried out using an En31 tool on a lathe machine. The simulation results are able to predict the zone of unsafe thinning in the sheet and the high forming forces that hint at the necessity of less expensive, semi-automated machine tools to help the household and small-scale spinning workers widely prevalent in India.
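
    The full-factorial DOE over the four parameters can be enumerated straightforwardly; the factor levels in the sketch below are illustrative placeholders, not the study's actual settings.

    ```python
    # Full-factorial design of experiments over the four spinning parameters mentioned.
    # The factor levels are illustrative placeholders, not the study's actual settings.
    from itertools import product

    factors = {
        "blank_thickness_mm":    [1.0, 1.5, 2.0],
        "roller_nose_radius_mm": [4.0, 6.0],
        "feed_ratio_mm_per_rev": [0.5, 1.0],
        "mandrel_speed_rpm":     [200, 400],
    }

    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    print(f"{len(runs)} simulation runs")       # 3 * 2 * 2 * 2 = 24 runs
    for run in runs[:3]:
        print(run)
    ```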

  6. Development of a knowledge management system for complex domains.

    PubMed

    Perott, André; Schader, Nils; Bruder, Ralph; Leonhardt, Jörg

    2012-01-01

    Deutsche Flugsicherung GmbH (DFS), the German Air Navigation Service Provider, follows a systematic approach, called HERA, for investigating incidents. The HERA analysis shows a distinctive occurrence of incidents in German air traffic control in which the visual perception of information plays a key role. The reasons can be partially traced back to workstation design, where basic ergonomic rules and principles are not sufficiently followed by the designers in some cases. In cooperation with the Institute of Ergonomics in Darmstadt, the DFS investigated possible approaches that may support designers in implementing ergonomic systems. None of the currently available tools were found to be able to meet the identified user requirements holistically. Therefore, it was suggested to develop an enhanced software tool called the Design Process Guide. The name Design Process Guide indicates that this tool exceeds the classic functions of currently available Knowledge Management Systems. It offers "design element"-based access, shows processual and content-related topics, and shows the implications of certain design decisions. Furthermore, it serves as documentation, detailing why a designer made a decision under a particular set of conditions.

  7. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar M.

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  8. ChEBI in 2016: Improved services and an expanding collection of metabolites

    PubMed Central

    Hastings, Janna; Owen, Gareth; Dekker, Adriano; Ennis, Marcus; Kale, Namrata; Muthukrishnan, Venkatesh; Turner, Steve; Swainston, Neil; Mendes, Pedro; Steinbeck, Christoph

    2016-01-01

    ChEBI is a database and ontology containing information about chemical entities of biological interest. It currently includes over 46 000 entries, each of which is classified within the ontology and assigned multiple annotations including (where relevant) a chemical structure, database cross-references, synonyms and literature citations. All content is freely available and can be accessed online at http://www.ebi.ac.uk/chebi. In this update paper, we describe recent improvements and additions to the ChEBI offering. We have substantially extended our collection of endogenous metabolites for several organisms including human, mouse, Escherichia coli and yeast. Our front-end has also been reworked and updated, improving the user experience, removing our dependency on Java applets in favour of embedded JavaScript components and moving from a monthly release update to a ‘live’ website. Programmatic access has been improved by the introduction of a library, libChEBI, in Java, Python and Matlab. Furthermore, we have added two new tools, namely an analysis tool, BiNChE, and a query tool for the ontology, OntoQuery. PMID:26467479
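
    The paper's libChEBI library is distributed for Java, Python and Matlab; the fragment below sketches the kind of entity lookup the Python distribution (libChEBIpy) is intended for. The class and method names are assumptions based on the package's documented interface and should be checked against the installed release.

    ```python
    # Hedged sketch of a ChEBI entity lookup via the Python libChEBI library.
    # ChebiEntity and get_name() are assumed accessors; verify against the
    # libChEBIpy version actually installed.
    from libchebipy import ChebiEntity

    entity = ChebiEntity("CHEBI:15365")   # e.g. acetylsalicylic acid
    print(entity.get_name())
    ```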

  9. A forensic identification case and DPid - can it be a useful tool?

    PubMed Central

    de QUEIROZ, Cristhiane Leão; BOSTOCK, Ellen Marie; SANTOS, Carlos Ferreira; GUIMARÃES, Marco Aurélio; da SILVA, Ricardo Henrique Alves

    2017-01-01

    Abstract Objective The aim of this study was to show DPid as an important tool of potential application to solve cases involving dental prostheses, such as the forensic case reported, in which a skull, a denture and dental records were received for analysis. Material and Methods Human identification is still challenging in various circumstances. Dental Prosthetics Identification (DPid) stores the patient's name and prosthesis information and provides access through a code embedded in the dental prosthesis or an identification card. All of this information is digitally stored on servers accessible only by dentists, laboratory technicians and patients, each with their own level of secure access. DPid provides a complete single-source list of all dental prosthesis features (materials and components) under complete and secure documentation used for clinical follow-up and for human identification. Results and Conclusion Had the DPid tool been available in this forensic case, the case could have been solved without the DNA exam, which confirmed the dental comparison of antemortem and postmortem records and concluded the case as a positive identification. PMID:28678955

  10. TableViewer for Herschel Data Processing

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Schulz, B.

    2006-07-01

    The TableViewer utility is a GUI tool written in Java to support interactive data processing and analysis for the Herschel Space Observatory (Pilbratt et al. 2001). The idea was inherited from a prototype written in IDL (Schulz et al. 2005). It allows users to graphically view and analyze tabular data organized in columns with equal numbers of rows. It can be run either as a standalone application, where data access is restricted to FITS (FITS 1999) files only, or from the Quick Look Analysis (QLA) or Interactive Analysis (IA) command line, from where objects are also accessible. The graphic display is very versatile, allowing plots in either linear or log scales. Zooming, panning, and changing data columns are performed rapidly using a group of navigation buttons. Selecting and de-selecting fields of data points controls the input to simple analysis tasks such as building a statistics table or generating power spectra. The binary data stored in a TableDataset, a Product or in FITS files can also be displayed as tabular data, where values in individual cells can be modified. TableViewer provides several processing utilities which, besides calculating statistics for all or selected channels and calculating power spectra, allow users to convert/repair datasets by changing the unit name of data columns and by modifying data values in columns with a simple calculator tool. Interactively selected data can be separated out, and modified data sets can be saved to FITS files. The tool will be especially helpful in the early phases of Herschel data analysis, when quick access to the contents of data products is important. TableDataset and Product are Java classes defined in herschel.ia.dataset.
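
    TableViewer itself is a Java GUI; as a loose illustration of one of the operations it offers (a power spectrum of a selected table column), the following Python sketch reads a column from a FITS binary table with astropy and computes a periodogram with numpy. The file name, column name and sampling interval are hypothetical.

    ```python
    # Illustrative only: power spectrum of one column of a FITS binary table.
    import numpy as np
    from astropy.io import fits

    with fits.open("housekeeping.fits") as hdul:                 # hypothetical file
        signal = np.asarray(hdul[1].data["DETECTOR_01"], dtype=float)  # hypothetical column

    signal -= signal.mean()                        # remove DC offset
    power = np.abs(np.fft.rfft(signal)) ** 2
    freq = np.fft.rfftfreq(signal.size, d=1.0)     # assumes 1 s sampling
    print(freq[np.argmax(power[1:]) + 1], "Hz dominant frequency")
    ```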

  11. Self-Shielded Flux Cored Wire Evaluation

    DTIC Science & Technology

    1980-12-01

    [Report documentation page fragment: Naval Surface Warfare Center, Carderock Division, Code 2230 - Design Integration Tools; approved for public release.] Tensile and yield strength, percent elongation, and percent reduction of area were reported. This testing was performed with a Satec 400 WHVP tensile…

  12. Improving Naming Abilities among Healthy Young-Old Adults Using Transcranial Direct Current Stimulation

    ERIC Educational Resources Information Center

    Lifshitz-Ben-Basat, Adi; Mashal, Nira

    2018-01-01

    Transcranial direct current stimulation (tDCS) is a noninvasive tool to facilitate brain plasticity and enhance language abilities. Our study aims to search for a potential beneficial influence of tDCS on a cognitive linguistic task of naming, which is found to decline during aging. A group of fifteen healthy old adults (M = 64.93 ± 5.09 years) were…

  13. Habitat Modeling and Preferences of Marine Mammals as Function of Oceanographic Characteristics: Development of Predictive Tools for Assessing the Risks and the Impacts Due to Sound Emissions

    DTIC Science & Technology

    2011-09-30

    [Report documentation page fragment: Arianna Azzellino, Polytechnic University of Milan, Piazza Leonardo da Vinci 32, 20133 Milano, Italy; phone (+39) 02-239-964-31.]

  14. Application of Factor Analysis on the Financial Ratios of Indian Cement Industry and Validation of the Results by Cluster Analysis

    NASA Astrophysics Data System (ADS)

    De, Anupam; Bandyopadhyay, Gautam; Chakraborty, B. N.

    2010-10-01

    Financial ratio analysis is an important and commonly used tool for analyzing the financial health of a firm. Quite a large number of financial ratios, which can be categorized into different groups, are used for this analysis. However, to reduce the number of ratios to be used for financial analysis and to regroup them into different groups on the basis of empirical evidence, the Factor Analysis technique has been used successfully by different researchers during the last three decades. In this study, Factor Analysis has been applied to audited financial data of Indian cement companies for a period of 10 years. The sample companies are listed on the Indian stock exchanges (BSE and NSE). Factor Analysis, conducted over 44 variables (financial ratios) grouped into 7 categories, resulted in 11 underlying categories (factors). Each factor is named in an appropriate manner considering the factor loadings and constituent variables (ratios). Representative ratios are identified for each such factor. To validate the results of the Factor Analysis and to reach a final conclusion regarding the representative ratios, Cluster Analysis was performed.
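
    The abstract names the two techniques but not an implementation; a minimal sketch of the same two-step idea (factor extraction on a ratio matrix, then clustering to cross-check the grouping) in Python with scikit-learn might look as follows. The data matrix, the number of factors and the number of clusters are placeholders, not the study's values.

    ```python
    # Illustrative two-step analysis: Factor Analysis on a company-by-ratio
    # matrix, followed by k-means clustering of the ratios via their loadings.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 44))        # placeholder: 60 firm-years x 44 ratios

    fa = FactorAnalysis(n_components=11, random_state=0).fit(X)
    loadings = fa.components_.T          # shape (44 ratios, 11 factors)

    # Group ratios with similar loading patterns; the clusters should broadly
    # mirror the factor structure if the two methods agree.
    clusters = KMeans(n_clusters=11, n_init=10, random_state=0).fit_predict(loadings)
    for factor_idx in range(3):
        top = np.argsort(-np.abs(loadings[:, factor_idx]))[:3]
        print(f"factor {factor_idx}: top ratios {top.tolist()}, clusters {clusters[top].tolist()}")
    ```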

  15. BnmrOffice: A Free Software for β-nmr Data Analysis

    NASA Astrophysics Data System (ADS)

    Saadaoui, Hassan

    A data-analysis framework with a graphical user interface (GUI) was developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice, is written in C++ and employs the QT libraries and tools for designing the GUI and CERN's Minuit routines for minimization. The program runs under multiple platforms and is available for free under the terms of the GNU GPL. The GUI is structured in tabs to search, plot and analyze data, along with other functionalities. The user can tweak the minimization options and fit multiple data files (or runs) using single or global fitting routines with pre-defined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.

  16. The DREO Elint Browser Utility (DEBU) reference manual

    NASA Astrophysics Data System (ADS)

    Ford, Barbara; Jones, David

    1992-04-01

    An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed from a user-friendly environment on a personal computer. DEBU's basic function is to allow users to examine the contents of user-selected subfiles of user-selected emitters of user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for doing ambiguity analysis and mode level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.

  17. Integrating FMEA in a Model-Driven Methodology

    NASA Astrophysics Data System (ADS)

    Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno

    2016-08-01

    Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system, by integration into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation and effects from SysML design models, by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.

  18. ReMap 2018: an updated atlas of regulatory regions from an integrative analysis of DNA-binding ChIP-seq experiments.

    PubMed

    Chèneby, Jeanne; Gheorghe, Marius; Artufel, Marie; Mathelier, Anthony; Ballester, Benoit

    2018-01-04

    With this latest release of ReMap (http://remap.cisreg.eu), we present a unique collection of regulatory regions in human, as a result of a large-scale integrative analysis of ChIP-seq experiments for hundreds of transcriptional regulators (TRs) such as transcription factors, transcriptional co-activators and chromatin regulators. In 2015, we introduced the ReMap database to capture the genome regulatory space by integrating public ChIP-seq datasets, covering 237 TRs across 13 million (M) peaks. In this release, we have extended this catalog to constitute a unique collection of regulatory regions. Specifically, we have collected, analyzed and retained after quality control a total of 2829 ChIP-seq datasets available from public sources, covering a total of 485 TRs with a catalog of 80M peaks. Additionally, the updated database includes new search features for TR names as well as aliases, including cell line names, and the ability to navigate the data directly within genome browsers via public track hubs. Finally, full access to this catalog is available online together with a TR binding enrichment analysis tool. ReMap 2018 provides a significant update of the ReMap database, providing an in-depth view of the complexity of the regulatory landscape in human. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. MEMHDX: an interactive tool to expedite the statistical validation and visualization of large HDX-MS datasets.

    PubMed

    Hourdel, Véronique; Volant, Stevenn; O'Brien, Darragh P; Chenal, Alexandre; Chamot-Rooke, Julia; Dillies, Marie-Agnès; Brier, Sébastien

    2016-11-15

    With the continued improvement of requisite mass spectrometers and UHPLC systems, Hydrogen/Deuterium eXchange Mass Spectrometry (HDX-MS) workflows are rapidly evolving towards the investigation of more challenging biological systems, including large protein complexes and membrane proteins. The analysis of such extensive systems results in very large HDX-MS datasets, for which specific analysis tools are required to speed up data validation and interpretation. We introduce a web application and a new R package named 'MEMHDX' to help users analyze, validate and visualize large HDX-MS datasets. MEMHDX is composed of two elements. A statistical tool aids in the validation of the results by applying a mixed-effects model for each peptide, in each experimental condition, and at each time point, taking into account the time dependency of the HDX reaction and the number of independent replicates. Two adjusted P-values are generated per peptide, one for the 'Change in dynamics' and one for the 'Magnitude of ΔD', and are used to classify the data by means of a 'Logit' representation. A user-friendly interface developed with Shiny by RStudio facilitates the use of the package. This interactive tool allows the user to easily and rapidly validate, visualize and compare the relative deuterium incorporation on the amino acid sequence and 3D structure, providing both spatial and temporal information. MEMHDX is freely available as a web tool at the project home page http://memhdx.c3bi.pasteur.fr. Contact: marie-agnes.dillies@pasteur.fr or sebastien.brier@pasteur.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
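
    MEMHDX itself is an R package with a Shiny front end; as a hedged illustration of the per-peptide statistical idea described above (a mixed-effects model per peptide followed by multiple-testing adjustment), a rough Python analogue with statsmodels could look like the following. The column names, the model formula and the factor level name are assumptions for the example, not MEMHDX's actual implementation.

    ```python
    # Rough analogue of a per-peptide mixed-effects test on HDX-MS data.
    # Expected columns (hypothetical): peptide, deuterium_uptake, time,
    # condition, replicate.
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.multitest import multipletests

    df = pd.read_csv("hdx_long_format.csv")          # hypothetical input
    pvalues, peptides = [], []

    for peptide, sub in df.groupby("peptide"):
        # A random intercept per replicate captures the repeated measurements.
        model = smf.mixedlm("deuterium_uptake ~ time * condition",
                            sub, groups=sub["replicate"]).fit()
        pvalues.append(model.pvalues["condition[T.mutant]"])  # assumed level name
        peptides.append(peptide)

    # Benjamini-Hochberg adjustment across peptides.
    reject, p_adj, _, _ = multipletests(pvalues, method="fdr_bh")
    results = pd.DataFrame({"peptide": peptides, "p_adj": p_adj, "significant": reject})
    print(results.head())
    ```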

  20. A Conceptual Wing Flutter Analysis Tool for Systems Analysis and Parametric Design Study

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    2003-01-01

    An interactive computer program was developed for wing flutter analysis in the conceptual design stage. The objective was to estimate flutter instability boundaries of a typical wing when detailed structural and aerodynamic data are not available. Effects of change in key flutter parameters can also be estimated in order to guide the conceptual design. This user-friendly software was developed using MathCad and Matlab codes. The analysis method was based on non-dimensional parametric plots of two primary flutter parameters, namely Regier number and Flutter number, with normalization factors based on wing torsion stiffness, sweep, mass ratio, taper ratio, aspect ratio, center of gravity location and pitch-inertia radius of gyration. These parametric plots were compiled in a Chance-Vought Corporation report from a database of past experiments and wind tunnel test results. An example was presented for conceptual flutter analysis of the outer wing of a Blended-Wing-Body aircraft.

  1. The Challenge of Multiple Perspectives: Multiple Solution Tasks for Students Incorporating Diverse Tools and Representation Systems

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2015-01-01

    This study focuses on the role of multiple solution tasks (MST) incorporating multiple learning tools and representation systems (MTRS) in encouraging each student to develop multiple perspectives on the learning concepts under study and creativity of thought. Specifically, two types of MST were used, namely tasks that allowed and demanded…

  2. Examining Wikipedia's Value as an Information Source Using the California State University-Chico Website Evaluation Guidelines

    ERIC Educational Resources Information Center

    Upchurch, John

    2011-01-01

    The purpose of this work is to examine Wikipedia's role as a tool for instruction in website evaluation. Wikipedia's purpose, structural elements and potential failings as an authoritative information source are examined. Also presented are rationales for using Wikipedia as an instructional tool, namely the overwhelming popularity of Wikipedia.…

  3. The Future of Architecture Collaborative Information Sharing: DoDAF Version 2.03 Updates

    DTIC Science & Technology

    2012-04-30

    [Fragment of a table of architecture tools and vendors, e.g.: Salamander; Select Solution Factory (Select Business Solutions; BPMN, UML); SimonTool (Simon Labs); SimProcess (CACI; BPMN); System Architecture Management … for DoDAF; Mega (UML); Metastorm ProVision (Metastorm; BPMN); Naval Simulation System - 4 Aces (METRON); NetViz (CA); OPNET (OPNET).]

  4. An Ambient Awareness Tool for Supporting Supervised Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Alavi, H. S.; Dillenbourg, P.

    2012-01-01

    We describe an ambient awareness tool, named "Lantern", designed for supporting the learning process in recitation sections (i.e., when students work in small teams on the exercise sets with the help of tutors). Each team is provided with an interactive lamp that displays their work status: the exercise they are working on, if they have…

  5. Program Model Checking as a New Trend

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing-like technologies and static analysis.

  6. Logical positivism as a tool to analyse the problem of chemistry's lack of relevance in secondary school chemical education

    NASA Astrophysics Data System (ADS)

    van Aalsvoort, Joke

    2004-09-01

    Secondary school chemical education has a problem: namely, the seeming irrelevance of chemistry to the pupils. Chemical education prepares pupils for participation in society. Therefore, it must imply a model of society, of chemistry, and of the relation between them. In this article it is hypothesized that logical positivism currently offers this model. Logical positivism is a philosophy of science that creates a divide between science and society. It is therefore further hypothesized that the adoption of logical positivism causes chemistry's lack of relevance in chemical education. Both hypotheses could be confirmed by an analysis of a grade nine course.

  7. Multi-target drugs: the trend of drug research and development.

    PubMed

    Lu, Jin-Jian; Pan, Wei; Hu, Yuan-Jia; Wang, Yi-Tao

    2012-01-01

    Summarizing the status of drugs in the market and examining the trend of drug research and development are important in drug discovery. In this study, we compared the drug targets and the market sales of the new molecular entities approved by the U.S. Food and Drug Administration from January 2000 to December 2009. Two networks, namely the target-target and drug-drug networks, were set up using network analysis tools. The multi-target drugs have much more potential, as shown by the network visualization and the market trends. We discussed the possible reasons and proposed rational strategies for drug research and development in the future.
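
    The abstract mentions building target-target and drug-drug networks but not the tooling; one common way to derive both from a drug-target table is a bipartite projection, sketched below with networkx on made-up data. The drug and target names are placeholders, not the study's dataset.

    ```python
    # Illustrative sketch: build a bipartite drug-target graph and project it
    # onto drug-drug and target-target networks (shared neighbours become edges).
    import networkx as nx
    from networkx.algorithms import bipartite

    drug_target_pairs = [            # placeholder data, not from the study
        ("imatinib", "ABL1"), ("imatinib", "KIT"),
        ("sunitinib", "KIT"), ("sunitinib", "VEGFR2"),
        ("sorafenib", "VEGFR2"), ("sorafenib", "RAF1"),
    ]

    B = nx.Graph()
    drugs = {d for d, _ in drug_target_pairs}
    targets = {t for _, t in drug_target_pairs}
    B.add_nodes_from(drugs, bipartite=0)
    B.add_nodes_from(targets, bipartite=1)
    B.add_edges_from(drug_target_pairs)

    drug_drug = bipartite.projected_graph(B, drugs)        # drugs sharing a target
    target_target = bipartite.projected_graph(B, targets)  # targets sharing a drug
    print(sorted(drug_drug.edges()))
    print(sorted(target_target.edges()))
    ```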

  8. Patient-specific bone modeling and analysis: the role of integration and automation in clinical adoption.

    PubMed

    Zadpoor, Amir A; Weinans, Harrie

    2015-03-18

    Patient-specific analysis of bones is considered an important tool for diagnosis and treatment of skeletal diseases and for clinical research aimed at understanding the etiology of skeletal diseases and the effects of different types of treatment on their progress. In this article, we discuss how integration of several important components enables accurate and cost-effective patient-specific bone analysis, focusing primarily on patient-specific finite element (FE) modeling of bones. First, the different components are briefly reviewed. Then, two important aspects of patient-specific FE modeling, namely integration of modeling components and automation of modeling approaches, are discussed. We conclude with a section on validation of patient-specific modeling results, possible applications of patient-specific modeling procedures, current limitations of the modeling approaches, and possible areas for future research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. The Kinematic Analysis of Flat Leverage Mechanism of the Third Class

    NASA Astrophysics Data System (ADS)

    Zhauyt, A.; Mamatova, G.; Abdugaliyeva, G.; Alipov, K.; Sakenova, A.; Alimbetov, A.

    2017-10-01

    When designing flat link mechanisms of high class, it is necessary to carry out strength calculations of the link mechanisms after the block diagrams and the linear dimensions of the links have been defined, i.e., to rationally choose their forms and determine the section sizes. An algorithm for determining the link lengths of mechanisms of high classes (MHC) and their metric parameters by successive approximation is offered in this work. In this paper, educational and research software named GIM is presented. This software has been developed with the aim of addressing the difficulties students usually encounter when facing up to the kinematic analysis of mechanisms. A deep understanding of kinematic analysis is necessary to go a step further into the design and synthesis of mechanisms. In order to support and complement the theoretical lectures, the GIM software is used during the practical exercises, serving as a complementary educational tool reinforcing the knowledge acquired by the students.

  10. Untargeted Identification of Wood Type-Specific Markers in Particulate Matter from Wood Combustion.

    PubMed

    Weggler, Benedikt A; Ly-Verdu, Saray; Jennerwein, Maximilian; Sippula, Olli; Reda, Ahmed A; Orasche, Jürgen; Gröger, Thomas; Jokiniemi, Jorma; Zimmermann, Ralf

    2016-09-20

    Residential wood combustion emissions are one of the major global sources of particulate and gaseous organic pollutants. However, the detailed chemical compositions of these emissions are poorly characterized due to their highly complex molecular compositions, nonideal combustion conditions, and sample preparation steps. In this study, the particulate organic emissions from a masonry heater using three types of wood logs, namely, beech, birch, and spruce, were chemically characterized using thermal desorption in situ derivatization coupled to a GCxGC-ToF/MS system. Untargeted data analyses were performed using the comprehensive measurements. Univariate and multivariate chemometric tools, such as analysis of variance (ANOVA), principal component analysis (PCA), and ANOVA simultaneous component analysis (ASCA), were used to reduce the data to highly significant and wood type-specific features. This study reveals substances not previously considered in the literature as meaningful markers for differentiation among wood types.

  11. An R package for the design, analysis and operation of reservoir systems

    NASA Astrophysics Data System (ADS)

    Turner, Sean; Ng, Jia Yi; Galelli, Stefano

    2016-04-01

    We present a new R package - named "reservoir" - which has been designed for rapid and easy routing of runoff through storage. The package comprises well-established tools for capacity design (e.g., the sequent peak algorithm), performance analysis (storage-yield-reliability and reliability-resilience-vulnerability analysis) and release policy optimization (Stochastic Dynamic Programming). Operating rules can be optimized for water supply, flood control and amenity objectives, as well as for maximum hydropower production. Storage-depth-area relationships are in-built, allowing users to incorporate evaporation from the reservoir surface. We demonstrate the capabilities of the software for global studies using thousands of reservoirs from the Global Reservoir and Dam (GRanD) database fed by historical monthly inflow time series from a 0.5 degree gridded global runoff dataset. The package is freely available through the Comprehensive R Archive Network (CRAN).
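
    The R package is described rather than shown; as a generic illustration of one of its in-built methods, the sequent peak algorithm for capacity design can be sketched in a few lines of Python (this is the textbook algorithm, not the package's code). The inflow and demand series are placeholders.

    ```python
    # Textbook sequent peak algorithm: the required storage capacity is the
    # maximum accumulated deficit of inflow relative to demand (release target).
    def sequent_peak(inflow, demand):
        deficit, capacity = 0.0, 0.0
        for q, d in zip(inflow, demand):
            deficit = max(0.0, deficit + d - q)   # accumulated shortfall
            capacity = max(capacity, deficit)
        return capacity

    # Placeholder monthly inflows and a constant yield target.
    inflow = [12, 8, 5, 3, 2, 4, 9, 15, 20, 18, 14, 10]
    demand = [9] * 12
    print(sequent_peak(inflow, demand))   # required storage in the same units
    ```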

  12. An integrative system biology approach to unravel potential drug candidates for multiple age related disorders.

    PubMed

    Srivastava, Isha; Khurana, Pooja; Yadav, Mohini; Hasija, Yasha

    2017-12-01

    Aging, though an inevitable part of life, is becoming a worldwide social and economic problem. Healthy aging is usually marked by a low probability of age-related disorders. Good therapeutic approaches are still needed to cure age-related disorders. The occurrence of more than one ARD in an individual underlines the need to discover target proteins that can affect multiple ARDs. Advanced scientific and medical research technologies developed throughout the last three decades have arrived at the point where many key molecular determinants affecting human disorders can be examined thoroughly. In this study, we designed and executed an approach to prioritize drugs that may target multiple age-related disorders. Our methodology, focused on the analysis of biological pathways and protein-protein interaction networks that may contribute to the pharmacology of age-related disorders, included various steps such as retrieval and analysis of data, protein-protein interaction network analysis, statistical and comparative analysis of topological coefficients, pathway and functional enrichment analysis, and identification of drug-target proteins. We assume that the identified molecular determinants may be prioritized for further screening as novel drug targets to cure multiple ARDs. Based on the analysis, an online tool named 'ARDnet' has been developed to construct and demonstrate ARD interactions at the level of PPI, ARD-ARD protein interaction, ARD pathway interaction and drug-target interaction. The tool is freely available at http://genomeinformatics.dtu.ac.in/ARDNet/Index.html. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. On analyzing free-response data on location level

    NASA Astrophysics Data System (ADS)

    Bandos, Andriy I.; Obuchowski, Nancy A.

    2017-03-01

    Free-response ROC (FROC) data are typically collected when the primary question of interest is focused on the proportions of correct detection-localization of known targets and the frequencies of false positive responses, which can be multiple per subject (image). These studies are particularly relevant for CAD and related applications. The fundamental tool of location-level FROC analysis is the FROC curve. Although there are many methods of FROC analysis, as we describe in this work, some of the standard and popular approaches, while important, are not suitable for analyzing specifically the location-level FROC performance as summarized by the FROC curve. Analysis of the FROC curve, on the other hand, might not be straightforward. Recently, we developed an approach for the location-level analysis of FROC data using the well-known tools for clustered ROC analysis. In the current work, based on previously developed concepts and using specific examples, we demonstrate the key reasons why location-level FROC performance cannot be fully addressed by the common approaches, and we illustrate the proposed solution. Specifically, we consider the two most salient FROC approaches, namely JAFROC and the area under the exponentially transformed FROC curve (AFE), and show that clearly superior FROC curves can have lower values for these indices. We describe the specific features that make these approaches inconsistent with FROC curves. This work illustrates some caveats of using the common approaches for location-level FROC analysis and provides guidelines for the appropriate assessment or comparison of FROC systems.

  14. MATILDA Version-2: Rough Earth TIALD Model for Laser Probabilistic Risk Assessment in Hilly Terrain - Part II

    DTIC Science & Technology

    2017-07-28

    [Report documentation page fragment: Air Force Research Laboratory, 711th Human Performance Wing; approved for public release, PA Case No. TSRL-PA-2017-0228.] … the United States (US) Air Force Research Laboratory (AFRL) have collaborated to develop a US-UK laser range safety tool, the Military Advanced Technology…

  15. X-ray Observations of the Sun: Solar Flares and their Impact on the Geophysical Space

    DTIC Science & Technology

    2012-07-01

    [Report fragment: Michele Piana, Dipartimento di Matematica, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy; EOARD Grant 09-3050.] The aim of the present project was to apply computational tools based on…

  16. Targeting Transcription Elongation Machinery for Breast Cancer Therapy

    DTIC Science & Technology

    2016-05-01

    [Report documentation page fragment: annual report, May 2016; contracting organization University of California, Berkeley, CA 94704.] We have employed the CRISPR/Cas9 genome-editing tool to knock out the gene encoding the SEC component AFF4 or knock in a mutant cyclin T1 (AAG…

  17. Targeting Transcription Elongation Machinery for Breast Cancer Therapy

    DTIC Science & Technology

    2016-05-01

    [Report documentation page fragment: annual report, May 2016; contracting organization University of California, Berkeley, CA 94704.] … without affecting the Brd4 or P-TEFb molecules. We have employed the CRISPR/Cas9 genome-editing tool to knock out the gene encoding the SEC component AFF4…

  18. European Upper Atmosphere Server DIAS - Final Conference/ Abstract

    DTIC Science & Technology

    2007-01-10

    [Fragment of D6.8, Report on the Final Conference, organised by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), Rome, Italy.] … the Istituto Nazionale di Geofisica e Vulcanologia in Rome focused on the general overview of scientific and technical tools adopted by the DIAS…

  19. Current Capabilities, Issues, and Trends in LMSs and Authoring Tools

    DTIC Science & Technology

    2009-08-18

    architecture  Embedded best-practice design principles  Support for immersive learning technologies  Support for social media 8 LMSs LMS Functionality is... Learning System Multimedia content Application demos VOIP Real-time Collaboration technologies from Adobe Connect Pro, WebEx, LiveMeeting, & Centra...ORGANIZATION NAME(S) AND ADDRESS(ES) Advanced Decision Learning (ADL),ADL Co-Lab,1901 N. Beauregard Street Suite 600,Alexandria,VA,22311 8

  20. Absenteeism Management

    DTIC Science & Technology

    1995-01-01

    [Report documentation page fragment: 1995 Ship Production Symposium, Paper No. 24: Absenteeism Management. U.S. Department of the Navy, Carderock Division, Naval Surface Warfare Center.]

  1. NASA Administrator Sean O'Keefe speaking at the AirSAR 2004 Mesoamerica hangar naming ceremony

    NASA Image and Video Library

    2004-03-03

    NASA Administrator Sean O'Keefe speaking at the AirSAR 2004 Mesoamerica hangar naming ceremony. AirSAR 2004 Mesoamerica is a three-week expedition by an international team of scientists that will use an all-weather imaging tool, called the Airborne Synthetic Aperture Radar (AirSAR), in a mission ranging from the tropical rain forests of Central America to frigid Antarctica.

  2. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    [Report fragment (China Lake, CA 93555-6110):] … the systems engineering team is responsible for system and software requirements. Process Dashboard is a software planning and tracking tool … Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI…

  3. phyloXML: XML for evolutionary biology and comparative genomics

    PubMed Central

    Han, Mira V; Zmasek, Christian M

    2009-01-01

    Background Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. Results We developed an XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. Conclusion PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, and tools implementing and supporting phyloXML is available online. PMID:19860910
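
    The authors' own tools are not listed in this record; as one independent example of phyloXML support in a widely used library, Biopython's Bio.Phylo module can read and write the format, as sketched below. The file names are hypothetical.

    ```python
    # Reading and inspecting a phyloXML file with Biopython's Bio.Phylo
    # (an independent implementation of the format, not the authors' tools).
    from Bio import Phylo

    tree = Phylo.read("example_tree.xml", "phyloxml")   # hypothetical file
    print(tree.name, tree.count_terminals(), "leaves")
    for clade in tree.find_clades():
        if clade.name:
            print(clade.name, clade.branch_length)

    # Round-trip: phyloXML annotations such as taxonomy or confidences survive.
    Phylo.write(tree, "copy.xml", "phyloxml")
    ```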

  4. Using Workflows to Explore and Optimise Named Entity Recognition for Chemistry

    PubMed Central

    Kolluru, BalaKrishna; Hawizy, Lezan; Murray-Rust, Peter; Tsujii, Junichi; Ananiadou, Sophia

    2011-01-01

    Chemistry text mining tools should be interoperable and adaptable regardless of system-level implementation, installation or even programming issues. We aim to abstract the functionality of these tools from the underlying implementation via reconfigurable workflows for automatically identifying chemical names. To achieve this, we refactored an established named entity recogniser (in the chemistry domain), OSCAR, and studied the impact of each component on the net performance. We developed two reconfigurable workflows from OSCAR using an interoperable text mining framework, U-Compare. These workflows can be altered using the drag-&-drop mechanism of the graphical user interface of U-Compare. These workflows also provide a platform to study the relationship between text mining components such as tokenisation and named entity recognition (using maximum entropy Markov model (MEMM) and pattern recognition based classifiers). Results indicate that, for chemistry in particular, eliminating the noise generated by tokenisation techniques leads to a slightly better performance than other configurations, in terms of named entity recognition (NER) accuracy. Poor tokenisation translates into poorer input to the classifier components, which in turn leads to an increase in Type I or Type II errors, thus lowering the overall performance. On the Sciborg corpus, the workflow-based system, which uses a new tokeniser whilst retaining the same MEMM component, increases the F-score from 82.35% to 84.44%. On the PubMed corpus, it recorded an F-score of 84.84% as against 84.23% by OSCAR. PMID:21633495

  5. Using workflows to explore and optimise named entity recognition for chemistry.

    PubMed

    Kolluru, Balakrishna; Hawizy, Lezan; Murray-Rust, Peter; Tsujii, Junichi; Ananiadou, Sophia

    2011-01-01

    Chemistry text mining tools should be interoperable and adaptable regardless of system-level implementation, installation or even programming issues. We aim to abstract the functionality of these tools from the underlying implementation via reconfigurable workflows for automatically identifying chemical names. To achieve this, we refactored an established named entity recogniser (in the chemistry domain), OSCAR, and studied the impact of each component on the net performance. We developed two reconfigurable workflows from OSCAR using an interoperable text mining framework, U-Compare. These workflows can be altered using the drag-&-drop mechanism of the graphical user interface of U-Compare. These workflows also provide a platform to study the relationship between text mining components such as tokenisation and named entity recognition (using maximum entropy Markov model (MEMM) and pattern recognition based classifiers). Results indicate that, for chemistry in particular, eliminating the noise generated by tokenisation techniques leads to a slightly better performance than other configurations, in terms of named entity recognition (NER) accuracy. Poor tokenisation translates into poorer input to the classifier components, which in turn leads to an increase in Type I or Type II errors, thus lowering the overall performance. On the Sciborg corpus, the workflow-based system, which uses a new tokeniser whilst retaining the same MEMM component, increases the F-score from 82.35% to 84.44%. On the PubMed corpus, it recorded an F-score of 84.84% as against 84.23% by OSCAR.

  6. Categorization for Faces and Tools—Two Classes of Objects Shaped by Different Experience—Differs in Processing Timing, Brain Areas Involved, and Repetition Effects

    PubMed Central

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.

    2018-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions. Furthermore, we hypothesized and tested whether activity within face and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following the stimuli repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differ in processing timing, brain areas involved, and in their dynamics underlying stimuli learning. PMID:29379426

  7. ISAMBARD: an open-source computational environment for biomolecular analysis, modelling and design.

    PubMed

    Wood, Christopher W; Heal, Jack W; Thomson, Andrew R; Bartlett, Gail J; Ibarra, Amaurys Á; Brady, R Leo; Sessions, Richard B; Woolfson, Derek N

    2017-10-01

    The rational design of biomolecules is becoming a reality. However, further computational tools are needed to facilitate and accelerate this, and to make it accessible to more users. Here we introduce ISAMBARD, a tool for structural analysis, model building and rational design of biomolecules. ISAMBARD is open-source, modular, computationally scalable and intuitive to use. These features allow non-experts to explore biomolecular design in silico. ISAMBARD addresses a standing issue in protein design, namely, how to introduce backbone variability in a controlled manner. This is achieved through the generalization of tools for parametric modelling, describing the overall shape of proteins geometrically, and without input from experimentally determined structures. This will allow backbone conformations for entire folds and assemblies not observed in nature to be generated de novo, that is, to access the 'dark matter of protein-fold space'. We anticipate that ISAMBARD will find broad applications in biomolecular design, biotechnology and synthetic biology. A current stable build can be downloaded from the Python Package Index (https://pypi.python.org/pypi/isambard/), with development builds available on GitHub (https://github.com/woolfson-group/) along with documentation, tutorial material and all the scripts used to generate the data described in this paper. Contact: d.n.woolfson@bristol.ac.uk or chris.wood@bristol.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  8. Decoding the genome with an integrative analysis tool: combinatorial CRM Decoder.

    PubMed

    Kang, Keunsoo; Kim, Joomyeong; Chung, Jae Hoon; Lee, Daeyoup

    2011-09-01

    The identification of genome-wide cis-regulatory modules (CRMs) and characterization of their associated epigenetic features are fundamental steps toward the understanding of gene regulatory networks. Although integrative analysis of available genome-wide information can provide new biological insights, the lack of novel methodologies has become a major bottleneck. Here, we present a comprehensive analysis tool called combinatorial CRM decoder (CCD), which utilizes the publicly available information to identify and characterize genome-wide CRMs in a species of interest. CCD first defines a set of the epigenetic features which is significantly associated with a set of known CRMs as a code called 'trace code', and subsequently uses the trace code to pinpoint putative CRMs throughout the genome. Using 61 genome-wide data sets obtained from 17 independent mouse studies, CCD successfully catalogued ∼12 600 CRMs (five distinct classes) including polycomb repressive complex 2 target sites as well as imprinting control regions. Interestingly, we discovered that ∼4% of the identified CRMs belong to at least two different classes named 'multi-functional CRM', suggesting their functional importance for regulating spatiotemporal gene expression. From these examples, we show that CCD can be applied to any potential genome-wide datasets and therefore will shed light on unveiling genome-wide CRMs in various species.

  9. Complex Dynamics of Equatorial Scintillation

    NASA Astrophysics Data System (ADS)

    Piersanti, Mirko; Materassi, Massimo; Forte, Biagio; Cicone, Antonio

    2017-04-01

    Radio power scintillation, namely highly irregular fluctuations of the power of trans-ionospheric GNSS signals, is the effect of ionospheric plasma turbulence. The scintillation patterns on radio signals crossing the medium inherit the ionospheric turbulence characteristics of inter-scale coupling, local randomness and large time variability. On this basis, the remote sensing of local features of the turbulent plasma is feasible by studying radio scintillation induced by the ionosphere. The distinctive character of intermittent turbulent media depends on the fluctuations in the space- and time-scale statistical properties of the medium. Hence, assessing how the signal fluctuation properties vary under different helio-geophysical conditions will help to understand the corresponding dynamics of the turbulent medium crossed by the signal. Data analysis tools provided by complex system science appear to be best suited to study the response of a turbulent medium, such as the Earth's equatorial ionosphere, to the non-linear forcing exerted by the Solar Wind (SW). In particular, we used Adaptive Local Iterative Filtering, wavelet analysis and information-theory data analysis tools. We have analysed radio scintillation and ionospheric fluctuation data at low latitude, focusing on the time and space multi-scale variability and on the causal relationship between forcing factors from the SW environment and the ionospheric response.
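
    As a small, generic illustration of one of the named techniques (wavelet analysis of a scintillation-like time series), the sketch below uses PyWavelets; it is not the authors' pipeline, and the synthetic signal and sampling rate are placeholders.

    ```python
    # Generic multiscale decomposition of a synthetic, intermittently
    # fluctuating signal with PyWavelets (illustrative only).
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    t = np.linspace(0, 600, 6000)                        # 10 min at 10 Hz (assumed)
    signal = np.sin(2 * np.pi * 0.05 * t) + 0.3 * rng.standard_normal(t.size)
    signal[3000:3500] += rng.standard_normal(500)        # burst of "scintillation"

    coeffs = pywt.wavedec(signal, "db4", level=5)        # discrete wavelet transform
    for lvl, c in enumerate(coeffs[1:], start=1):
        print(f"detail band {lvl}: variance {np.var(c):.3f}")
    ```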

  10. Model diagnostics in reduced-rank estimation

    PubMed Central

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of the primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage score and generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with hand-written digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches. PMID:28003860

  11. Model diagnostics in reduced-rank estimation.

    PubMed

    Chen, Kun

    2016-01-01

    Reduced-rank methods are very popular in high-dimensional multivariate analysis for conducting simultaneous dimension reduction and model estimation. However, the commonly-used reduced-rank methods are not robust, as the underlying reduced-rank structure can be easily distorted by only a few data outliers. Anomalies are bound to exist in big data problems, and in some applications they themselves could be of the primary interest. While naive residual analysis is often inadequate for outlier detection due to potential masking and swamping, robust reduced-rank estimation approaches could be computationally demanding. Under Stein's unbiased risk estimation framework, we propose a set of tools, including leverage score and generalized information score, to perform model diagnostics and outlier detection in large-scale reduced-rank estimation. The leverage scores give an exact decomposition of the so-called model degrees of freedom to the observation level, which leads to an exact decomposition of many commonly-used information criteria; the resulting quantities are thus named information scores of the observations. The proposed information score approach provides a principled way of combining the residuals and leverage scores for anomaly detection. Simulation studies confirm that the proposed diagnostic tools work well. A pattern recognition example with hand-written digit images and a time series analysis example with monthly U.S. macroeconomic data further demonstrate the efficacy of the proposed approaches.

  12. BioSurfDB: knowledge and algorithms to support biosurfactants and biodegradation studies

    PubMed Central

    Oliveira, Jorge S.; Araújo, Wydemberg; Lopes Sales, Ana Isabela; de Brito Guerra, Alaine; da Silva Araújo, Sinara Carla; de Vasconcelos, Ana Tereza Ribeiro; Agnez-Lima, Lucymara F.; Freitas, Ana Teresa

    2015-01-01

    Crude oil extraction, transportation and use cause the contamination of countless ecosystems. Therefore, bioremediation through surfactant mobilization or biodegradation is an important subject, both economically and environmentally. Bioremediation research received a great boost with the recent advances in Metagenomics, as they enabled the sequencing of uncultured microorganisms, providing new insights into surfactant-producing and/or oil-degrading bacteria. Many research studies are making available genomic data from unknown organisms obtained from metagenomic analysis of oil-contaminated environmental samples. These new datasets are presently demanding the development of new tools and data repositories tailored for biological analysis in the context of bioremediation data analysis. This work presents BioSurfDB, www.biosurfdb.org, a curated relational information system integrating data from: (i) metagenomes; (ii) organisms; (iii) biodegradation-relevant genes, proteins and their metabolic pathways; (iv) bioremediation experiment results, with specific pollutant treatment efficiencies by surfactant-producing organisms; and (v) a curated biosurfactant list, grouped by producing organism, surfactant name, class and reference. The main goal of this repository is to gather information on the characterization of biological compounds and mechanisms involved in biosurfactant production and/or biodegradation and to make it available in a curated way, associated with a number of computational tools to support studies of genomic and metagenomic data. Database URL: www.biosurfdb.org PMID:25833955

  13. Development of materials for the rapid manufacture of die cast tooling

    NASA Astrophysics Data System (ADS)

    Hardro, Peter Jason

    The focus of this research is to develop a material composition that can be processed by rapid prototyping (RP) in order to produce tooling for the die casting process, where these rapidly produced tools would be superior to traditional tooling production methods by offering one or more of the following advantages: reduced tooling cost, shortened tooling creation time, reduced man-hours for tool creation, increased tool life, and shortened die casting cycle time. By utilizing RP's additive build process and vast material selection, there was a prospect that die cast tooling might be produced more quickly and with superior material properties. To this end, the material properties that influence die life and cycle time were determined, and a list of materials that fulfill these "optimal" properties was highlighted. Physical testing was conducted in order to grade the processability of each of the material systems and to optimize the manufacturing process for the downselected material system. Sample specimens were produced and microscopy techniques were utilized to determine a number of physical properties of the material system. Additionally, a benchmark geometry was selected and die casting dies were produced from traditional tool materials (H13 steel) and techniques (machining) and from the newly developed materials and RP techniques (selective laser sintering (SLS) and laser engineered net shaping (LENS)). Once the tools were created, a die cast alloy was selected and a preset number of parts were shot into each tool. During tool creation, the manufacturing time and cost were closely monitored and an economic model was developed to compare traditional tooling to RP tooling. This model allows one to determine, in the early design stages, when it is advantageous to implement RP tooling and when traditional tooling would be best. The results of the physical testing and economic analysis have shown that RP tooling is able to achieve a number of the research objectives, namely, to reduce tooling cost, shorten tooling creation time, and reduce the man-hours needed for tool creation, though identifying the appropriate time to use RP tooling appears to be the most important aspect in achieving successful implementation.

  14. Analytically Quantifying Gains in the Test and Evaluation Process through Capabilities-Based Analysis

    DTIC Science & Technology

    2011-09-01

    [Report documentation page fragment: Eric J. Lednicky, Naval Postgraduate School, Monterey, CA 93943-5000; measures of effectiveness / measures of performance.]

  15. SentiHealth-Cancer: A sentiment analysis tool to help detecting mood of patients in online social networks.

    PubMed

    Rodrigues, Ramon Gouveia; das Dores, Rafael Marques; Camilo-Junior, Celso G; Rosa, Thierson Couto

    2016-01-01

    Cancer is a critical disease that affects millions of people and families around the world. In 2012 about 14.1 million new cases of cancer occurred globally. For many reasons, such as the severity of some cases, the side effects of some treatments and the death of other patients, cancer patients tend to be affected by serious emotional disorders, like depression. Thus, monitoring the mood of the patients is an important part of their treatment. Many cancer patients are users of online social networks and many of them take part in cancer virtual communities where they exchange messages commenting on their treatment or giving support to other patients in the community. Most of these communities are publicly accessible and are thus useful sources of information about the mood of patients. Based on that, sentiment analysis methods can be useful to automatically detect the positive or negative mood of cancer patients by analyzing their messages in these online communities. The objective of this work is to present a sentiment analysis tool, named SentiHealth-Cancer (SHC-pt), that improves the detection of the emotional state of patients in Brazilian online cancer communities by inspecting their posts written in Portuguese. SHC-pt is a sentiment analysis tool tailored specifically to detect positive, negative or neutral messages of patients in online communities of cancer patients. We conducted a comparative study of the proposed method with a set of general-purpose sentiment analysis tools adapted to this context. Different collections of posts were obtained from two cancer communities on Facebook. The posts were analyzed by sentiment analysis tools that support the Portuguese language (Semantria and SentiStrength) and by the tool SHC-pt, developed based on the method proposed in this paper, called SentiHealth. Moreover, as a second alternative for analyzing the texts in Portuguese, the collected texts were automatically translated into English and submitted to sentiment analysis tools that do not support the Portuguese language (AlchemyAPI and Textalytics), and also to Semantria and SentiStrength using the English option of these tools. Six experiments were conducted with some variations and different origins of the collected posts. The results were measured using the following metrics: precision, recall, F1-measure and accuracy. The proposed tool SHC-pt reached the best averages for accuracy and F1-measure (harmonic mean between recall and precision) in the three sentiment classes addressed (positive, negative and neutral) in all experimental settings. Moreover, the worst accuracy value (58%) achieved by SHC-pt in any experiment is 11.53% better than the greatest accuracy (52%) presented by the other tools. Finally, the worst average F1 (48.46%) reached by SHC-pt in any experiment is 4.14% better than the greatest average F1 (46.53%) achieved by the other tools. Thus, even when the SHC-pt results in a complex scenario are compared against those of the other tools in an easier scenario, SHC-pt is better. This paper presents two contributions. First, it proposes the method SentiHealth to detect the mood of cancer patients who are also users of communities of patients in online social networks. Second, it presents a tool instantiated from the method, called SentiHealth-Cancer (SHC-pt), dedicated to automatically analyzing posts in communities of cancer patients, based on SentiHealth. This context-tailored tool outperformed other general-purpose sentiment analysis tools, at least in the cancer context. This suggests that the SentiHealth method could be instantiated as other disease-specific tools in future work, for instance SentiHealth-HIV, SentiHealth-Stroke and SentiHealth-Sclerosis. Copyright © 2015. Published by Elsevier Ireland Ltd.

  16. The HDF Product Designer - Interoperability in the First Mile

    NASA Astrophysics Data System (ADS)

    Lee, H.; Jelenak, A.; Habermann, T.

    2014-12-01

    Interoperable data have been a long-time goal in many scientific communities. The recent growth in analysis, visualization and mash-up applications that expect data stored in a standardized manner has brought the interoperability issue to the fore. On the other hand, producing interoperable data is often regarded as a sideline task in a typical research team for which resources are not readily available. The HDF Group is developing a software tool aimed at lessening the burden of creating data in standards-compliant, interoperable HDF5 files. The tool, named HDF Product Designer, lowers the threshold needed to design such files by providing a user interface that combines the rich HDF5 feature set with applicable metadata conventions. Users can quickly devise new HDF5 files while at the same time seamlessly incorporating the latest best practices and conventions from their community. That is what the term interoperability in the first mile means: enabling generation of interoperable data in HDF5 files from the onset of their production. The tool also incorporates collaborative features, allowing a team approach to file design, as well as easy transfer of best practices as they are being developed. The current state of the tool and the plans for future development will be presented. Constructive input from interested parties is always welcome.
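
    As a rough illustration of the kind of standards-compliant HDF5 output such a design tool targets, the sketch below uses the h5py library to write a small file whose datasets and attributes follow common CF-style conventions. The file name, variable names and attribute values are illustrative assumptions, not output of HDF Product Designer.

      # Minimal sketch of a convention-following HDF5 file, written with h5py.
      # Dataset and attribute names follow common CF-style practice and are
      # illustrative only; they are not produced by HDF Product Designer.
      import numpy as np
      import h5py

      with h5py.File("example_product.h5", "w") as f:
          f.attrs["title"] = "Example gridded product"
          f.attrs["Conventions"] = "CF-1.6"          # assumed convention label

          lat = f.create_dataset("lat", data=np.linspace(-90.0, 90.0, 181))
          lat.attrs["units"] = "degrees_north"
          lat.attrs["standard_name"] = "latitude"

          lon = f.create_dataset("lon", data=np.linspace(-180.0, 179.0, 360))
          lon.attrs["units"] = "degrees_east"
          lon.attrs["standard_name"] = "longitude"

          temp = f.create_dataset("surface_temperature",
                                  data=np.zeros((181, 360), dtype="f4"))
          temp.attrs["units"] = "K"
          temp.attrs["long_name"] = "surface temperature"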

  17. The EarthKAM project: creating space imaging tools for teaching and learning

    NASA Astrophysics Data System (ADS)

    Dodson, Holly; Levin, Paula; Ride, Sally; Souviney, Randall

    2000-07-01

    The EarthKAM Project is a NASA-supported partnership of secondary and university students with Earth Science and educational researchers. This report describes an ongoing series of activities that more effectively integrate Earth images into classroom instruction. In this project, students select and analyze images of the Earth taken during Shuttle flights and use the tools of modern science (computers, data analysis tools and the Internet) to disseminate the images and results of their research. A related study, the Visualizing Earth Project, explores in greater detail the cognitive aspects of image processing and the educational potential of visualizations in science teaching and learning. The content and organization of the EarthKAM datasystem of images and metadata are also described. An associated project is linking this datasystem of images with the Getty Thesaurus of Geographic Names, which will allow users to access a wide range of geographic and political information for the regions shown in EarthKAM images. Another project will provide tools for automated feature extraction from EarthKAM images. In order to make EarthKAM resources available to a larger number of schools, the next important goal is to create an integrated datasystem that combines iterative resource validation and publication, with multimedia management of instructional materials.

  18. A set of high quality colour images with Spanish norms for seven relevant psycholinguistic variables: the Nombela naming test.

    PubMed

    Moreno-Martinez, Francisco Javier; Montoro, Pedro R; Laws, Keith R

    2011-05-01

    This paper presents a new corpus of 140 high quality colour images belonging to 14 subcategories and covering a range of naming difficulty. One hundred and six Spanish speakers named the items and provided data for several psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from internet search hits. Apart from the large number of variables evaluated, these stimuli present an important advantage with respect to other comparable image corpora insofar as naming performance in healthy individuals is less prone to ceiling effect problems. Reliability and validity indexes showed that our items display similar psycholinguistic characteristics to those of other corpora. In sum, this set of ecologically valid stimuli provides a useful tool for scientists engaged in cognitive and neuroscience-based research.

  19. LiPD and CSciBox: A Case Study in Why Data Standards are Important for Paleoscience

    NASA Astrophysics Data System (ADS)

    Weiss, I.; Bradley, E.; McKay, N.; Emile-Geay, J.; de Vesine, L. R.; Anderson, K. A.; White, J. W. C.; Marchitto, T. M., Jr.

    2016-12-01

    CSciBox [1] is an integrated software system that helps geoscientists build and evaluate age models. Its user chooses from a number of built-in analysis tools, composing them into an analysis workflow and applying it to paleoclimate proxy datasets. CSciBox employs modern database technology to store both the data and the analysis results in an easily accessible and searchable form, and offers the user access to the computational toolbox, the data, and the results via a graphical user interface and a sophisticated plotter. Standards are a staple of modern life, and underlie any form of automation. Without data standards, it is difficult, if not impossible, to construct effective computer tools for paleoscience analysis. The LiPD (Linked Paleo Data) framework [2] enables the storage of both data and metadata in systematic, meaningful, machine-readable ways. LiPD has been a primary enabler of CSciBox's goals of usability, interoperability, and reproducibility. Building LiPD capabilities into CSciBox's importer, for instance, eliminated the need to ask the user about file formats, variable names, relationships between columns in the input file, etc. Building LiPD capabilities into the exporter facilitated the storage of complete details about the input data (provenance, preprocessing steps, etc.) as well as full descriptions of any analyses that were performed using the CSciBox tool, along with citations to appropriate references. This comprehensive collection of data and metadata, which is all linked together in a semantically meaningful, machine-readable way, not only completely documents the analyses and makes them reproducible, but also enables interoperability with any other software system that employs the LiPD standard. [1] www.cs.colorado.edu/ lizb/cscience.html [2] McKay & Emile-Geay, Climate of the Past 12:1093 (2016)
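
    To make the idea of machine-readable, linked paleo metadata concrete, the toy record below shows how an importer can look up a variable by its declared name and units instead of by column position. The field names are invented for this sketch and do not reproduce the actual LiPD schema.

      # Illustrative only: a toy, LiPD-inspired record showing how linked, machine-readable
      # metadata lets an importer find variables by meaning instead of column position.
      # Field names here are invented for the sketch and are NOT the actual LiPD schema.
      import json

      record = {
          "dataSetName": "ExampleLake.Sediment",
          "paleoData": {
              "columns": [
                  {"variableName": "depth", "units": "cm", "values": [0.5, 1.5, 2.5]},
                  {"variableName": "d18O", "units": "permil", "values": [-3.1, -2.8, -2.9]},
              ]
          },
          "chronData": {
              "columns": [
                  {"variableName": "age", "units": "yr BP", "values": [120, 410, 760]},
              ]
          },
      }

      def column(rec, section, name):
          """Look up a column by variable name rather than by file position."""
          for col in rec[section]["columns"]:
              if col["variableName"] == name:
                  return col
          raise KeyError(name)

      print(json.dumps(column(record, "paleoData", "d18O"), indent=2))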

  20. An open source software for analysis of dynamic contrast enhanced magnetic resonance images: UMMPerfusion revisited.

    PubMed

    Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald

    2016-01-14

    Perfusion imaging has become an important image-based tool to derive physiological information in various applications, like tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one-compartment model (1CP), a two-compartment exchange model (2CXM), a two-compartment uptake model (2CUM), a two-compartment filtration model (2FM) and the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing an implementation of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically, and reports generated during data analysis ensure a degree of quality control.
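
    For orientation, the extended Tofts model mentioned above relates the tissue concentration C_t(t) to the arterial input function C_p(t) by C_t(t) = v_p*C_p(t) + Ktrans * integral from 0 to t of C_p(tau)*exp(-k_ep*(t - tau)) dtau, with k_ep = Ktrans/v_e. The sketch below evaluates this numerically with NumPy on a toy input function; it is a generic illustration of the model, not code from the UMMPerfusion plugin.

      # Generic numerical evaluation of the extended Tofts model (not UMMPerfusion code).
      # C_t(t) = v_p * C_p(t) + Ktrans * integral_0^t C_p(tau) * exp(-k_ep * (t - tau)) dtau
      import numpy as np

      def extended_tofts(t, cp, ktrans, ve, vp):
          """Tissue concentration for an arterial input function cp sampled at times t."""
          kep = ktrans / ve
          dt = t[1] - t[0]                                 # assumes uniform sampling
          kernel = np.exp(-kep * t)
          conv = np.convolve(cp, kernel)[: len(t)] * dt    # discrete convolution integral
          return vp * cp + ktrans * conv

      t = np.arange(0.0, 300.0, 1.0)                                # seconds
      cp = np.where(t > 10, 5.0 * np.exp(-(t - 10) / 80.0), 0.0)    # toy input function
      ct = extended_tofts(t, cp, ktrans=0.12 / 60, ve=0.3, vp=0.05) # Ktrans given per second
      print(ct[:5])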

  1. Modelling and interpreting spectral energy distributions of galaxies with BEAGLE

    NASA Astrophysics Data System (ADS)

    Chevallard, Jacopo; Charlot, Stéphane

    2016-10-01

    We present a new-generation tool to model and interpret spectral energy distributions (SEDs) of galaxies, which incorporates in a consistent way the production of radiation and its transfer through the interstellar and intergalactic media. This flexible tool, named BEAGLE (for BayEsian Analysis of GaLaxy sEds), allows one to build mock galaxy catalogues as well as to interpret any combination of photometric and spectroscopic galaxy observations in terms of physical parameters. The current version of the tool includes versatile modelling of the emission from stars and photoionized gas, attenuation by dust and accounting for different instrumental effects, such as spectroscopic flux calibration and line spread function. We show a first application of the BEAGLE tool to the interpretation of broad-band SEDs of a published sample of ˜ 10^4 galaxies at redshifts 0.1 ≲ z ≲ 8. We find that the constraints derived on photometric redshifts using this multipurpose tool are comparable to those obtained using public, dedicated photometric-redshift codes and quantify this result in a rigorous statistical way. We also show how the post-processing of BEAGLE output data with the PYTHON extension PYP-BEAGLE allows the characterization of systematic deviations between models and observations, in particular through posterior predictive checks. The modular design of the BEAGLE tool allows easy extensions to incorporate, for example, the absorption by neutral galactic and circumgalactic gas, and the emission from an active galactic nucleus, dust and shock-ionized gas. Information about public releases of the BEAGLE tool will be maintained on http://www.jacopochevallard.org/beagle.

  2. Novel near-infrared spectrum analysis tool: Synergy adaptive moving window model based on immune clone algorithm.

    PubMed

    Wang, Shenghao; Zhang, Yuyan; Cao, Fuyi; Pei, Zhenying; Gao, Xuewei; Zhang, Xu; Zhao, Yong

    2018-02-13

    This paper presents a novel spectrum analysis tool named synergy adaptive moving window modeling based on immune clone algorithm (SA-MWM-ICA), motivated by the tedious and inconvenient labor involved in the selection of pre-processing methods and spectral variables by prior experience. In this work, the immune clone algorithm is first introduced into the spectrum analysis field as a new optimization strategy, addressing the shortcomings of the traditional methods. Based on the working principle of the human immune system, the performance of the quantitative model is regarded as the antigen, and a special vector corresponding to the above-mentioned antigen is regarded as the antibody. The antibody contains a pre-processing method optimization region, which is encoded by 11 decimal digits, and a spectrum variable optimization region, which is formed by a set of moving windows with changeable width and position. A set of original antibodies is created by modeling with this algorithm. After calculating the affinity of these antibodies, those with high affinity are selected for cloning. The rule for cloning is that the higher the affinity, the more copies are made. In the next step, another important operation named hyper-mutation is applied to the antibodies after cloning; the rule for hyper-mutation is that the lower the affinity, the higher the mutation probability. Several antibodies with high affinity are created on the basis of these steps. A simulated dataset, a gasoline near-infrared spectra dataset, and a soil near-infrared spectra dataset are employed to verify and illustrate the performance of SA-MWM-ICA. Analysis results show that the quantitative models obtained with SA-MWM-ICA perform better than traditional models such as partial least squares (PLS), moving window PLS (MWPLS), genetic algorithm PLS (GAPLS), and pretreatment method classification and adjustable parameter changeable size moving window PLS (CA-CSMWPLS), especially for relatively complex spectra. The selected pre-processing methods and spectrum variables are easily explained. The proposed method converges in a few generations and can be used not only for near-infrared spectroscopy analysis but also for other similar spectral analyses, such as infrared spectroscopy. Copyright © 2017 Elsevier B.V. All rights reserved.
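
    The clone-and-hypermutate loop described above follows the general pattern of a clonal selection algorithm. The sketch below shows that generic pattern on a toy objective function; the population size, clone counts, mutation scaling and affinity function are placeholders, and this is not the SA-MWM-ICA implementation.

      # Generic clonal-selection loop (toy illustration, not SA-MWM-ICA itself):
      # antibodies with higher affinity get more clones; clones of lower-ranked
      # antibodies are mutated more strongly.
      import numpy as np

      rng = np.random.default_rng(0)

      def affinity(x):
          """Toy affinity: higher is better (peak at x = 0)."""
          return -np.sum(x ** 2)

      dim, pop_size, n_generations = 5, 20, 50
      population = rng.uniform(-5, 5, size=(pop_size, dim))

      for _ in range(n_generations):
          aff = np.array([affinity(x) for x in population])
          order = np.argsort(aff)[::-1]                  # best first
          clones = []
          for rank, idx in enumerate(order):
              n_clones = max(1, pop_size // (rank + 1))  # more clones for higher affinity
              scale = 0.1 * (rank + 1)                   # stronger mutation for lower affinity
              for _ in range(n_clones):
                  clones.append(population[idx] + rng.normal(0, scale, dim))
          clones = np.array(clones)
          clone_aff = np.array([affinity(x) for x in clones])
          population = clones[np.argsort(clone_aff)[::-1][:pop_size]]  # keep the best

      best = population[0]
      print("best solution:", best, "affinity:", affinity(best))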

  3. The Induction of Chaos in Electronic Circuits Final Report-October 1, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R.M.Wheat, Jr.

    2003-04-01

    This project, now known by the name ''Chaos in Electronic Circuits,'' was originally tasked as a two-year project to examine various ''fault'' or ''non-normal'' operational states of common electronic circuits with some focus on determining the feasibility of exploiting these states. Efforts over the two-year duration of this project have been dominated by the study of the chaotic behavior of electronic circuits. These efforts have included setting up laboratory space and hardware for conducting laboratory tests and experiments, acquiring and developing computer simulation and analysis capabilities, conducting literature surveys, developing test circuitry and computer models to exercise and test our capabilities, and experimenting with and studying the use of RF injection as a means of inducing chaotic behavior in electronics. An extensive array of nonlinear time series analysis tools has been developed and integrated into a package named ''After Acquisition'' (AA), including capabilities such as Delayed Coordinate Embedding Mapping (DCEM), Time Resolved (3-D) Fourier Transform, and several other phase space re-creation methods. Many computer models have been developed for Spice and for the ATP (Alternative Transients Program), modeling the several working circuits that have been developed for use in the laboratory. And finally, methods of induction of chaos in electronic circuits have been explored.
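
    Delayed coordinate embedding, one of the phase-space re-creation methods listed above, rebuilds a state-space trajectory from a single measured time series by stacking time-delayed copies of it. The short sketch below shows the standard construction; the signal, delay and embedding dimension are arbitrary examples, and this is not code from the ''After Acquisition'' package.

      # Standard delay-coordinate embedding of a scalar time series
      # (generic illustration; not the ''After Acquisition'' package).
      import numpy as np

      def delay_embed(x, dim, tau):
          """Return an (N, dim) array whose rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

      t = np.arange(0, 100, 0.01)
      x = np.sin(t) + 0.5 * np.sin(2.3 * t)      # toy signal standing in for a measured voltage
      embedded = delay_embed(x, dim=3, tau=25)   # 3-D reconstructed trajectory
      print(embedded.shape)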

  4. Positioning Tool Validation Report

    DTIC Science & Technology

    1999-11-01

    Report front-matter fragment: U.S. Coast Guard Research and Development Center, 1082 Shennecossett Road, Groton, CT 06340-6096; Report No. CG-D-06-00, Positioning Tool Validation Report; author Jay Spalding; the contents do not constitute a standard, specification, or regulation (Marc B. Mandler, Ph.D., Technical Director, United States Coast Guard Research & Development Center).

  5. A New Roman World: Using Virtual Reality Technology as a Critical Teaching Tool.

    ERIC Educational Resources Information Center

    Kuo, Elaine W.; Levis, Marc R.

    The purpose of this study is to examine how technology, namely virtual reality (VR), can be developed as a critical pedagogical tool. More specifically, the study explores whether the use of VR can challenge the traditional lecture format and make the classroom a more student-centered environment. In this instance, VR is defined as a set of…

  6. The Electronic View Box: a software tool for radiation therapy treatment verification.

    PubMed

    Bosch, W R; Low, D A; Gerber, R L; Michalski, J M; Graham, M V; Perez, C A; Harms, W B; Purdy, J A

    1995-01-01

    We have developed a software tool for interactively verifying treatment plan implementation. The Electronic View Box (EVB) tool copies the paradigm of current practice but does so electronically. A portal image (online portal image or digitized port film) is displayed side by side with a prescription image (digitized simulator film or digitally reconstructed radiograph). The user can measure distances between features in prescription and portal images and "write" on the display, either to approve the image or to indicate required corrective actions. The EVB tool also provides several features not available in conventional verification practice using a light box. The EVB tool has been written in ANSI C using the X window system. The tool makes use of the Virtual Machine Platform and Foundation Library specifications of the NCI-sponsored Radiation Therapy Planning Tools Collaborative Working Group for portability into an arbitrary treatment planning system that conforms to these specifications. The present EVB tool is based on an earlier Verification Image Review tool, but with a substantial redesign of the user interface. A graphical user interface prototyping system was used in iteratively refining the tool layout to allow rapid modifications of the interface in response to user comments. Features of the EVB tool include 1) hierarchical selection of digital portal images based on physician name, patient name, and field identifier; 2) side-by-side presentation of prescription and portal images at equal magnification and orientation, and with independent grayscale controls; 3) "trace" facility for outlining anatomical structures; 4) "ruler" facility for measuring distances; 5) zoomed display of corresponding regions in both images; 6) image contrast enhancement; and 7) communication of portal image evaluation results (approval, block modification, repeat image acquisition, etc.). The EVB tool facilitates the rapid comparison of prescription and portal images and permits electronic communication of corrections in port shape and positioning.

  7. Measuring Nepotism through Shared Last Names: Are We Really Moving from Opinions to Facts?

    PubMed Central

    Ferlazzo, Fabio; Sdoia, Stefano

    2012-01-01

    Nepotistic practices are detrimental for academia. An analysis of shared last names among academics was recently proposed to measure the diffusion of nepotism, the results of which have had a huge resonance. This method was thus proposed to orient the decisions of policy makers concerning cuts and funding. Because of the social relevance of this issue, the validity of this method must be assessed. Thus, we compared results from an analysis of Italian and United Kingdom academic last names, and of Italian last and given names. The results strongly suggest that the analysis of shared last names is not a measure of nepotism, as it is largely affected by social capital, professional networking and demographic effects, whose contribution is difficult to assess. Thus, the analysis of shared last names is not useful for guiding research policy. PMID:22937063

  8. Evolvix BEST Names for semantic reproducibility across code2brain interfaces.

    PubMed

    Loewe, Laurence; Scheuer, Katherine S; Keel, Seth A; Vyas, Vaibhav; Liblit, Ben; Hanlon, Bret; Ferris, Michael C; Yin, John; Dutra, Inês; Pietsch, Anthony; Javid, Christine G; Moog, Cecilia L; Meyer, Jocelyn; Dresel, Jerdon; McLoone, Brian; Loberger, Sonya; Movaghar, Arezoo; Gilchrist-Scott, Morgaine; Sabri, Yazeed; Sescleifer, Dave; Pereda-Zorrilla, Ivan; Zietlow, Andrew; Smith, Rodrigo; Pietenpol, Samantha; Goldfinger, Jacob; Atzen, Sarah L; Freiberg, Erika; Waters, Noah P; Nusbaum, Claire; Nolan, Erik; Hotz, Alyssa; Kliman, Richard M; Mentewab, Ayalew; Fregien, Nathan; Loewe, Martha

    2017-01-01

    Names in programming are vital for understanding the meaning of code and big data. We define code2brain (C2B) interfaces as maps in compilers and brains between meaning and naming syntax, which help to understand executable code. While working toward an Evolvix syntax for general-purpose programming that makes accurate modeling easy for biologists, we observed how names affect C2B quality. To protect learning and coding investments, C2B interfaces require long-term backward compatibility and semantic reproducibility (accurate reproduction of computational meaning from coder-brains to reader-brains by code alone). Semantic reproducibility is often assumed until confusing synonyms degrade modeling in biology to deciphering exercises. We highlight empirical naming priorities from diverse individuals and roles of names in different modes of computing to show how naming easily becomes impossibly difficult. We present the Evolvix BEST (Brief, Explicit, Summarizing, Technical) Names concept for reducing naming priority conflicts, test it on a real challenge by naming subfolders for the Project Organization Stabilizing Tool system, and provide naming questionnaires designed to facilitate C2B debugging by improving names used as keywords in a stabilizing programming language. Our experiences inspired us to develop Evolvix using a flipped programming language design approach with some unexpected features and BEST Names at its core. © 2016 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.

  9. A Tool that Uses the SAS (registered trademark) PRX Functions to Fix Delimited Text Files

    DTIC Science & Technology

    2015-07-07

    Front-matter fragment: SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries; other brand and product names are trademarks of their respective companies. Distribution A: Approved for public release; distribution is unlimited. Case Number: 88ABW-2015-1635, 31 Mar 2015.

  10. Tools for Specification Validation and Understanding.

    DTIC Science & Technology

    1983-12-01

    Scanned-report fragment. Table of contents excerpt: ... Operations that Change the World; 2.2.5 Changing a Relation; 2.2.6 Changing the Type of an Object; 2.2.7 Creating and Destroying Objects; 2.2.8 Compound ... Text excerpt: "... verb, or it is a compound name (e.g. MoveShip or ReplaceLine) where the first element of the compound corresponds to the verb. If a name is not a ..."

  11. DR-Integrator: a new analytic tool for integrating DNA copy number and gene expression data.

    PubMed

    Salari, Keyan; Tibshirani, Robert; Pollack, Jonathan R

    2010-02-01

    DNA copy number alterations (CNA) frequently underlie gene expression changes by increasing or decreasing gene dosage. However, only a subset of genes with altered dosage exhibit concordant changes in gene expression. This subset is likely to be enriched for oncogenes and tumor suppressor genes, and can be identified by integrating these two layers of genome-scale data. We introduce DNA/RNA-Integrator (DR-Integrator), a statistical software tool to perform integrative analyses on paired DNA copy number and gene expression data. DR-Integrator identifies genes with significant correlations between DNA copy number and gene expression, and implements a supervised analysis that captures genes with significant alterations in both DNA copy number and gene expression between two sample classes. DR-Integrator is freely available for non-commercial use from the Pollack Lab at http://pollacklab.stanford.edu/ and can be downloaded as a plug-in application to Microsoft Excel and as a package for the R statistical computing environment. The R package is available under the name 'DRI' at http://cran.r-project.org/. An example analysis using DR-Integrator is included as supplemental material. Supplementary data are available at Bioinformatics online.
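
    The core integration step, correlating per-gene DNA copy number with expression across samples, can be sketched in a few lines. The example below computes a Pearson correlation per gene on simulated matrices with NumPy and SciPy; it is a generic illustration of the idea, not the DR-Integrator/DRI implementation.

      # Generic per-gene correlation of copy number and expression (illustration only;
      # not the DR-Integrator / DRI implementation).
      import numpy as np
      from scipy import stats

      n_genes, n_samples = 1000, 60
      rng = np.random.default_rng(1)
      copy_number = rng.normal(0, 0.5, size=(n_genes, n_samples))   # log2 ratios (toy data)
      expression = 0.8 * copy_number + rng.normal(0, 1, size=(n_genes, n_samples))

      results = []
      for g in range(n_genes):
          r, p = stats.pearsonr(copy_number[g], expression[g])
          results.append((g, r, p))

      # Genes whose expression tracks their DNA dosage most strongly
      top = sorted(results, key=lambda t: t[1], reverse=True)[:10]
      for g, r, p in top:
          print(f"gene {g}: r = {r:.2f}, p = {p:.1e}")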

  12. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

    PubMed

    Bowden, Jack; Jackson, Chris

    2016-10-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
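
    The 'weighing evidence' analogy corresponds to standard inverse-variance (fixed-effect) pooling, in which each study is weighted by 1/SE^2 and the pooled estimate is the weighted mean, i.e. the center of mass of the studies. The sketch below computes that pooled estimate for made-up study data; it is not the Meta-Analyser application itself.

      # Fixed-effect (inverse-variance) pooling, the 'center of mass' of the studies.
      # Toy numbers; not output of the Meta-Analyser web application.
      import numpy as np

      estimates = np.array([0.30, 0.10, 0.45, 0.20, 0.05])   # study effect estimates
      std_errors = np.array([0.12, 0.08, 0.20, 0.10, 0.15])  # their standard errors

      weights = 1.0 / std_errors ** 2                  # heavier 'mass' for precise studies
      pooled = np.sum(weights * estimates) / np.sum(weights)
      pooled_se = np.sqrt(1.0 / np.sum(weights))

      print(f"pooled estimate = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")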

  13. An integrated tool to support engineers for WMSDs risk assessment during the assembly line balancing.

    PubMed

    Di Benedetto, Raffaele; Fanti, Michele

    2012-01-01

    This paper presents an integrated approach to Line Balancing and Risk Assessment and a Software Tool named ErgoAnalysis that makes it easy to control the whole production process and produces a Risk Index for the actual work tasks in an Assembly Line. Assembly Line Balancing, or simply Line Balancing, is the problem of assigning operations to workstations along an assembly line, in such a way that the assignment is optimal in some sense. Assembly lines are characterized by production constraints and restrictions due to several aspects such as the nature of the product and the flow of orders. To be able to respond effectively to the needs of production, companies need to frequently change the workload and production models. Each manufacturing process might be quite different from another. To optimize very specific operations, assembly line balancing might utilize a number of methods, and the Engineer must consider ergonomic constraints in order to reduce the risk of WMSDs. Risk Assessment may prove very expensive because the Engineer must repeat it at every change. ErgoAnalysis can reduce cost and improve effectiveness in Risk Assessment during Line Balancing.
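
    As a reminder of what the underlying optimization looks like, the sketch below applies a simple greedy, precedence-respecting station-assignment heuristic to a toy task list under a cycle-time constraint. It is a textbook-style illustration with invented task times, not the ErgoAnalysis algorithm, and it ignores the ergonomic constraints the paper adds.

      # Greedy assembly-line balancing sketch (textbook-style; not ErgoAnalysis).
      # Tasks are assigned in precedence-feasible order to the current station while
      # it has room under the cycle time; otherwise a new station is opened.
      cycle_time = 10.0
      # task: (duration, list of predecessor tasks) -- invented numbers
      tasks = {
          "A": (4.0, []),
          "B": (3.0, ["A"]),
          "C": (5.0, ["A"]),
          "D": (2.0, ["B", "C"]),
          "E": (6.0, ["D"]),
      }

      stations = [[]]          # list of stations, each a list of task names
      station_load = [0.0]
      assigned = []

      while len(assigned) < len(tasks):
          # candidates whose predecessors are all assigned
          ready = [t for t in tasks if t not in assigned
                   and all(p in assigned for p in tasks[t][1])]
          # pick the longest ready task (largest-candidate rule)
          task = max(ready, key=lambda t: tasks[t][0])
          duration = tasks[task][0]
          if station_load[-1] + duration > cycle_time:
              stations.append([])          # open a new station
              station_load.append(0.0)
          stations[-1].append(task)
          station_load[-1] += duration
          assigned.append(task)

      for i, (s, load) in enumerate(zip(stations, station_load), 1):
          print(f"station {i}: {s} (load {load})")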

  14. phenoVein—A Tool for Leaf Vein Segmentation and Analysis

    PubMed Central

    Pflugfelder, Daniel; Huber, Gregor; Scharr, Hanno; Hülskamp, Martin; Koornneef, Maarten; Jahnke, Siegfried

    2015-01-01

    Precise measurements of leaf vein traits are an important aspect of plant phenotyping for ecological and genetic research. Here, we present a powerful and user-friendly image analysis tool named phenoVein. It is dedicated to the automated segmentation and analysis of leaf veins in images acquired with different imaging modalities (microscope, macrophotography, etc.), including options for comfortable manual correction. Advanced image filtering emphasizes veins against the background and compensates for local brightness inhomogeneities. The most important traits calculated are total vein length, vein density, piecewise vein lengths and widths, areole area, and skeleton graph statistics, like the number of branching or ending points. For the determination of vein widths, a model-based vein edge estimation approach has been implemented. Validation was performed for the measurement of vein length, vein width, and vein density of Arabidopsis (Arabidopsis thaliana), proving the reliability of phenoVein. We demonstrate the power of phenoVein on a set of previously described vein structure mutants of Arabidopsis (hemivenata, ondulata3, and asymmetric leaves2-101) compared with the wild-type accessions Columbia-0 and Landsberg erecta-0. phenoVein is freely available as open-source software. PMID:26468519
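
    The kind of trait extraction performed on a segmented vein image can be sketched with scikit-image: skeletonize the binary vein mask, then derive a total vein length and vein density from the skeleton. The code below is a generic illustration of that idea on a synthetic mask, with an assumed pixel size; it is not the phenoVein implementation.

      # Generic vein-trait sketch using scikit-image (not the phenoVein implementation).
      # Given a binary vein mask, skeletonize it and estimate total vein length and density.
      import numpy as np
      from skimage.morphology import skeletonize

      mask = np.zeros((200, 200), dtype=bool)    # toy 'segmented vein' image
      mask[100, 20:180] = True                   # one horizontal vein
      mask[40:160, 100] = True                   # one vertical vein

      skeleton = skeletonize(mask)

      pixel_size_mm = 0.01                               # assumed image resolution
      vein_length_mm = skeleton.sum() * pixel_size_mm    # crude length estimate (pixel count)
      leaf_area_mm2 = mask.size * pixel_size_mm ** 2     # here: whole image as 'leaf area'
      vein_density = vein_length_mm / leaf_area_mm2      # mm of vein per mm^2

      print(f"total vein length ~ {vein_length_mm:.2f} mm, density ~ {vein_density:.3f} mm/mm^2")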

  15. Envelope analysis of rotating machine vibrations in variable speed conditions: A comprehensive treatment

    NASA Astrophysics Data System (ADS)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2017-02-01

    Nowadays, the vibration analysis of rotating machine signals is a well-established methodology, rooted in powerful tools offered, in particular, by the theory of cyclostationary (CS) processes. Among them, the squared envelope spectrum (SES) is probably the most popular for detecting random CS components, which are typical symptoms, for instance, of rolling element bearing faults. Recent research has shifted towards extending existing CS tools, originally devised for constant speed conditions, to the case of variable speed conditions. Many of these works combine the SES with computed order tracking after some preprocessing steps. The principal object of this paper is to organize these dispersed research efforts into a structured, comprehensive framework. Three original features are furnished. First, a model of rotating machine signals is introduced which sheds light on the various components to be expected in the SES. Second, a critical comparison is made of three sophisticated methods, namely the improved synchronous average, the cepstrum prewhitening, and the generalized synchronous average, used for suppressing the deterministic part. Also, a general envelope enhancement methodology which combines the latter two techniques with a time-domain filtering operation is revisited. All theoretical findings are experimentally validated on simulated and real-world vibration signals.
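
    For reference, the squared envelope spectrum is commonly obtained by taking the analytic signal via the Hilbert transform, squaring its magnitude, and computing the spectrum of the result. The sketch below does this with SciPy for a toy constant-speed signal with an invented fault repetition rate; the paper's variable-speed processing chain additionally involves order tracking and removal of the deterministic part.

      # Squared envelope spectrum (SES) of a toy signal, via the Hilbert transform.
      # Generic constant-speed illustration; not the paper's variable-speed chain.
      import numpy as np
      from scipy.signal import hilbert

      fs = 20000.0                             # sampling rate [Hz]
      t = np.arange(0, 1.0, 1 / fs)
      carrier = np.sin(2 * np.pi * 3000 * t)   # resonance excited by impacts
      fault_rate = 87.0                        # toy 'bearing fault' repetition rate [Hz]
      impacts = (np.sin(2 * np.pi * fault_rate * t) > 0.99).astype(float)
      x = impacts * carrier + 0.1 * np.random.default_rng(0).normal(size=t.size)

      envelope_sq = np.abs(hilbert(x)) ** 2          # squared envelope
      envelope_sq -= envelope_sq.mean()              # remove DC before the spectrum
      ses = np.abs(np.fft.rfft(envelope_sq)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      band = (freqs > 10) & (freqs < 500)
      peak = freqs[band][np.argmax(ses[band])]
      print(f"dominant envelope frequency ~ {peak:.1f} Hz (expected near {fault_rate:.0f} Hz)")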

  16. Industry Application ECCS / LOCA Integrated Cladding/Emergency Core Cooling System Performance: Demonstration of LOTUS-Baseline Coupled Analysis of the South Texas Plant Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Szilard, Ronaldo; Epiney, Aaron

    Under the auspices of the DOE LWRS Program RISMC Industry Application ECCS/LOCA, INL has engaged staff from both South Texas Project (STP) and the Texas A&M University (TAMU) to produce a generic pressurized water reactor (PWR) model including reactor core, clad/fuel design and systems thermal hydraulics based on the South Texas Project (STP) nuclear power plant, a 4-Loop Westinghouse PWR. A RISMC toolkit, named LOCA Toolkit for the U.S. (LOTUS), has been developed for use in this generic PWR plant model to assess safety margins for the proposed NRC 10 CFR 50.46c rule, Emergency Core Cooling System (ECCS) performance during LOCA. This demonstration includes coupled analysis of core design, fuel design, thermal hydraulics and systems analysis, using advanced risk analysis tools and methods to investigate a wide range of results. Within this context, a multi-physics best estimate plus uncertainty (MPBEPU) methodology framework is proposed.

  17. Fatigue Analysis of Rotating Parts. A Case Study for a Belt Driven Pulley

    NASA Astrophysics Data System (ADS)

    Sandu, Ionela; Tabacu, Stefan; Ducu, Catalin

    2017-10-01

    The present study is focused on the life estimation of a rotating part as a component of an engine assembly, namely the pulley of the coolant pump. The goal of the paper is to develop a model, supported by numerical analysis, capable of predicting the lifetime of the part. Starting from the functional drawing, CAD model and technical specifications of the part, a numerical model was developed. MATLAB code was used to develop a tool to apply the load over the selected area. The numerical analysis was performed in two steps. The first simulation concerned the inertia relief due to rotational motion about the shaft (of the pump). Results from this simulation were saved, and the stress-strain state was used as the initial condition for the analysis with the load applied. The lifetime of a good part was estimated. A defect was then created in order to investigate its influence on the working requirements. It was found that there is little influence with respect to the prescribed lifetime.

  18. Image analysis technique as a tool to identify morphological changes in Trametes versicolor pellets according to exopolysaccharide or laccase production.

    PubMed

    Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B

    2014-02-01

    An image analysis technique was applied to identify morphological changes of pellets from the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests with four different experimental conditions were carried out. Two different culture media were used, namely yeast medium or Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass contents were determined for the batch assays, and the pellet morphology was assessed by image analysis techniques. The data obtained made it possible to establish which metabolic pathway was favoured under each experimental condition: laccase production in the Trametes defined medium, or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed for a better comprehension of the physiological phenomena with respect to the corresponding morphological stages of the pellets.

  19. Family-Based Benchmarking of Copy Number Variation Detection Software.

    PubMed

    Nutsua, Marcel Elie; Fischer, Annegret; Nebel, Almut; Hofmann, Sylvia; Schreiber, Stefan; Krawczak, Michael; Nothnagel, Michael

    2015-01-01

    The analysis of structural variants, in particular of copy-number variations (CNVs), has proven valuable in unraveling the genetic basis of human diseases. Hence, a large number of algorithms have been developed for the detection of CNVs in SNP array signal intensity data. Using the European and African HapMap trio data, we undertook a comparative evaluation of six commonly used CNV detection software tools, namely Affymetrix Power Tools (APT), QuantiSNP, PennCNV, GLAD, R-gada and VEGA, and assessed their level of pair-wise prediction concordance. The tool-specific CNV prediction accuracy was assessed in silico by way of intra-familial validation. Software tools differed greatly in terms of the number and length of the CNVs predicted as well as the number of markers included in a CNV. All software tools predicted substantially more deletions than duplications. Intra-familial validation revealed consistently low levels of prediction accuracy as measured by the proportion of validated CNVs (34-60%). Moreover, up to 20% of apparent family-based validations were found to be due to chance alone. Software using Hidden Markov models (HMM) showed a trend to predict fewer CNVs than segmentation-based algorithms albeit with greater validity. PennCNV yielded the highest prediction accuracy (60.9%). Finally, the pairwise concordance of CNV prediction was found to vary widely with the software tools involved. We recommend HMM-based software, in particular PennCNV, rather than segmentation-based algorithms when validity is the primary concern of CNV detection. QuantiSNP may be used as an additional tool to detect sets of CNVs not detectable by the other tools. Our study also reemphasizes the need for laboratory-based validation, such as qPCR, of CNVs predicted in silico.

  20. Charting taxonomic knowledge through ontologies and ranking algorithms

    NASA Astrophysics Data System (ADS)

    Huber, Robert; Klump, Jens

    2009-04-01

    Since the inception of geology as a modern science, paleontologists have described a large number of fossil species. This makes fossilized organisms an important tool in the study of stratigraphy and past environments. Since taxonomic classifications of organisms, and thereby their names, change frequently, the correct application of this tool requires taxonomic expertise in finding correct synonyms for a given species name. Much of this taxonomic information has already been published in journals and books, where it is compiled in carefully prepared synonymy lists. Because this information is scattered throughout the paleontological literature, it is difficult to find and sometimes not accessible. Also, taxonomic information in the literature is often difficult to interpret for non-taxonomists looking for taxonomic synonymies as part of their research. The highly formalized structure makes Open Nomenclature synonymy lists ideally suited for computer-aided identification of taxonomic synonyms. Because a synonymy list is a list of citations related to a taxon name, its bibliographic nature allows the application of bibliometric techniques to calculate the impact of synonymies and taxonomic concepts. TaxonRank is a ranking algorithm based on bibliometric analysis and Internet page ranking algorithms. TaxonRank uses published synonymy list data stored in TaxonConcept, a taxonomic information system. The basic ranking algorithm has been modified to include a measure of confidence on species identification based on the Open Nomenclature notation used in synonymy lists, as well as other synonymy-specific criteria. The results of our experiments show that the output of the proposed ranking algorithm gives a good estimate of the impact a published taxonomic concept has on the taxonomic opinions in the geological community. Also, our results show that treating taxonomic synonymies as part of an ontology is a way to record and manage taxonomic knowledge, and thus contribute to the preservation of our scientific heritage.
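
    TaxonRank is described as adapting Internet page-ranking ideas to the citation links between taxonomic concepts. The sketch below runs a plain PageRank-style power iteration on a toy citation graph to show the general mechanism; the graph, damping factor and node names are invented, and this is not the TaxonRank algorithm, which further weights links by Open Nomenclature confidence.

      # Plain PageRank-style power iteration on a toy citation graph between
      # taxonomic concepts (generic mechanism only; not the TaxonRank algorithm).
      import numpy as np

      nodes = ["concept_A", "concept_B", "concept_C", "concept_D"]
      # edges[i][j] = 1 means node i cites / points to node j (invented links)
      edges = np.array([
          [0, 1, 1, 0],
          [0, 0, 1, 0],
          [1, 0, 0, 1],
          [0, 0, 1, 0],
      ], dtype=float)

      out_degree = edges.sum(axis=1, keepdims=True)
      transition = edges / out_degree              # row-stochastic transition matrix

      damping, n = 0.85, len(nodes)
      rank = np.full(n, 1.0 / n)
      for _ in range(100):
          rank = (1 - damping) / n + damping * transition.T @ rank

      for name, r in sorted(zip(nodes, rank), key=lambda p: -p[1]):
          print(f"{name}: {r:.3f}")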

  1. SERS study of transformation of phenylalanine to tyrosine under particle irradiation

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Huang, Qing; Yao, Guohua; Ke, Zhigang; Zhang, Hong; Lu, Yilin

    2014-08-01

    Surface enhanced Raman scattering or spectroscopy (SERS) is a very powerful analytical tool which has been widely applied in many scientific research and application fields. It is therefore also very intriguing to introduce the SERS technique into radiobiological research, where in many cases only a few biomolecules are subjected to changes which can nevertheless lead to significant biological effects. The radiation-induced biochemical reactions are normally very sophisticated, with different substances produced in the system, so currently it is still a big challenge for SERS to analyze such a mixture system containing multiple analytes. In this context, this work aimed to establish and consolidate the feasibility of SERS as an effective tool in radiation chemistry, and for this purpose, we employed SERS as a sensitive probe of a known process, namely, the oxidation of phenylalanine (Phe) under particle irradiation, where the energetic particles were obtained from either plasma discharge or electron-beam. During the irradiation, three types of tyrosine (Tyr), namely, p-Tyr, m-Tyr and o-Tyr, were produced, and all these tyrosine isomers together with Phe could be identified and measured based on the SERS spectral analysis of the corresponding enhanced characteristic signals, namely, 1002 cm-1 for Phe, 1161 cm-1 for p-Tyr, 990 cm-1 for m-Tyr, and 970 cm-1 for o-Tyr, respectively. Estimates of the quantities of the different tyrosine isomers were also given and verified by a conventional method, high performance liquid chromatography (HPLC). As for the comparison of different ways of particle irradiation, our results also indicated that electron-beam irradiation was more efficient for converting Phe into Tyr than plasma discharge treatment, confirming the role of hydroxyl radicals in the Phe-to-Tyr conversion. Therefore, our work has not only demonstrated that SERS can be successfully applied in radiobiological studies, but also given insights into the mechanism of the interaction between particle radiation and biological systems.

  2. rSalvador: An R Package for the Fluctuation Experiment

    PubMed Central

    Zheng, Qi

    2017-01-01

    The past few years have seen a surge of novel applications of the Luria-Delbrück fluctuation assay protocol in bacterial research. Appropriate analysis of fluctuation assay data often requires computational methods that are unavailable in the popular web tool FALCOR. This paper introduces an R package named rSalvador to bring improvements to the field. The paper focuses on rSalvador’s capabilities to alleviate three kinds of problems found in recent investigations: (i) resorting to partial plating without properly accounting for the effects of partial plating; (ii) conducting attendant fitness assays without incorporating mutants’ relative fitness in subsequent data analysis; and (iii) comparing mutation rates using methods that are in general inapplicable to fluctuation assay data. In addition, the paper touches on rSalvador’s capabilities to estimate sample size and the difficulties related to parameter nonidentifiability. PMID:29084818
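
    As background on the kind of estimate such fluctuation-assay tools produce, the classical Luria-Delbruck p0 method infers the expected number of mutations per culture as m = -ln(p0) from the fraction p0 of cultures with no mutants, and an approximate mutation rate as m divided by the final cell count. The sketch below applies that textbook formula to made-up plate counts; it does not call or reproduce the rSalvador API, which implements likelihood-based methods and the corrections discussed above.

      # Classical Luria-Delbrueck p0 method on made-up fluctuation-assay counts.
      # Textbook formula only; this does not call or reproduce rSalvador.
      import math

      mutant_counts = [0, 0, 3, 0, 1, 0, 0, 7, 0, 2, 0, 0, 0, 5, 0, 0, 1, 0, 0, 0]
      final_cells_per_culture = 2.0e8          # Nt, assumed final population size

      p0 = mutant_counts.count(0) / len(mutant_counts)   # fraction of cultures with no mutants
      m = -math.log(p0)                                  # expected mutations per culture
      mutation_rate = m / final_cells_per_culture        # approximate mutations per cell per generation

      print(f"p0 = {p0:.2f}, m = {m:.3f}, mutation rate ~ {mutation_rate:.2e}")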

  3. Multivariate analysis of nystatin and metronidazole in a semi-solid matrix by means of diffuse reflectance NIR spectroscopy and PLS regression.

    PubMed

    Baratieri, Sabrina C; Barbosa, Juliana M; Freitas, Matheus P; Martins, José A

    2006-01-23

    A multivariate method of analysis of nystatin and metronidazole in a semi-solid matrix, based on diffuse reflectance NIR measurements and partial least squares regression, is reported. The product, a vaginal cream used in antifungal and antibacterial treatment, is usually analyzed quantitatively through microbiological tests (nystatin) and an HPLC technique (metronidazole), according to pharmacopeial procedures. However, near infrared spectroscopy has been demonstrated to be a valuable tool for content determination, given the rapidity and scope of the method. In the present study, it was successfully applied in the prediction of nystatin (even in low concentrations, ca. 0.3-0.4%, w/w, which is around 100,000 IU/5 g) and metronidazole contents, as demonstrated by some figures of merit, namely linearity, precision (mean and repeatability) and accuracy.
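
    The calibration step described here, regressing analyte content on diffuse-reflectance NIR spectra with partial least squares, can be sketched with scikit-learn. The spectra and reference values below are simulated stand-ins, and the number of latent variables is an arbitrary choice; this is not the study's calibration model.

      # PLS calibration of analyte content from NIR spectra (scikit-learn sketch;
      # simulated data, not the nystatin/metronidazole measurements).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)
      n_samples, n_wavelengths = 80, 400
      true_content = rng.uniform(0.2, 0.5, n_samples)             # % w/w, toy values
      pure_spectrum = np.exp(-((np.arange(n_wavelengths) - 150) / 40.0) ** 2)
      spectra = np.outer(true_content, pure_spectrum) + rng.normal(0, 0.01, (n_samples, n_wavelengths))

      X_train, X_test, y_train, y_test = train_test_split(spectra, true_content, random_state=0)
      pls = PLSRegression(n_components=3)
      pls.fit(X_train, y_train)
      print("R^2 on held-out samples:", round(pls.score(X_test, y_test), 3))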

  4. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research on network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  5. Development of efficient and cost-effective distributed hydrological modeling tool MWEasyDHM based on open-source MapWindow GIS

    NASA Astrophysics Data System (ADS)

    Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao

    2011-09-01

    Many regions of China are still threatened by frequent floods and water resource shortages. Consequently, the task of reproducing and predicting the hydrological processes in watersheds is hard but unavoidable for reducing the risks of damage and loss. Thus, it is necessary to develop an efficient and cost-effective hydrological tool in China, as many areas need to be modeled. Currently, developed hydrological tools such as Mike SHE and ArcSWAT (soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting such commercial tools in so large a developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out a simulation, thus lowering the efficiency of the modeling process. Besides, commercial hydrological models usually cannot be modified or improved to be suitable for some special hydrological conditions in China. Some other hydrological models are open source, but integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on open-source MapWindow GIS, the purpose of which is to establish the first open-source GIS-based distributed hydrological model tool in China by integrating modules of preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool. The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module contains parallel computation, real-time computation, and visualization. The postprocessing module includes model calibration and spatial visualization of model results using tabular form and spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible and promises further development of cost-effective applications in various watersheds.

  6. RefEx, a reference gene expression dataset as a web tool for the functional analysis of genes.

    PubMed

    Ono, Hiromasa; Ogasawara, Osamu; Okubo, Kosaku; Bono, Hidemasa

    2017-08-29

    Gene expression data are exponentially accumulating; thus, the functional annotation of such sequence data from metadata is urgently required. However, life scientists have difficulty utilizing the available data due to its sheer magnitude and complicated access. We have developed a web tool for browsing reference gene expression pattern of mammalian tissues and cell lines measured using different methods, which should facilitate the reuse of the precious data archived in several public databases. The web tool is called Reference Expression dataset (RefEx), and RefEx allows users to search by the gene name, various types of IDs, chromosomal regions in genetic maps, gene family based on InterPro, gene expression patterns, or biological categories based on Gene Ontology. RefEx also provides information about genes with tissue-specific expression, and the relative gene expression values are shown as choropleth maps on 3D human body images from BodyParts3D. Combined with the newly incorporated Functional Annotation of Mammals (FANTOM) dataset, RefEx provides insight regarding the functional interpretation of unfamiliar genes. RefEx is publicly available at http://refex.dbcls.jp/.

  7. Analysis of the comprehensibility of chemical hazard communication tools at the industrial workplace.

    PubMed

    Ta, Goh Choo; Mokhtar, Mazlin Bin; Mohd Mokhtar, Hj Anuar Bin; Ismail, Azmir Bin; Abu Yazid, Mohd Fadhil Bin Hj

    2010-01-01

    Chemical classification and labelling systems may be roughly similar from one country to another but there are significant differences too. In order to harmonize various chemical classification systems and ultimately provide consistent chemical hazard communication tools worldwide, the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) was endorsed by the United Nations Economic and Social Council (ECOSOC). Several countries, including Japan, Taiwan, Korea and Malaysia, are now in the process of implementing GHS. It is essential to ascertain the comprehensibility of chemical hazard communication tools that are described in the GHS documents, namely the chemical labels and Safety Data Sheets (SDS). Comprehensibility Testing (CT) was carried out with a mixed group of industrial workers in Malaysia (n=150) and factors that influence the comprehensibility were analysed using one-way ANOVA. The ability of the respondents to retrieve information from the SDS was also tested in this study. The findings show that almost all the GHS pictograms meet the ISO comprehension criteria and it is concluded that the underlying core elements that enhance comprehension of GHS pictograms and which are also essential in developing competent persons in the use of SDS are training and education.

  8. RefEx, a reference gene expression dataset as a web tool for the functional analysis of genes

    PubMed Central

    Ono, Hiromasa; Ogasawara, Osamu; Okubo, Kosaku; Bono, Hidemasa

    2017-01-01

    Gene expression data are exponentially accumulating; thus, the functional annotation of such sequence data from metadata is urgently required. However, life scientists have difficulty utilizing the available data due to its sheer magnitude and complicated access. We have developed a web tool for browsing reference gene expression pattern of mammalian tissues and cell lines measured using different methods, which should facilitate the reuse of the precious data archived in several public databases. The web tool is called Reference Expression dataset (RefEx), and RefEx allows users to search by the gene name, various types of IDs, chromosomal regions in genetic maps, gene family based on InterPro, gene expression patterns, or biological categories based on Gene Ontology. RefEx also provides information about genes with tissue-specific expression, and the relative gene expression values are shown as choropleth maps on 3D human body images from BodyParts3D. Combined with the newly incorporated Functional Annotation of Mammals (FANTOM) dataset, RefEx provides insight regarding the functional interpretation of unfamiliar genes. RefEx is publicly available at http://refex.dbcls.jp/. PMID:28850115

  9. Feasibility of Prostate Cancer Diagnosis by Transrectal Photo-acoustic Imaging

    DTIC Science & Technology

    2013-03-01

    prostate. Transrectal ultrasound has been used as a guiding tool to direct tissue needle biopsy for prostate cancer diagnosis; it cannot be utilized for...tool currently available for prostate cancer detection; needle biopsy is the current practice for diagnosis of the disease, aiming randomly in the...developing an integrated approach between ultrasound and optical tomography, namely, transrectal ultrasound - guided diffuse optical tomography (TRUS

  10. A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics

    DTIC Science & Technology

    2011-01-01

    A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics. Seth S. Kessler, Eric B. Flynn, Christopher T. ... technology more accessible, and commercially practical. 1. INTRODUCTION Currently successful laboratory non-destructive testing and monitoring ...

  11. Healthcare decision-tools a growing Web trend: three-pronged public relations campaign heightens presence, recognition for online healthcare information provider.

    PubMed

    2006-01-01

    Schwartz Communications, LLC, executes a successful PR campaign to position Subimo, a provider of online healthcare decision tools, as a leader in the industry that touts names such as WebMD.com and HealthGrades.com. Through a three-pronged media relations strategy, Schwartz and Subimo together branded the company as an industry thought-leader.

  12. Burn Injury Assessment Tool with Morphable 3D Human Body Models

    DTIC Science & Technology

    2017-04-21

    waist, arms and legs measurements) as stored in most anthropometry databases. To improve on burn area estimations, the burn tool will allow the user to...different algorithm for morphing that relies on searching of an extensive anthropometric database, which is created from thousands of randomly...interpolation methods are required. Develop Patient Database: Patient data entered (name, gender, age, anthropometric measurements), collected (photographic

  13. ThinTool: a spreadsheet model to evaluate fuel reduction thinning cost, net energy output, and nutrient impacts

    Treesearch

    Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek

    2017-01-01

    We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...

  14. Double Dutch: A Tool for Designing Combinatorial Libraries of Biological Systems.

    PubMed

    Roehner, Nicholas; Young, Eric M; Voigt, Christopher A; Gordon, D Benjamin; Densmore, Douglas

    2016-06-17

    Recently, semirational approaches that rely on combinatorial assembly of characterized DNA components have been used to engineer biosynthetic pathways. In practice, however, it is not practical to assemble and test millions of pathway variants in order to elucidate how different DNA components affect the behavior of a pathway. To address this challenge, we apply a rigorous mathematical approach known as design of experiments (DOE) that can be used to construct empirical models of system behavior without testing all variants. To support this approach, we have developed a tool named Double Dutch, which uses a formal grammar and heuristic algorithms to automate the process of DOE library design. Compared to designing by hand, Double Dutch enables users to more efficiently and scalably design libraries of pathway variants that can be used in a DOE framework and uniquely provides a means to flexibly balance design considerations of statistical analysis, construction cost, and risk of homologous recombination, thereby demonstrating the utility of automating decision making when faced with complex design trade-offs.

  15. Techniques for Soundscape Retrieval and Synthesis

    NASA Astrophysics Data System (ADS)

    Mechtley, Brandon Michael

    The study of acoustic ecology is concerned with the manner in which life interacts with its environment as mediated through sound. As such, a central focus is that of the soundscape: the acoustic environment as perceived by a listener. This dissertation examines the application of several computational tools in the realms of digital signal processing, multimedia information retrieval, and computer music synthesis to the analysis of the soundscape. Namely, these tools include a) an open source software library, Sirens, which can be used for the segmentation of long environmental field recordings into individual sonic events and compare these events in terms of acoustic content, b) a graph-based retrieval system that can use these measures of acoustic similarity and measures of semantic similarity using the lexical database WordNet to perform both text-based retrieval and automatic annotation of environmental sounds, and c) new techniques for the dynamic, realtime parametric morphing of multiple field recordings, informed by the geographic paths along which they were recorded.

  16. Use of an Electronic Tongue System and Fuzzy Logic to Analyze Water Samples

    NASA Astrophysics Data System (ADS)

    Braga, Guilherme S.; Paterno, Leonardo G.; Fonseca, Fernando J.

    2009-05-01

    An electronic tongue (ET) system incorporating 8 chemical sensors was used in combination with two pattern recognition tools, namely principal component analysis (PCA) and Fuzzy logic, for discriminating/classifying water samples from different sources (tap, distilled and three brands of mineral water). The Fuzzy program exhibited a higher accuracy than the PCA and allowed the ET to correctly classify 4 of the 5 types of water. The exception was one brand of mineral water, which was sometimes misclassified as tap water. The PCA, on the other hand, grouped water samples into three clusters: one with the distilled water; a second with tap water and one brand of mineral water; and a third with the other two brands of mineral water. Samples in the second and third clusters could not be distinguished. Nevertheless, close grouping between repeated tests indicated that the ET system response is reproducible. The potential use of Fuzzy logic as the data processing tool in combination with an electronic tongue system is discussed.
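
    A minimal sketch of the PCA step described above, assuming invented 8-sensor readings for the five water types (the data and any separation it shows are illustrative only):

      # Hypothetical sketch: project 8-sensor electronic-tongue readings onto two
      # principal components and inspect how the water types group.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      # 5 water types x 10 repeated measurements x 8 sensors (invented readings).
      labels = np.repeat(["tap", "distilled", "mineral_A", "mineral_B", "mineral_C"], 10)
      readings = rng.normal(size=(50, 8)) + np.repeat(np.arange(5), 10)[:, None] * 0.5

      scores = PCA(n_components=2).fit_transform(readings)
      for water in np.unique(labels):
          centroid = scores[labels == water].mean(axis=0)
          print(water, centroid.round(2))
      # Overlapping centroids would correspond to the clusters that PCA could not
      # separate (e.g., tap water vs. one mineral-water brand in the abstract).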

  17. BEASTling: A software tool for linguistic phylogenetics using BEAST 2

    PubMed Central

    Forkel, Robert; Kaiping, Gereon A.; Atkinson, Quentin D.

    2017-01-01

    We present a new open source software tool called BEASTling, designed to simplify the preparation of Bayesian phylogenetic analyses of linguistic data using the BEAST 2 platform. BEASTling transforms comparatively short and human-readable configuration files into the XML files used by BEAST to specify analyses. By taking advantage of Creative Commons-licensed data from the Glottolog language catalog, BEASTling allows the user to conveniently filter datasets using names for recognised language families, to impose monophyly constraints so that inferred language trees are backward compatible with Glottolog classifications, or to assign geographic location data to languages for phylogeographic analyses. Support for the emerging cross-linguistic linked data format (CLDF) permits easy incorporation of data published in cross-linguistic linked databases into analyses. BEASTling is intended to make the power of Bayesian analysis more accessible to historical linguists without strong programming backgrounds, in the hopes of encouraging communication and collaboration between those developing computational models of language evolution (who are typically not linguists) and relevant domain experts. PMID:28796784

  18. BEASTling: A software tool for linguistic phylogenetics using BEAST 2.

    PubMed

    Maurits, Luke; Forkel, Robert; Kaiping, Gereon A; Atkinson, Quentin D

    2017-01-01

    We present a new open source software tool called BEASTling, designed to simplify the preparation of Bayesian phylogenetic analyses of linguistic data using the BEAST 2 platform. BEASTling transforms comparatively short and human-readable configuration files into the XML files used by BEAST to specify analyses. By taking advantage of Creative Commons-licensed data from the Glottolog language catalog, BEASTling allows the user to conveniently filter datasets using names for recognised language families, to impose monophyly constraints so that inferred language trees are backward compatible with Glottolog classifications, or to assign geographic location data to languages for phylogeographic analyses. Support for the emerging cross-linguistic linked data format (CLDF) permits easy incorporation of data published in cross-linguistic linked databases into analyses. BEASTling is intended to make the power of Bayesian analysis more accessible to historical linguists without strong programming backgrounds, in the hopes of encouraging communication and collaboration between those developing computational models of language evolution (who are typically not linguists) and relevant domain experts.

  19. New risk metrics and mathematical tools for risk analysis: Current and future challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skandamis, Panagiotis N., E-mail: pskan@aua.gr; Andritsos, Nikolaos, E-mail: pskan@aua.gr; Psomas, Antonios, E-mail: pskan@aua.gr

    The current status of the food safety supply worldwide has led the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) to establish Risk Analysis as the single framework for building food safety control programs. A series of guidelines and reports that detail the various steps in Risk Analysis, namely Risk Management, Risk Assessment and Risk Communication, is available. The Risk Analysis approach enables integration between operational food management systems, such as Hazard Analysis Critical Control Points, public health and governmental decisions. To do that, a series of new Risk Metrics has been established as follows: i) the Appropriate Level of Protection (ALOP), which indicates the maximum number of illnesses in a population per annum, is defined by quantitative risk assessments and used to establish ii) the Food Safety Objective (FSO), which sets the maximum frequency and/or concentration of a hazard in a food at the time of consumption that provides or contributes to the ALOP. Given that ALOP is rather a metric of the tolerable public health burden (it addresses the total 'failure' that may be handled at a national level), it is difficult to translate into control measures applied at the manufacturing level. Thus, a series of specific objectives and criteria for the performance of individual processes and products has been established, all of them assisting in the achievement of the FSO and hence the ALOP. In order to achieve the FSO, tools quantifying the effect of processes and intrinsic properties of foods on survival and growth of pathogens are essential. In this context, predictive microbiology and risk assessment have offered important assistance to Food Safety Management. Predictive modelling is the basis of exposure assessment, and the development of stochastic and kinetic models, which are also available in the form of web-based applications (e.g., ComBase and Microbial Responses Viewer) or introduced into user-friendly software (e.g., Seafood Spoilage Predictor), has advanced the use of information systems in food safety management. Such tools are updateable with new food-pathogen-specific models containing cardinal parameters and multiple dependent variables, including plate counts, concentration of metabolic products, or even expression levels of certain genes. These tools may further serve as decision-support tools which may assist in product logistics, based on their scientifically based and "momentary" expressed spoilage and safety level.

  20. New risk metrics and mathematical tools for risk analysis: Current and future challenges

    NASA Astrophysics Data System (ADS)

    Skandamis, Panagiotis N.; Andritsos, Nikolaos; Psomas, Antonios; Paramythiotis, Spyridon

    2015-01-01

    The current status of the food safety supply worldwide has led the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) to establish Risk Analysis as the single framework for building food safety control programs. A series of guidelines and reports that detail the various steps in Risk Analysis, namely Risk Management, Risk Assessment and Risk Communication, is available. The Risk Analysis approach enables integration between operational food management systems, such as Hazard Analysis Critical Control Points, public health and governmental decisions. To do that, a series of new Risk Metrics has been established as follows: i) the Appropriate Level of Protection (ALOP), which indicates the maximum number of illnesses in a population per annum, is defined by quantitative risk assessments and used to establish ii) the Food Safety Objective (FSO), which sets the maximum frequency and/or concentration of a hazard in a food at the time of consumption that provides or contributes to the ALOP. Given that ALOP is rather a metric of the tolerable public health burden (it addresses the total 'failure' that may be handled at a national level), it is difficult to translate into control measures applied at the manufacturing level. Thus, a series of specific objectives and criteria for the performance of individual processes and products has been established, all of them assisting in the achievement of the FSO and hence the ALOP. In order to achieve the FSO, tools quantifying the effect of processes and intrinsic properties of foods on survival and growth of pathogens are essential. In this context, predictive microbiology and risk assessment have offered important assistance to Food Safety Management. Predictive modelling is the basis of exposure assessment, and the development of stochastic and kinetic models, which are also available in the form of web-based applications (e.g., ComBase and Microbial Responses Viewer) or introduced into user-friendly software (e.g., Seafood Spoilage Predictor), has advanced the use of information systems in food safety management. Such tools are updateable with new food-pathogen-specific models containing cardinal parameters and multiple dependent variables, including plate counts, concentration of metabolic products, or even expression levels of certain genes. These tools may further serve as decision-support tools which may assist in product logistics, based on their scientifically based and "momentary" expressed spoilage and safety level.
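
    The link between these metrics is commonly expressed through the ICMSF-style relation H0 - ΣR + ΣI ≤ FSO, where H0 is the initial hazard level, ΣR the total reduction and ΣI the total increase, all in log10 cfu/g. A small worked check with invented numbers:

      # Illustrative check of the ICMSF-style relation H0 - sum(R) + sum(I) <= FSO,
      # with all quantities in log10 cfu/g; the numbers are invented.
      H0  = 3.0          # initial contamination level
      R   = [5.0]        # reductions (e.g., a 5-log heat treatment)
      I   = [1.0, 0.5]   # increases (e.g., growth during storage, recontamination)
      FSO = -2.0         # maximum level tolerated at the time of consumption

      level_at_consumption = H0 - sum(R) + sum(I)   # 3.0 - 5.0 + 1.5 = -0.5
      print(level_at_consumption,
            "meets the FSO" if level_at_consumption <= FSO else "exceeds the FSO")
      # Here -0.5 > -2.0, so the illustrated process would not meet the FSO and a
      # larger reduction step (or a lower initial level) would be required.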

  1. Peeling Back the Layers

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Mars Exploration Rover Spirit took this panoramic camera image of the rock target named 'Mazatzal' on sol 77 (March 22, 2004). It is a close-up look at the rock face and the targets that will be brushed and ground by the rock abrasion tool in upcoming sols.

    Mazatzal, like most rocks on Earth and Mars, has layers of material near its surface that provide clues about the history of the rock. Scientists believe that the top layer of Mazatzal is actually a coating of dust and possibly even salts. Under this light coating may be a more solid portion of the rock that has been chemically altered by weathering. Past this layer is the unaltered rock, which may give scientists the best information about how Mazatzal was formed.

    Because each layer reveals information about the formation and subsequent history of Mazatzal, it is important that scientists get a look at each of them. For this reason, they have developed a multi-part strategy to use the rock abrasion tool to systematically peel back Mazatzal's layers and analyze what's underneath with the rover's microscopic imager, and its Moessbauer and alpha particle X-ray spectrometers.

    The strategy began on sol 77 when scientists used the microscopic imager to get a closer look at targets on Mazatzal named 'New York,' 'Illinois' and 'Arizona.' These rock areas were targeted because they posed the best opportunity for successfully using the rock abrasion tool; Arizona also allowed for a close-up look at a range of tones. On sol 78, Spirit's rock abrasion tool will do a light brushing on the Illinois target to preserve some of the surface layers. Then, a brushing of the New York target should remove the top coating of any dust and salts and perhaps reveal the chemically altered rock underneath. Finally, on sol 79, the rock abrasion tool will be commanded to grind into the New York target, which will give scientists the best chance of observing Mazatzal's interior.

    The Mazatzal targets were named after the home states of some of the rock abrasion tool and science team members.

  2. An alternative approach based on artificial neural networks to study controlled drug release.

    PubMed

    Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C

    2004-02-01

    An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association
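
    A rough sketch of the underlying idea, assuming a simplified Higuchi-type expression to generate synthetic release data and a small scikit-learn network in place of the authors' actual models and parameter ranges:

      # Hypothetical sketch: train a small neural network to map formulation
      # properties to the amount released at a fixed time. The Higuchi-type data
      # generator and all numbers are illustrative, not the authors' models.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      D  = rng.uniform(1e-7, 1e-6, 200)    # diffusion coefficient
      Cs = rng.uniform(0.1, 1.0, 200)      # saturation solubility
      A  = rng.uniform(1.0, 5.0, 200)      # drug loading
      t  = 3600.0                          # fixed observation time

      Q = np.sqrt(D * Cs * (2 * A - Cs) * t)   # simplified Higuchi-type release
      X = np.column_stack([D, Cs, A])

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 16),
                                         max_iter=5000, random_state=0))
      model.fit(X[:150], Q[:150])
      print("held-out R^2:", round(model.score(X[150:], Q[150:]), 3))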

  3. Performance analysis of SiGe double-gate N-MOSFET

    NASA Astrophysics Data System (ADS)

    Singh, A.; Kapoor, D.; Sharma, R.

    2017-04-01

    The major purpose of this paper is to find an alternative configuration that not only minimizes the limitations of single-gate (SG) MOSFETs but also provides a better replacement for future technology. In this paper, the electrical characteristics of a SiGe double-gate N-MOSFET are demonstrated and compared with those of a Si double-gate N-MOSFET; the characteristics of the Si double-gate N-MOSFET are in turn compared with those of a Si single-gate N-MOSFET. The simulations are carried out for the device at different operational voltages using the Cogenda Visual TCAD tool. Moreover, we have designed its structure and studied both the Id-Vg characteristics for different voltages, namely 0.05, 0.1, 0.5, 0.8, 1 and 1.5 V, and the Id-Vd characteristics for different voltages, namely 0.1, 0.5, 1 and 1.5 V, at work functions of 4.5, 4.6 and 4.8 eV for this structure. The performance parameters investigated in this paper are threshold voltage, DIBL, subthreshold slope, GIDL, volume inversion and MMCR.
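
    As an illustration of how two of these figures of merit can be extracted from Id-Vg data, the sketch below uses a synthetic exponential subthreshold model in place of the TCAD output; the device numbers are invented:

      # Hypothetical sketch: extract subthreshold slope (SS) and DIBL from Id-Vg
      # curves. The synthetic curves stand in for simulator output.
      import numpy as np

      vg = np.linspace(0.0, 1.0, 101)

      def id_curve(vth, ss_mv_per_dec=70.0, i_off=1e-12):
          # Simple exponential subthreshold model: one decade of Id per SS mV of Vg.
          return i_off * 10.0 ** ((vg - vth) * 1000.0 / ss_mv_per_dec)

      id_low_vd  = id_curve(vth=0.35)   # e.g., Vd = 0.05 V (illustrative)
      id_high_vd = id_curve(vth=0.30)   # e.g., Vd = 1.0 V  (illustrative)

      # Subthreshold slope: mV of gate swing per decade of drain current.
      ss = 1000.0 * np.gradient(vg, np.log10(id_low_vd)).mean()

      # DIBL: threshold-voltage shift per volt of drain bias, using a constant-
      # current criterion (1e-9 A) to locate Vth at each drain voltage.
      vth_low  = vg[np.argmax(id_low_vd  >= 1e-9)]
      vth_high = vg[np.argmax(id_high_vd >= 1e-9)]
      dibl = (vth_low - vth_high) / (1.0 - 0.05) * 1000.0   # mV/V

      print(f"SS ~ {ss:.1f} mV/dec, DIBL ~ {dibl:.1f} mV/V")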

  4. Investigation of Neural-Immune Profiling, Transcriptomics and Proteomics and Clinical Tools in Assessing Navy Dolphin Health

    DTIC Science & Technology

    2007-12-21

    Evaluation of brominated flame retardants in relationship to bottlenose dolphin immunity. The Toxicologist (Supplement to Toxicological Sciences) 2006; 90(S-1)... Mystic Aquarium & Institute for Exploration, 55 Coogan Blvd., Mystic, CT 06355.

  5. Nominate Today | NREL

    Science.gov Websites

    energy technologies, analytical tools, and financing to guide their organizations and communities in...candidates and/or organizations.

  6. Phrase Mining of Textual Data to Analyze Extracellular Matrix Protein Patterns Across Cardiovascular Disease.

    PubMed

    Liem, David Alexandre; Murali, Sanjana; Sigdel, Dibakar; Shi, Yu; Wang, Xuan; Shen, Jiaming; Choi, Howard; Caufield, J Harry; Wang, Wei; Ping, Peipei; Han, Jiawei

    2018-05-18

    Extracellular matrix (ECM) proteins have been shown to play important roles regulating multiple biological processes in an array of organ systems, including the cardiovascular system. By using a novel bioinformatics text-mining tool, we studied six categories of cardiovascular disease (CVD), namely ischemic heart disease (IHD), cardiomyopathies (CM), cerebrovascular accident (CVA), congenital heart disease (CHD), arrhythmias (ARR), and valve disease (VD), anticipating novel ECM protein-disease and protein-protein relationships hidden within vast quantities of textual data. We conducted a phrase-mining analysis, delineating the relationships of 709 ECM proteins with the six groups of CVDs reported in 1,099,254 abstracts. The technology pipeline known as Context-aware Semantic Online Analytical Processing (CaseOLAP) was applied to semantically rank the association of proteins to each and all six CVDs, performing analyses to quantify each protein-disease relationship. We performed principal component analysis and hierarchical clustering of the data, where each protein is visualized as a six dimensional vector. We found that ECM proteins display variable degrees of association with the six CVDs; certain CVDs share groups of associated proteins whereas others have divergent protein associations. We identified 82 ECM proteins sharing associations with all six CVDs. Our bioinformatics analysis ascribed distinct ECM pathways (via Reactome) from this subset of proteins, namely insulin-like growth factor regulation and interleukin-4 and interleukin-13 signaling, suggesting their contribution to the pathogenesis of all six CVDs. Finally, we performed hierarchical clustering analysis and identified protein clusters associated with a targeted CVD; analyses revealed unexpected insights underlying ECM-pathogenesis of CVDs.
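
    A minimal sketch of the hierarchical clustering step, assuming invented six-dimensional association scores in place of the CaseOLAP output:

      # Hypothetical sketch: cluster proteins represented as six-dimensional vectors
      # of association scores, one per CVD category (scores invented).
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(2)
      proteins = [f"ECM_protein_{i}" for i in range(20)]    # hypothetical names
      scores = rng.random((20, 6))    # columns: IHD, CM, CVA, CHD, ARR, VD

      tree = linkage(scores, method="ward")
      cluster_ids = fcluster(tree, t=3, criterion="maxclust")
      for name, cid in zip(proteins, cluster_ids):
          print(cid, name)
      # Proteins sharing a cluster have similar association profiles across the six
      # disease categories, the kind of grouping used to nominate shared pathways.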

  7. A Dual-Beam Irradiation Facility for a Novel Hybrid Cancer Therapy

    NASA Astrophysics Data System (ADS)

    Sabchevski, Svilen Petrov; Idehara, Toshitaka; Ishiyama, Shintaro; Miyoshi, Norio; Tatsukawa, Toshiaki

    2013-01-01

    In this paper we present the main ideas and discuss both the feasibility and the conceptual design of a novel hybrid technique and equipment for an experimental cancer therapy based on the simultaneous and/or sequential application of two beams, namely a beam of neutrons and a CW (continuous wave) or intermittent sub-terahertz wave beam produced by a gyrotron for treatment of cancerous tumors. The main simulation tools for the development of the computer aided design (CAD) of the prospective experimental facility for clinical trials and study of such new medical technology are briefly reviewed. Some tasks for a further continuation of this feasibility analysis are formulated as well.

  8. Creating User-Friendly Tools for Data Analysis and Visualization in K-12 Classrooms: A Fortran Dinosaur Meets Generation Y

    NASA Technical Reports Server (NTRS)

    Chambers, L. H.; Chaudhury, S.; Page, M. T.; Lankey, A. J.; Doughty, J.; Kern, Steven; Rogerson, Tina M.

    2008-01-01

    During the summer of 2007, as part of the second year of a NASA-funded project in partnership with Christopher Newport University called SPHERE (Students as Professionals Helping Educators Research the Earth), a group of undergraduate students spent 8 weeks in a research internship at or near NASA Langley Research Center. Three students from this group formed the Clouds group along with a NASA mentor (Chambers), and the brief addition of a local high school student fulfilling a mentorship requirement. The Clouds group was given the task of exploring and analyzing ground-based cloud observations obtained by K-12 students as part of the Students' Cloud Observations On-Line (S'COOL) Project, and the corresponding satellite data. This project began in 1997. The primary analysis tools developed for it were in FORTRAN, a computer language none of the students were familiar with. While they persevered through computer challenges and picky syntax, it eventually became obvious that this was not the most fruitful approach for a project aimed at motivating K-12 students to do their own data analysis. Thus, about halfway through the summer the group shifted its focus to more modern data analysis and visualization tools, namely spreadsheets and Google(tm) Earth. The result of their efforts, so far, is two different Excel spreadsheets and a Google(tm) Earth file. The spreadsheets are set up to allow participating classrooms to paste in a particular dataset of interest, using the standard S'COOL format, and easily perform a variety of analyses and comparisons of the ground cloud observation reports and their correspondence with the satellite data. This includes summarizing cloud occurrence and cloud cover statistics, and comparing cloud cover measurements from the two points of view. A visual classification tool is also provided to compare the cloud levels reported from the two viewpoints. This provides a statistical counterpart to the existing S'COOL data visualization tool, which is used for individual ground-to-satellite correspondences. The Google(tm) Earth file contains a set of placemarks and ground overlays to show participating students the area around their school that the satellite is measuring. This approach will be automated and made interactive by the S'COOL database expert and will also be used to help refine the latitude/longitude location of the participating schools. Once complete, these new data analysis tools will be posted on the S'COOL website for use by the project participants in schools around the US and the world.

  9. Longitudinal adoption rates of complex decision support tools in primary care.

    PubMed

    McCullagh, Lauren; Mann, Devin; Rosen, Lisa; Kannry, Joseph; McGinn, Thomas

    2014-12-01

    Translating research findings into practice promises to standardise care. Translation includes the integration of evidence-based guidelines at the point of care, discerning the best methods to disseminate research findings, and models to sustain the implementation of best practices. By applying usability testing to clinical decision support (CDS) design, overall adoption rates of 60% can be realised. What has not been examined is how long adoption rates are sustained and the characteristics associated with long-term use. We conducted a secondary analysis to decipher the factors impacting sustained use of CDS tools. This study was a secondary data analysis from a clinical trial conducted at an academic institution in New York City. Study data were drawn from patients' electronic health records (EHR). The trial tested the implementation of an integrated clinical prediction rule (iCPR) into the EHR. The primary outcome variable was acceptance of the iCPR tool. iCPR tool completion and iCPR smartset completion were additional outcome variables of interest. The secondary aim was to examine user characteristics associated with iCPR tool use in later time periods. Characteristics of interest included age, resident year, use of electronic health records (yes/no) and use of best practice alerts (BPA) (yes/no). Generalised linear mixed models (GLiMM) were used to compare iCPR use over time for each outcome of interest: namely, iCPR acceptance, iCPR completion and iCPR smartset completion. GLiMM was also used to examine resident characteristics associated with iCPR tool use in later time periods; specifically, intermediate and long-term (i.e., 90+ days). The tool was accepted, on average, 82.18% of the time in the first 90 days (short-term period). Use decreased to 56.07% and 45.61% in the intermediate and long-term time periods, respectively. There was a significant association between iCPR tool completion and time periods (p<0.0001). There was no significant difference in iCPR tool completion between resident encounters in the intermediate and long-term periods (p=0.6627). There was a significant association between iCPR smartset completion and time periods (p<0.0021). There were no significant associations between iCPR smartset completion and any of the four predictors of interest. We examined the frequencies of components of the iCPR tool being accepted over time by individual clinicians. Rates of adoption of the different components of the tool decreased substantially over time. The data suggest that over time and with prolonged exposure to CDS tools, providers are less likely to utilise the tool. It is not clear if it is fatigue with the CDS tool, acquired knowledge of the clinical prediction rule, or gained clinical experience and gestalt that is influencing adoption rates. Further analysis of individual adoption rates over time and the impact they have on clinical outcomes should be conducted.

  10. Tool use disorders after left brain damage.

    PubMed

    Baumard, Josselin; Osiurak, François; Lesourd, Mathieu; Le Gall, Didier

    2014-01-01

    In this paper we review studies that investigated tool use disorders in left-brain damaged (LBD) patients over the last 30 years. Four tasks are classically used in the field of apraxia: Pantomime of tool use, single tool use, real tool use and mechanical problem solving. Our aim was to address two issues, namely, (1) the role of mechanical knowledge in real tool use and (2) the cognitive mechanisms underlying pantomime of tool use, a task widely employed by clinicians and researchers. To do so, we extracted data from 36 papers and computed the difference between healthy subjects and LBD patients. On the whole, pantomime of tool use is the most difficult task and real tool use is the easiest one. Moreover, associations seem to appear between pantomime of tool use, real tool use and mechanical problem solving. These results suggest that the loss of mechanical knowledge is critical in LBD patients, even if all of those tasks (and particularly pantomime of tool use) might put differential demands on semantic memory and working memory.

  11. Tool use disorders after left brain damage

    PubMed Central

    Baumard, Josselin; Osiurak, François; Lesourd, Mathieu; Le Gall, Didier

    2014-01-01

    In this paper we review studies that investigated tool use disorders in left-brain damaged (LBD) patients over the last 30 years. Four tasks are classically used in the field of apraxia: Pantomime of tool use, single tool use, real tool use and mechanical problem solving. Our aim was to address two issues, namely, (1) the role of mechanical knowledge in real tool use and (2) the cognitive mechanisms underlying pantomime of tool use, a task widely employed by clinicians and researchers. To do so, we extracted data from 36 papers and computed the difference between healthy subjects and LBD patients. On the whole, pantomime of tool use is the most difficult task and real tool use is the easiest one. Moreover, associations seem to appear between pantomime of tool use, real tool use and mechanical problem solving. These results suggest that the loss of mechanical knowledge is critical in LBD patients, even if all of those tasks (and particularly pantomime of tool use) might put differential demands on semantic memory and working memory. PMID:24904487

  12. 77 FR 39208 - Information Collection: Ride-Along Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-02

    ... service, and provide LE&I personnel a recruitment tool. A rider shall complete two forms in order to participate. Form FS-5300-33 asks for the participant's name, address, social security number, driver's...

  13. Naming and categorizing objects: task differences modulate the polarity of semantic effects in the picture-word interference paradigm.

    PubMed

    Hantsch, Ansgar; Jescheniak, Jörg D; Mädebach, Andreas

    2012-07-01

    The picture-word interference paradigm is a prominent tool for studying lexical retrieval during speech production. When participants name the pictures, interference from semantically related distractor words has regularly been shown. By contrast, when participants categorize the pictures, facilitation from semantically related distractors has typically been found. In the extant studies, however, differences in the task instructions (naming vs. categorizing) were confounded with the response level: While responses in naming were typically located at the basic level (e.g., "dog"), responses were located at the superordinate level in categorization (e.g., "animal"). The present study avoided this confound by having participants respond at the basic level in both naming and categorization, using the same pictures, distractors, and verbal responses. Our findings confirm the polarity reversal of the semantic effects--that is, semantic interference in naming, and semantic facilitation in categorization. These findings show that the polarity reversal of the semantic effect is indeed due to the different tasks and is not an artifact of the different response levels used in previous studies. Implications for current models of language production are discussed.

  14. Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV.

    PubMed

    Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa; Bono, Hidemasa

    2012-03-01

    In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but such static content cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability.

  15. Tutorial videos of bioinformatics resources: online distribution trial in Japan named TogoTV

    PubMed Central

    Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa

    2012-01-01

    In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but such static content cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability. PMID:21803786

  16. Toolpack mathematical software development environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osterweil, L.

    1982-07-21

    The purpose of this research project was to produce a well integrated set of tools for the support of numerical computation. The project entailed the specification, design and implementation of both a diversity of tools and an innovative tool integration mechanism. This large configuration of tightly integrated tools comprises an environment for numerical software development, and has been named Toolpack/IST (Integrated System of Tools). Following the creation of this environment in prototype form, the environment software was readied for widespread distribution by transitioning it to a development organization for systematization, documentation and distribution. It is expected that public release of Toolpack/IST will begin imminently and will provide a basis for evaluation of the innovative software approaches taken as well as a uniform set of development tools for the numerical software community.

  17. Listeriomics: an Interactive Web Platform for Systems Biology of Listeria

    PubMed Central

    Koutero, Mikael; Tchitchek, Nicolas; Cerutti, Franck; Lechat, Pierre; Maillet, Nicolas; Hoede, Claire; Chiapello, Hélène; Gaspin, Christine

    2017-01-01

    ABSTRACT As for many model organisms, the amount of Listeria omics data produced has recently increased exponentially. There are now >80 published complete Listeria genomes, around 350 different transcriptomic data sets, and 25 proteomic data sets available. The analysis of these data sets through a systems biology approach and the generation of tools for biologists to browse these various data are a challenge for bioinformaticians. We have developed a web-based platform, named Listeriomics, that integrates different tools for omics data analyses, i.e., (i) an interactive genome viewer to display gene expression arrays, tiling arrays, and sequencing data sets along with proteomics and genomics data sets; (ii) an expression and protein atlas that connects every gene, small RNA, antisense RNA, or protein with the most relevant omics data; (iii) a specific tool for exploring protein conservation through the Listeria phylogenomic tree; and (iv) a coexpression network tool for the discovery of potential new regulations. Our platform integrates all the complete Listeria species genomes, transcriptomes, and proteomes published to date. This website allows navigation among all these data sets with enriched metadata in a user-friendly format and can be used as a central database for systems biology analysis. IMPORTANCE In the last decades, Listeria has become a key model organism for the study of host-pathogen interactions, noncoding RNA regulation, and bacterial adaptation to stress. To study these mechanisms, several genomics, transcriptomics, and proteomics data sets have been produced. We have developed Listeriomics, an interactive web platform to browse and correlate these heterogeneous sources of information. Our website will allow listeriologists and microbiologists to decipher key regulation mechanism by using a systems biology approach. PMID:28317029

  18. Simulation-Based Analysis of Reentry Dynamics for the Sharp Atmospheric Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Tillier, Clemens Emmanuel

    1998-01-01

    This thesis describes the analysis of the reentry dynamics of a high-performance lifting atmospheric entry vehicle through numerical simulation tools. The vehicle, named SHARP, is currently being developed by the Thermal Protection Materials and Systems branch of NASA Ames Research Center, Moffett Field, California. The goal of this project is to provide insight into trajectory tradeoffs and vehicle dynamics using simulation tools that are powerful, flexible, user-friendly and inexpensive. Implemented using MATLAB and SIMULINK, these tools are developed with an eye towards further use in the conceptual design of the SHARP vehicle's trajectory and flight control systems. A trajectory simulator is used to quantify the entry capabilities of the vehicle subject to various operational constraints. Using an aerodynamic database computed by NASA and a model of the Earth, the simulator generates the vehicle trajectory in three-dimensional space based on aerodynamic angle inputs. Requirements for entry along the SHARP aerothermal performance constraint are evaluated for different control strategies. The effect of vehicle mass on entry parameters is investigated, and the cross-range capability of the vehicle is evaluated. Trajectory results are presented and interpreted. A six-degree-of-freedom simulator builds on the trajectory simulator and provides attitude simulation for future entry controls development. A Newtonian aerodynamic model including control surfaces and a mass model are developed. A visualization tool for interpreting simulation results is described. Control surfaces are roughly sized. A simple controller is developed to fly the vehicle along its aerothermal performance constraint using aerodynamic flaps for control. This end-to-end demonstration proves the suitability of the 6-DOF simulator for future flight control system development. Finally, issues surrounding real-time simulation with hardware in the loop are discussed.
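
    A minimal planar (3-DOF) entry-dynamics sketch in the spirit of the trajectory simulator described above; the thesis work itself used MATLAB/SIMULINK and NASA's aerodynamic database, whereas the vehicle numbers and exponential atmosphere below are illustrative assumptions:

      # Planar entry dynamics: speed, flight-path angle and altitude integrated
      # over an exponential atmosphere; all vehicle numbers are invented.
      import numpy as np
      from scipy.integrate import solve_ivp

      RE, MU, RHO0, H_SCALE = 6.371e6, 3.986e14, 1.225, 7200.0
      S, CD, CL, MASS = 10.0, 0.5, 1.0, 1000.0          # illustrative vehicle

      def entry_dynamics(t, y):
          v, gamma, h = y                               # speed, flight-path angle, altitude
          r = RE + h
          g = MU / r**2
          rho = RHO0 * np.exp(-h / H_SCALE)
          q = 0.5 * rho * v**2
          drag, lift = q * S * CD, q * S * CL
          dv = -drag / MASS - g * np.sin(gamma)
          dgamma = lift / (MASS * v) + (v / r - g / v) * np.cos(gamma)
          dh = v * np.sin(gamma)
          return [dv, dgamma, dh]

      hit_ground = lambda t, y: y[2]                    # stop when altitude reaches zero
      hit_ground.terminal = True
      sol = solve_ivp(entry_dynamics, (0.0, 3000.0), [7500.0, np.radians(-1.5), 120e3],
                      events=hit_ground, max_step=1.0)
      print(f"final speed {sol.y[0, -1]:.0f} m/s at t = {sol.t[-1]:.0f} s")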

  19. Software architecture and design of the web services facilitating climate model diagnostic analysis

    NASA Astrophysics Data System (ADS)

    Pan, L.; Lee, S.; Zhang, J.; Tang, B.; Zhai, C.; Jiang, J. H.; Wang, W.; Bao, Q.; Qi, M.; Kubar, T. L.; Teixeira, J.

    2015-12-01

    Climate model diagnostic analysis is a computationally- and data-intensive task because it involves multiple numerical model outputs and satellite observation data that can both be high resolution. We have built an online tool that facilitates this process. The tool is called Climate Model Diagnostic Analyzer (CMDA). It employs web service technology and provides a web-based user interface. The benefits of these choices include: (1) no installation of any software other than a browser, hence platform compatibility; (2) co-location of computation and big data on the server side, with only small results and plots downloaded on the client side, hence high data efficiency; (3) a multi-threaded implementation to achieve parallel performance on multi-core servers; and (4) cloud deployment, so each user has a dedicated virtual machine. In this presentation, we will focus on the computer science aspects of this tool, namely the architectural design, the infrastructure of the web services, the implementation of the web-based user interface, the mechanism of provenance collection, the approach to virtualization, and the Amazon Cloud deployment. As an example, we will describe our methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). Another example is the use of Docker, a light-weight virtualization container, to distribute and deploy CMDA onto an Amazon EC2 instance. Our CMDA tool has been successfully used in the 2014 Summer School hosted by the JPL Center for Climate Science. Students gave positive feedback in general, and we will report their comments. An enhanced version of CMDA with several new features, some requested by the 2014 students, will be used in the 2015 Summer School soon.
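
    A minimal illustration of the wrapping pattern described above (an analysis routine exposed as a Flask web service); the endpoint name and analysis function are placeholders, not CMDA's actual interface:

      # Illustrative Flask wrapper around a stand-in analysis routine.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      def run_diagnostic(variable: str, start_year: int, end_year: int) -> dict:
          # Stand-in for the heavy server-side computation over model/satellite data.
          return {"variable": variable, "years": [start_year, end_year], "status": "ok"}

      @app.route("/diagnostic", methods=["GET"])
      def diagnostic():
          result = run_diagnostic(
              request.args.get("variable", "cloud_fraction"),
              int(request.args.get("start", 2000)),
              int(request.args.get("end", 2010)),
          )
          return jsonify(result)   # small JSON result; large data stays server-side

      if __name__ == "__main__":
          app.run(port=8080)       # in production this would sit behind Gunicorn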

  20. Naming, the Formation of Stimulus Classes, and Applied Behavior Analysis.

    ERIC Educational Resources Information Center

    Stromer, Robert; And Others

    1996-01-01

    This review of research discusses how children with autism may acquire equivalence classes after learning to supply a common oral name to each stimulus in a potential class. A proposed methodology for researching referent naming and class formation, analysis of stimulus classes, and generalization is offered. (CR)

  1. Personal Name Identification in the Practice of Digital Repositories

    ERIC Educational Resources Information Center

    Xia, Jingfeng

    2006-01-01

    Purpose: To propose improvements to the identification of authors' names in digital repositories. Design/methodology/approach: Analysis of current name authorities in digital resources, particularly in digital repositories, and analysis of some features of existing repository applications. Findings: This paper finds that the variations of authors'…

  2. 'Remixing Rasmussen': The evolution of Accimaps within systemic accident analysis.

    PubMed

    Waterson, Patrick; Jenkins, Daniel P; Salmon, Paul M; Underwood, Peter

    2017-03-01

    Throughout Jens Rasmussen's career there has been a continued emphasis on the development of methods, techniques and tools for accident analysis and investigation. In this paper we focus on the evolution and development of one specific example, namely Accimaps and their use for accident analysis. We describe the origins of Accimaps followed by a review of 27 studies which have applied and adapted Accimaps over the period 2000-2015 to a range of domains and types of accident. Aside from demonstrating the versatility and popularity of the method, part of the motivation for the review of the use of Accimaps is to address the question of what constitutes a sound, usable, valid and reliable approach to systemic accident analysis. The findings from the review demonstrate continuity with the work carried out by Rasmussen, as well as significant variation (e.g., changes to the Accimap, use of additional theoretical and practice-oriented perspectives on safety). We conclude the paper with some speculations regarding future extension and adaptation of the Accimap approach including the possibility of using hybrid models for accident analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. "Who Can Help Me Fix This Toy?" The Distinction between Causal Knowledge and Word Knowledge Guides Preschoolers' Selective Requests for Information

    ERIC Educational Resources Information Center

    Kushnir, Tamar; Vredenburgh, Christopher; Schneider, Lauren A.

    2013-01-01

    Preschoolers use outcomes of actions to infer causal properties of objects. We asked whether they also use them to infer others' causal abilities and knowledge. In Experiment 1, preschoolers saw 2 informants, 2 tools, and 2 broken toys. One informant (the "labeler") knew the names of the tools, but his actions failed to activate the toys. The…

  4. Comparing the Effect of Blogging as well as Pen-and-Paper on the Essay Writing Performance of Iranian Graduate Students

    ERIC Educational Resources Information Center

    Kashani, Hajar; Mahmud, Rosnaini Binti; Kalajahi, Seyed Ali Rezvani

    2013-01-01

    In today's world, there are many methods for language teaching in general and for teaching writing in particular. The basis of this study was the use of two different tools for writing essays, namely blogging and pen-and-paper, and a comparison of their effectiveness. This study used a quantitative true experimental design aimed…

  5. Performance Assessment Tools for Distance Learning and Simulation: Knowledge, Models and Tools to Improve the Effectiveness of Naval Distance Learning

    DTIC Science & Technology

    2006-06-01


  6. Cyber Security Audit and Attack Detection Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, Dale

    2012-05-31

    The goal of this project was to develop cyber security audit and attack detection tools for industrial control systems (ICS). Digital Bond developed and released a tool named Bandolier that audits ICS components commonly used in the energy sector against an optimal security configuration. The Portaledge Project developed a capability for the PI Historian, the most widely used Historian in the energy sector, to aggregate security events and detect cyber attacks.

  7. Catalog of Resources for Education in Ada (Trade Name) and Software Engineering (CREASE). Version 4.0.

    DTIC Science & Technology

    1986-05-01

    offering the course is a company. Name and Address of offeror: Tachyon Corporation, 2725 Congress Street, Suite 2H, San Diego, CA 92110. Offeror's...Background: Tachyon Corporation specializes in Ada software quality assurance, computer hosted instruction and information retrieval systems, authoring tools...easy to use (on-line help) and can look up or search for terms.

  8. Logistics and the Fight -- Lessons from Napoleon

    DTIC Science & Technology

    2011-04-07

    LCDR Sean W. Toole, SC, USN. USMC Command and Staff College, Marine Corps University, 2076 South Street, Quantico, VA 22134-5068.

  9. Deep Mapping of Teuthivorous Whales and Their Prey Fields

    DTIC Science & Technology

    2016-01-01

    0.05 m), heading (±0.1°), pitch (±0.3°) and roll (±0.3°). Level flight is especially important for these acoustic sensors making measurements 600 m...active acoustic measurements now allow us to use this powerful remote sensing tool to assess squid behavior and distribution in water depths up to

  10. A Generic Fusion Tool on Command Control of C4ISR Simulations

    DTIC Science & Technology

    2006-12-01

    Sarı, Şeref Paşalıoğlu (TUBITAK, Marmara Research Center, Information Technologies Research Institute, Gebze, Kocaeli, Turkey) and Cüneyd Fırat (C2Tech A.Ş., TUBITAK TEKSEB, C 210, Gebze, Kocaeli, Turkey). ABSTRACT: We...

  11. Federal Logistics Information System (FLIS) Procedures Manual. Volume 3. Development and Maintenance of Item Logistics Data Tools.

    DTIC Science & Technology

    1995-01-01

    ...Act and Regulations of the Food Safety and Inspection Service, USDA. The item name shall consist of the basic name DYE followed by an index number or a foreign prototype number (e.g., DYE, INDATHRENE BLUE GCD; DYE, PONTACYLE CARMINE 2B), in order to comply with USDA labeling requirements for meat and poultry food products...Name actions will include a written justification which supports the request technically and procedurally. See CAP, FOOD; see FIIG...

  12. A Tool for Empirical Forecasting of Major Flares, Coronal Mass Ejections, and Solar Particle Events from a Proxy of Active-Region Free Magnetic Energy

    DTIC Science & Technology

    2011-04-07

    Physics Department and Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, Alabama, USA. Space Weather, Vol. 9, S04003, doi:10.1029/2009SW000537, 2011.

  13. Mr. John Danilovich, US Ambassador to Costa Rica, and NASA Administrator Sean O'Keefe at the AirSAR 2004 Mesoamerica hangar naming ceremony

    NASA Image and Video Library

    2004-03-03

    Mr. John Danilovich, US Ambassador to Costa Rica, and NASA Administrator Sean O'Keefe at the AirSAR 2004 Mesoamerica hangar naming ceremony. AirSAR 2004 Mesoamerica is a three-week expedition by an international team of scientists that will use an all-weather imaging tool, called the Airborne Synthetic Aperture Radar (AirSAR), in a mission ranging from the tropical rain forests of Central America to frigid Antarctica.

  14. NASA Administrator Sean O'Keefe making a presentation to Fernando Gutierrez during the AirSAR 2004 hangar naming ceremony

    NASA Image and Video Library

    2004-03-03

    NASA Administrator Sean O'Keefe making a presentation to Fernando Gutierrez, Costa Rican Minister of Science and Technology(MICIT), during the AirSAR 2004 Mesoamerica hangar naming ceremony. AirSAR 2004 Mesoamerica is a three-week expedition by an international team of scientists that will use an all-weather imaging tool, called the Airborne Synthetic Aperture Radar (AirSAR), in a mission ranging from the tropical rain forests of Central America to frigid Antarctica.

  15. Multiscale visual quality assessment for cluster analysis with self-organizing maps

    NASA Astrophysics Data System (ADS)

    Bernard, Jürgen; von Landesberger, Tatiana; Bremm, Sebastian; Schreck, Tobias

    2011-01-01

    Cluster analysis is an important data mining technique for analyzing large amounts of data, reducing many objects to a limited number of clusters. Cluster visualization techniques aim at supporting the user in better understanding the characteristics and relationships among the found clusters. While promising approaches to visual cluster analysis already exist, these usually fall short of incorporating the quality of the obtained clustering results. However, due to the nature of the clustering process, quality plays an important aspect, as for most practical data sets, typically many different clusterings are possible. Being aware of clustering quality is important to judge the expressiveness of a given cluster visualization, or to adjust the clustering process with refined parameters, among others. In this work, we present an encompassing suite of visual tools for quality assessment of an important visual cluster algorithm, namely, the Self-Organizing Map (SOM) technique. We define, measure, and visualize the notion of SOM cluster quality along a hierarchy of cluster abstractions. The quality abstractions range from simple scalar-valued quality scores up to the structural comparison of a given SOM clustering with output of additional supportive clustering methods. The suite of methods allows the user to assess the SOM quality on the appropriate abstraction level, and arrive at improved clustering results. We implement our tools in an integrated system, apply it on experimental data sets, and show its applicability.
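
    A self-contained sketch of the simplest scalar quality score mentioned above, the quantization error, computed for a tiny Self-Organizing Map trained with plain numpy; the grid size, learning schedule and data are illustrative choices, not the authors' setup:

      # Train a small SOM and report its quantization error (mean distance from
      # each sample to its best-matching unit); lower values indicate a better fit.
      import numpy as np

      rng = np.random.default_rng(3)
      data = rng.random((200, 4))                 # 200 samples, 4 features (invented)
      grid = rng.random((5, 5, 4))                # 5x5 map of 4-d codebook vectors
      coords = np.stack(np.meshgrid(np.arange(5), np.arange(5), indexing="ij"), axis=-1)

      for step, x in enumerate(data[rng.permutation(200)]):
          lr = 0.5 * (1 - step / 200)                       # decaying learning rate
          sigma = 2.0 * (1 - step / 200) + 0.5              # decaying neighbourhood width
          bmu = np.unravel_index(np.argmin(((grid - x) ** 2).sum(-1)), (5, 5))
          dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
          h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood function
          grid += lr * h * (x - grid)                       # pull units toward the sample

      q_err = np.mean([np.sqrt(((grid - x) ** 2).sum(-1)).min() for x in data])
      print(f"quantization error: {q_err:.3f}")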

  16. DCGL v2.0: an R package for unveiling differential regulation from differential co-expression.

    PubMed

    Yang, Jing; Yu, Hui; Liu, Bao-Hong; Zhao, Zhongming; Liu, Lei; Ma, Liang-Xiao; Li, Yi-Xue; Li, Yuan-Yuan

    2013-01-01

    Differential co-expression analysis (DCEA) has emerged in recent years as a novel, systematic investigation into gene expression data. While most DCEA studies or tools focus on the co-expression relationships among genes, some are developing a potentially more promising research domain, differential regulation analysis (DRA). In our previously proposed R package DCGL v1.0, we provided functions to facilitate basic differential co-expression analyses; however, the output from DCGL v1.0 could not be translated into differential regulation mechanisms in a straightforward manner. To advance from DCEA to DRA, we upgraded the DCGL package from v1.0 to v2.0. A new module named "Differential Regulation Analysis" (DRA) was designed, which consists of three major functions: DRsort, DRplot, and DRrank. DRsort selects differentially regulated genes (DRGs) and differentially regulated links (DRLs) according to the transcription factor (TF)-to-target information. DRrank prioritizes the TFs in terms of their potential relevance to the phenotype of interest. DRplot graphically visualizes differentially co-expressed links (DCLs) and/or TF-to-target links in a network context. In addition to these new modules, we streamlined the codes from v1.0. The evaluation results proved that our differential regulation analysis is able to capture the regulators relevant to the biological subject. With ample functions to facilitate differential regulation analysis, DCGL v2.0 was upgraded from a DCEA tool to a DRA tool, which may unveil the underlying differential regulation from the observed differential co-expression. DCGL v2.0 can be applied to a wide range of gene expression data in order to systematically identify novel regulators that have not yet been documented as critical. DCGL v2.0 package is available at http://cran.r-project.org/web/packages/DCGL/index.html or at our project home page http://lifecenter.sgst.cn/main/en/dcgl.jsp.

  17. Comparison of discriminant analysis methods: Application to occupational exposure to particulate matter

    NASA Astrophysics Data System (ADS)

    Ramos, M. Rosário; Carolino, E.; Viegas, Carla; Viegas, Sandra

    2016-06-01

    Health effects associated with occupational exposure to particulate matter have been studied by several authors. In this study, six industries from five different areas were selected: Cork company 1, Cork company 2, poultry, slaughterhouse for cattle, riding arena and production of animal feed. The measurement tool was a portable direct-reading device. This tool provides the particle number concentration for six different diameters, namely 0.3 µm, 0.5 µm, 1 µm, 2.5 µm, 5 µm and 10 µm. The focus is on these size fractions because they might be more closely related to adverse health effects. The aim is to identify the particle sizes that best discriminate the industries, with the ultimate goal of classifying industries regarding potential negative effects on workers' health. Several methods of discriminant analysis were applied to the occupational exposure data and compared with respect to classification accuracy. The selected methods were linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), robust linear discriminant analysis with selected estimators (MLE (Maximum Likelihood Estimators), MVE (Minimum Volume Ellipsoid), "t", MCD (Minimum Covariance Determinant), MCD-A, MCD-B), multinomial logistic regression and artificial neural networks (ANN). The predictive accuracy of the methods was assessed through a simulation study. ANN yielded the highest classification accuracy in the data set under study. Results indicate that the particle number concentration at the 0.5 µm diameter is the parameter that best discriminates the industries.
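
    A hedged sketch of this kind of comparison using scikit-learn on synthetic data (six particle-size features, six classes standing in for the industries); the accuracies it prints are illustrative, not the study's results:

      # Compare several discriminant methods by cross-validated accuracy.
      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                 QuadraticDiscriminantAnalysis)
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=600, n_features=6, n_informative=5,
                                 n_redundant=0, n_classes=6, n_clusters_per_class=1,
                                 random_state=0)

      models = {
          "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis(),
          "multinomial logistic": LogisticRegression(max_iter=2000),
          "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
      }
      for name, model in models.items():
          acc = cross_val_score(model, X, y, cv=5).mean()
          print(f"{name:22s} accuracy = {acc:.3f}")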

  18. Predicting cell viability within tissue scaffolds under equiaxial strain: multi-scale finite element model of collagen-cardiomyocytes constructs.

    PubMed

    Elsaadany, Mostafa; Yan, Karen Chang; Yildirim-Ayan, Eda

    2017-06-01

    Successful tissue engineering and regenerative therapy necessitate having extensive knowledge about mechanical milieu in engineered tissues and the resident cells. In this study, we have merged two powerful analysis tools, namely finite element analysis and stochastic analysis, to understand the mechanical strain within the tissue scaffold and residing cells and to predict the cell viability upon applying mechanical strains. A continuum-based multi-length scale finite element model (FEM) was created to simulate the physiologically relevant equiaxial strain exposure on cell-embedded tissue scaffold and to calculate strain transferred to the tissue scaffold (macro-scale) and residing cells (micro-scale) upon various equiaxial strains. The data from FEM were used to predict cell viability under various equiaxial strain magnitudes using stochastic damage criterion analysis. The model validation was conducted through mechanically straining the cardiomyocyte-encapsulated collagen constructs using a custom-built mechanical loading platform (EQUicycler). FEM quantified the strain gradients over the radial and longitudinal direction of the scaffolds and the cells residing in different areas of interest. With the use of the experimental viability data, stochastic damage criterion, and the average cellular strains obtained from multi-length scale models, cellular viability was predicted and successfully validated. This methodology can provide a great tool to characterize the mechanical stimulation of bioreactors used in tissue engineering applications in providing quantification of mechanical strain and predicting cellular viability variations due to applied mechanical strain.

  19. Virtual Machine Language

    NASA Technical Reports Server (NTRS)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming for spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML ). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence- execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.

  20. A Phylogeny-Based Global Nomenclature System and Automated Annotation Tool for H1 Hemagglutinin Genes from Swine Influenza A Viruses

    PubMed Central

    Macken, Catherine A.; Lewis, Nicola S.; Van Reeth, Kristien; Brown, Ian H.; Swenson, Sabrina L.; Simon, Gaëlle; Saito, Takehiko; Berhane, Yohannes; Ciacci-Zanella, Janice; Pereda, Ariel; Davis, C. Todd; Donis, Ruben O.; Webby, Richard J.

    2016-01-01

    ABSTRACT The H1 subtype of influenza A viruses (IAVs) has been circulating in swine since the 1918 human influenza pandemic. Over time, and aided by further introductions from nonswine hosts, swine H1 viruses have diversified into three genetic lineages. Due to limited global data, these H1 lineages were named based on colloquial context, leading to a proliferation of inconsistent regional naming conventions. In this study, we propose rigorous phylogenetic criteria to establish a globally consistent nomenclature of swine H1 virus hemagglutinin (HA) evolution. These criteria applied to a data set of 7,070 H1 HA sequences led to 28 distinct clades as the basis for the nomenclature. We developed and implemented a web-accessible annotation tool that can assign these biologically informative categories to new sequence data. The annotation tool assigned the combined data set of 7,070 H1 sequences to the correct clade more than 99% of the time. Our analyses indicated that 87% of the swine H1 viruses from 2010 to the present had HAs that belonged to 7 contemporary cocirculating clades. Our nomenclature and web-accessible classification tool provide an accurate method for researchers, diagnosticians, and health officials to assign clade designations to HA sequences. The tool can be updated readily to track evolving nomenclature as new clades emerge, ensuring continued relevance. A common global nomenclature facilitates comparisons of IAVs infecting humans and pigs, within and between regions, and can provide insight into the diversity of swine H1 influenza virus and its impact on vaccine strain selection, diagnostic reagents, and test performance, thereby simplifying communication of such data. IMPORTANCE A fundamental goal in the biological sciences is the definition of groups of organisms based on evolutionary history and the naming of those groups. For influenza A viruses (IAVs) in swine, understanding the hemagglutinin (HA) genetic lineage of a circulating strain aids in vaccine antigen selection and allows for inferences about vaccine efficacy. Previous reporting of H1 virus HA in swine relied on colloquial names, frequently with incriminating and stigmatizing geographic toponyms, making comparisons between studies challenging. To overcome this, we developed an adaptable nomenclature using measurable criteria for historical and contemporary evolutionary patterns of H1 global swine IAVs. We also developed a web-accessible tool that classifies viruses according to this nomenclature. This classification system will aid agricultural production and pandemic preparedness through the identification of important changes in swine IAVs and provides terminology enabling discussion of swine IAVs in a common context among animal and human health initiatives. PMID:27981236

  1. Qualitative Examination of Children's Naming Skills through Test Adaptations.

    ERIC Educational Resources Information Center

    Fried-Oken, Melanie

    1987-01-01

    The Double Administration Naming Technique assists clinicians in obtaining qualitative information about a client's visual confrontation naming skills through administration of a standard naming test; readministration of the same test; identification of single and double errors; cuing for double naming errors; and qualitative analysis of naming…

  2. Impact of translation on named-entity recognition in radiology texts

    PubMed Central

    Pedro, Vasco

    2017-01-01

    Abstract Radiology reports describe the results of radiography procedures and have the potential of being a useful source of information which can bring benefits to health care systems around the world. One way to automatically extract information from the reports is by using Text Mining tools. The problem is that these tools are mostly developed for English and reports are usually written in the native language of the radiologist, which is not necessarily English. This creates an obstacle to the sharing of Radiology information between different communities. This work explores the solution of translating the reports to English before applying the Text Mining tools, probing the question of what translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to Radiology and a number of alternative translations (human, automatic and semi-automatic) to English. This is a novel corpus which can be used to move forward the research on this topic. Using MRRAD we studied which kind of automatic or semi-automatic translation approach is more effective on the Named-entity recognition task of finding RadLex terms in the English version of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar to this standard were the terms extracted using other translations. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to the ones obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results we also performed a qualitative analysis of the type of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD PMID:29220455
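
    The comparison against the human-translation gold standard reduces to set-based precision, recall, and F1 over the extracted RadLex terms. A minimal sketch, with hypothetical term sets standing in for the real extractions:

```python
def prf(gold_terms, extracted_terms):
    """Precision, recall and F1 of an extracted term set against a gold standard."""
    gold, extracted = set(gold_terms), set(extracted_terms)
    tp = len(gold & extracted)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical RadLex terms extracted from the human (gold) and automatic translations.
gold = {"pleural effusion", "consolidation", "pneumothorax", "cardiomegaly"}
google = {"pleural effusion", "consolidation", "cardiomegaly", "nodule"}
print("Automatic translation F1: %.3f" % prf(gold, google)[2])
```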

  3. Development and validation of a questionnaire for evaluation of students' attitudes towards family medicine.

    PubMed

    Šter, Marija Petek; Švab, Igor; Klemenc-Ketiš, Zalika; Kersnik, Janko

    2015-03-01

    The development of the EURACT (European Academy of Teachers in General Practice) Educational Agenda has helped many family medicine departments develop clerkships and define the aims and objectives of family medicine teaching. Our aims were to develop and validate a tool for assessing students' attitudes towards family medicine and to evaluate the impact of the clerkship on students' attitudes regarding the competences of the family doctor. In the pilot study, experienced family doctors were asked to describe their attitudes towards family medicine using the Educational Agenda as a template for brainstorming. The statements were paraphrased and developed into a 164-item questionnaire, which was administered to 176 final-year students in the academic year 2007/08. The third phase consisted of the development of a final tool using statistical analysis, which resulted in a 60-item questionnaire covering six domains that was used for the evaluation of students' attitudes. At the beginning of the clerkship, person-centred care and the holistic approach scored lower than the other competences. Students' attitudes regarding the competences at the end of the seven-week clerkship in family medicine were more positive, with the exception of the competence regarding primary care management. The students who named family medicine as their future career choice considered the holistic approach more important than the students who did not. With a decision tree that included students' attitudes to the competences of family medicine, we can successfully predict a future career choice in family medicine for 93.5% of the students. This study reports on the first attempt to develop a valid and reliable tool for measuring attitudes towards family medicine based on the EURACT Educational Agenda. The questionnaire could be used for evaluating changes in students' attitudes within undergraduate curricula and for predicting students' preferences regarding their future professional career in family medicine.

  4. Fast probabilistic file fingerprinting for big data

    PubMed Central

    2013-01-01

    Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
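
    The core idea, hashing a fixed number of randomly sampled blocks rather than reading the whole file, can be sketched as follows. This is a conceptual illustration only, not the pfff algorithm or its collision guarantees; the block count, block size, and seeding scheme are arbitrary choices made for the example.

```python
import hashlib
import os
import random

def sampled_fingerprint(path, n_samples=64, block_size=64, seed=42):
    """Fingerprint a file by hashing a fixed set of randomly sampled blocks.

    The sample offsets are derived from a fixed seed and the file size, so two
    files of the same size are probed at the same positions and their
    fingerprints can be compared.  Conceptual sketch only, not the pfff tool.
    """
    size = os.path.getsize(path)
    rng = random.Random(seed + size)          # same offsets for same-sized files
    offsets = sorted(rng.randrange(size) for _ in range(min(n_samples, size)))
    h = hashlib.sha256(str(size).encode())    # include file size in the digest
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            h.update(f.read(block_size))
    return h.hexdigest()
```

    Deriving the offsets from the file size keeps the cost flat regardless of how large the file is, which mirrors the flat performance characteristic described above.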

  5. Grape RNA-Seq analysis pipeline environment

    PubMed Central

    Knowles, David G.; Röder, Maik; Merkel, Angelika; Guigó, Roderic

    2013-01-01

    Motivation: The avalanche of data arriving since the development of NGS technologies has prompted the need for fast, accurate and easily automated bioinformatic tools capable of dealing with massive datasets. Among the most productive applications of NGS technologies is the sequencing of cellular RNA, known as RNA-Seq. Although RNA-Seq provides a dynamic range similar or superior to that of microarrays at similar or lower cost, the lack of standard and user-friendly pipelines is a bottleneck preventing RNA-Seq from becoming the standard for transcriptome analysis. Results: In this work we present a pipeline for processing and analyzing RNA-Seq data, which we have named Grape (Grape RNA-Seq Analysis Pipeline Environment). Grape supports raw sequencing reads produced by a variety of technologies, either in FASTA or FASTQ format, or as prealigned reads in SAM/BAM format. A minimal Grape configuration consists of the file location of the raw sequencing reads, the genome of the species and the corresponding gene and transcript annotation. Grape first runs a set of quality control steps, and then aligns the reads to the genome, a step that is omitted for prealigned read formats. Grape next estimates gene and transcript expression levels, calculates exon inclusion levels and identifies novel transcripts. Grape can be run on a single computer or in parallel on a computer cluster. It is distributed with specific mapping and quantification tools, but given its modular design, any tool supporting popular data interchange formats can be integrated. Availability: Grape can be obtained from the Bioinformatics and Genomics website at: http://big.crg.cat/services/grape. Contact: david.gonzalez@crg.eu or roderic.guigo@crg.eu PMID:23329413

  6. Open-Source Sequence Clustering Methods Improve the State Of the Art.

    PubMed

    Kopylova, Evguenia; Navas-Molina, Jose A; Mercier, Céline; Xu, Zhenjiang Zech; Mahé, Frédéric; He, Yan; Zhou, Hong-Wei; Rognes, Torbjørn; Caporaso, J Gregory; Knight, Rob

    2016-01-01

    Sequence clustering is a common early step in amplicon-based microbial community analysis, when raw sequencing reads are clustered into operational taxonomic units (OTUs) to reduce the run time of subsequent analysis steps. Here, we evaluated the performance of recently released state-of-the-art open-source clustering software products, namely, OTUCLUST, Swarm, SUMACLUST, and SortMeRNA, against current principal options (UCLUST and USEARCH) in QIIME, hierarchical clustering methods in mothur, and USEARCH's most recent clustering algorithm, UPARSE. All the latest open-source tools showed promising results, reporting up to 60% fewer spurious OTUs than UCLUST, indicating that the underlying clustering algorithm can vastly reduce the number of these derived OTUs. Furthermore, we observed that stringent quality filtering, such as is done in UPARSE, can cause a significant underestimation of species abundance and diversity, leading to incorrect biological results. Swarm, SUMACLUST, and SortMeRNA have been included in the QIIME 1.9.0 release. IMPORTANCE Massive collections of next-generation sequencing data call for fast, accurate, and easily accessible bioinformatics algorithms to perform sequence clustering. A comprehensive benchmark is presented, including open-source tools and the popular USEARCH suite. Simulated, mock, and environmental communities were used to analyze sensitivity, selectivity, species diversity (alpha and beta), and taxonomic composition. The results demonstrate that recent clustering algorithms can significantly improve accuracy and preserve estimated diversity without the application of aggressive filtering. Moreover, these tools are all open source, apply multiple levels of multithreading, and scale to the demands of modern next-generation sequencing data, which is essential for the analysis of massive multidisciplinary studies such as the Earth Microbiome Project (EMP) (J. A. Gilbert, J. K. Jansson, and R. Knight, BMC Biol 12:69, 2014, http://dx.doi.org/10.1186/s12915-014-0069-1).

  7. Curation of inhibitor-target data: process and impact on pathway analysis.

    PubMed

    Devidas, Sreenivas

    2009-01-01

    The past decade has seen a significant growth in the availability and use of pathway analysis tools. The workflow supported by most pathway analysis tools is limited to either of the following: (a) a network of genes based on the input data set, or (b) the resultant network filtered down by a few criteria such as (but not limited to) (i) disease association of the genes in the network; (ii) targets known to be the target of one or more launched drugs; (iii) targets known to be the target of one or more compounds in clinical trials; and (iv) targets reasonably known to be potential candidate or clinical biomarkers. Almost all the tools in use today are biased towards the biological side and contain little, if any, information on the chemical inhibitors associated with the components of a given biological network. Several factors underlie this limitation. The number of inhibitors that have been published or patented is probably several-fold (likely greater than 10-fold) larger than the number of published protein-protein interactions, and curation of such data is both expensive and time consuming, which could impact return on investment significantly. The lack of standardization of protein and gene names makes mapping far from straightforward. The number of patented and published inhibitors across target classes increases by over a million per year, so keeping the databases current becomes a monumental problem. Finally, product architectures require modifications to accommodate chemistry-related content. GVK Bio has, over the past 7 years, curated the compound-target data necessary for the addition of such compound-centric workflows. This chapter focuses on the identification, curation and utility of such data.

  8. Removal of muscle artifact from EEG data: comparison between stochastic (ICA and CCA) and deterministic (EMD and wavelet-based) approaches

    NASA Astrophysics Data System (ADS)

    Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle

    2012-12-01

    Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also very likely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches to blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the better denoising result for less noisy data. These results suggest that the performance of muscle artifact correction methods strongly depends on the level of data contamination and on the source configuration underlying the EEG signals. Finally, some insights into the numerical complexity of these four algorithms are given.
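
    As a rough illustration of the evaluation metric and of one of the stochastic approaches, the sketch below computes the normalized mean-squared error and removes a chosen independent component with scikit-learn's FastICA. The toy signals and the choice of which component to zero out are purely illustrative; the study's actual pipelines and component-selection criteria are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def nmse(original, denoised):
    """Normalized mean-squared error between original and denoised signals."""
    return np.sum((original - denoised) ** 2) / np.sum(original ** 2)

def remove_components_with_ica(eeg, bad_components):
    """Decompose multichannel EEG (channels x samples) with FastICA, zero out the
    components judged to be muscle artifact, and reconstruct the channels."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0)
    sources = ica.fit_transform(eeg.T)        # samples x components
    sources[:, bad_components] = 0.0          # which component is artifactual must
    return ica.inverse_transform(sources).T   # be judged, e.g. by spectral content

# Toy example: 8 channels, 1000 samples, with broadband "muscle" noise added.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 1000)) * np.ones((8, 1))
noisy = clean + 0.5 * rng.normal(size=clean.shape)
denoised = remove_components_with_ica(noisy, bad_components=[0])  # [0] is arbitrary here
print("NMSE:", nmse(clean, denoised))
```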

  9. Design, Optimization and Evaluation of Integrally Stiffened Al 7050 Panel with Curved Stiffeners

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Bird, R. Keith; Kapania, Rakesh K.; Havens, David; Norris, Ashley; Olliffe, Robert

    2011-01-01

    A curvilinear stiffened panel was designed, manufactured, and tested in the Combined Load Test Fixture at NASA Langley Research Center. The panel was optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis tool named EBF3PanelOpt. The panel was designed for a combined compression-shear loading configuration that is a realistic load case for a typical aircraft wing panel. The panel was loaded beyond buckling and strains and out-of-plane displacements were measured. The experimental data were compared with the strains and out-of-plane deflections from a high fidelity nonlinear finite element analysis and linear elastic finite element analysis of the panel/test-fixture assembly. The numerical results indicated that the panel buckled at the linearly elastic buckling eigenvalue predicted for the panel/test-fixture assembly. The experimental strains prior to buckling compared well with both the linear and nonlinear finite element model.

  10. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis

    PubMed Central

    Khalil, Mohamed H.; Shebl, Mostafa K.; Kosba, Mohamed A.; El-Sabrout, Karim; Zaki, Nesma

    2016-01-01

    Aim: This research was conducted to determine the parameters that most affect the hatchability of indigenous and improved local chickens' eggs. Materials and Methods: Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) in four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the one most influencing hatchability. Results: The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the significantly highest commercial hatchability (80.70%). Highly significant differences in hatching chick weight among strains were also observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Conclusion: Prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens. PMID:27651666
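
    The abstract does not give the exact formula used for percent contribution; as one plausible sketch, the example below fits an ordinary least-squares model with statsmodels on standardized predictors and reports relative absolute coefficients as a rough proxy. The variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per batch of eggs.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fertility": rng.uniform(70, 100, 80),
    "early_mortality": rng.uniform(0, 10, 80),
    "late_mortality": rng.uniform(0, 10, 80),
    "shape_index": rng.uniform(70, 80, 80),
    "egg_weight": rng.uniform(40, 60, 80),
    "egg_weight_loss": rng.uniform(8, 14, 80),
})
df["hatchability"] = 0.9 * df["fertility"] - 1.5 * df["early_mortality"] + rng.normal(0, 2, 80)

predictors = df.drop(columns="hatchability")
X = (predictors - predictors.mean()) / predictors.std()   # standardize predictors
model = sm.OLS(df["hatchability"], sm.add_constant(X)).fit()
contrib = model.params.drop("const").abs()
print((100 * contrib / contrib.sum()).round(1))            # rough percent contribution
```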

  11. Methods for extracting social network data from chatroom logs

    NASA Astrophysics Data System (ADS)

    Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard

    2012-06-01

    Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat, where multiple users engage in synchronous, overlapping conversations that are all displayed in a single stream. We approached this problem using three methods that build on previous research: response-time analysis, which builds on the temporal proximity of chat messages; word context usage, which builds on keyword analysis; and direct addressing, which infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performance of the individual methods and their hybrids was assessed relative to a ground truth (determined a priori via manual scoring).
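
    Two of the three methods, direct addressing and response-time proximity, can be sketched in a few lines of Python over a list of (timestamp, sender, text) messages. The weighting scheme and time threshold below are illustrative assumptions, not the authors' parameters.

```python
from collections import Counter

def extract_links(messages, known_users, max_gap=30.0):
    """Infer social-network links from a chat log.

    messages: list of (timestamp_seconds, sender, text).
    Direct addressing: a message starting with another user's nickname links
    sender -> that user.  Response-time analysis: a message posted within
    `max_gap` seconds of the previous one links the two senders (weak evidence).
    Returns a Counter of (sender, recipient) link weights.
    """
    links = Counter()
    prev_time, prev_sender = None, None
    for t, sender, text in messages:
        first_token = text.split()[0].rstrip(":,") if text.split() else ""
        if first_token in known_users and first_token != sender:
            links[(sender, first_token)] += 2          # strong evidence
        if prev_sender and prev_sender != sender and t - prev_time <= max_gap:
            links[(sender, prev_sender)] += 1          # weak temporal evidence
        prev_time, prev_sender = t, sender
    return links

log = [(0, "alice", "bob: did the build pass?"),
       (12, "bob", "yes, all green"),
       (300, "carol", "anyone around?")]
print(extract_links(log, {"alice", "bob", "carol"}))
```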

  12. Design rainfall depth estimation through two regional frequency analysis methods in Hanjiang River Basin, China

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao

    2012-02-01

    Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood in areas with only little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. These two methods are employed to estimate 24-h design rainfall depths in the Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is made by comparing the results to those calculated with the provincial handbook approach, which uses hundreds of rainfall gauge stations. Also for validation purposes, five hypothetically ungauged sites in the middle basin are chosen. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths, while the cokriging approach underestimated them most of the time. Overall, Hosking's approach produced more accurate results than the cokriging approach.

  13. Object-oriented approach to fast display of electrophysiological data under MS-windows.

    PubMed

    Marion-Poll, F

    1995-12-01

    Microcomputers provide neuroscientists with an alternative to a host of laboratory equipment for recording and analyzing electrophysiological data. Object-oriented programming tools provide an essential link between custom needs for data acquisition and analysis and general software packages. In this paper, we outline the layout of basic objects that display and manipulate electrophysiological data files. Visual inspection of the recordings is a basic requirement of any data analysis software. We present an approach that allows flexible and fast display of large data sets. This approach involves constructing an intermediate representation of the data in order to lower the number of points actually displayed while preserving the appearance of the data. The second group of objects is related to the management of lists of data files. Typical experiments designed to test the biological activity of pharmacological products include scores of files. Data manipulation and analysis are facilitated by creating multi-document objects that include the names of all experiment files. Implementation steps for both kinds of objects are described for an MS-Windows hosted application.
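
    One common way to build such an intermediate representation is per-bin min/max decimation, which caps the number of plotted points while keeping peaks visible. A minimal NumPy sketch follows; the bin count and function name are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def minmax_decimate(signal, max_points=2000):
    """Reduce a long 1-D signal to at most ~max_points values for display,
    keeping each bin's minimum and maximum so peaks are not lost."""
    signal = np.asarray(signal)
    n = len(signal)
    if n <= max_points:
        return signal
    n_bins = max_points // 2
    usable = n - (n % n_bins)                    # drop the ragged tail
    bins = signal[:usable].reshape(n_bins, -1)
    out = np.empty(n_bins * 2)
    out[0::2] = bins.min(axis=1)                 # interleave min and max per bin
    out[1::2] = bins.max(axis=1)
    return out

trace = np.random.default_rng(0).normal(size=1_000_000)
print(len(minmax_decimate(trace)))               # ~2000 points to draw
```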

  14. Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Monteiro, Sildomar Takahashi; Minekawa, Yohei; Kosugi, Yukio; Akazawa, Tsuneya; Oda, Kunio

    Hyperspectral image data provides a powerful tool for non-destructive crop analysis. This paper investigates a hyperspectral image data-processing method to predict the sweetness and amino acid content of soybean crops. Regression models based on artificial neural networks were developed to estimate the levels of sucrose, glucose, fructose, and nitrogen concentration, which can be related to the sweetness and amino acid content of vegetables. A performance analysis was conducted comparing regression models obtained using different preprocessing methods, namely raw reflectance, second derivative, and principal component analysis. The method is demonstrated using high-resolution hyperspectral data, with wavelengths ranging from the visible to the near infrared, acquired from an experimental field of green vegetable soybeans. The best predictions were achieved using a nonlinear regression model on the second-derivative-transformed dataset. Glucose could be predicted with the greatest accuracy, followed by sucrose, fructose and nitrogen. The proposed method offers the possibility of producing relatively accurate maps of the chemical content of soybean crop fields.
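
    A minimal sketch of the preprocessing-plus-regression pipeline, using a Savitzky-Golay second derivative from SciPy and a small neural network regressor from scikit-learn; the spectra, band count, and target values are simulated placeholders rather than the paper's data.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical data: 200 pixels x 150 spectral bands, target = sucrose content.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 150)).cumsum(axis=1)      # smooth-ish fake reflectance
sucrose = spectra[:, 60] * 0.1 + rng.normal(0, 0.1, 200)  # fake chemistry

# Second-derivative preprocessing (Savitzky-Golay), applied band-wise per pixel.
d2 = savgol_filter(spectra, window_length=11, polyorder=3, deriv=2, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(d2, sucrose, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out pixels:", round(model.score(X_te, y_te), 3))
```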

  15. Scientific Visualization Using the Flow Analysis Software Toolkit (FAST)

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Kelaita, Paul G.; Mccabe, R. Kevin; Merritt, Fergus J.; Plessel, Todd C.; Sandstrom, Timothy A.; West, John T.

    1993-01-01

    Over the past few years the Flow Analysis Software Toolkit (FAST) has matured into a useful tool for visualizing and analyzing scientific data on high-performance graphics workstations. Originally designed for visualizing the results of fluid dynamics research, FAST has demonstrated its flexibility by being used in several other areas of scientific research. These research areas include earth and space sciences, acid rain and ozone modelling, and automotive design, just to name a few. This paper describes the current status of FAST, including the basic concepts, architecture, existing functionality and features, and some of the known applications for which FAST is being used. A few of the applications, by both NASA and non-NASA agencies, are outlined in more detail. Described in these outlines are the goals of each visualization project, the techniques or 'tricks' used to produce the desired results, and custom modifications to FAST, if any, done to further enhance the analysis. Some of the future directions for FAST are also described.

  16. GLOBAL REFERENCE ATMOSPHERIC MODELS FOR AEROASSIST APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Duvall, Aleta; Justus, C. G.; Keller, Vernon W.

    2005-01-01

    Aeroassist is a broad category of advanced transportation technology encompassing aerocapture, aerobraking, aeroentry, precision landing, hazard detection and avoidance, and aerogravity assist. The eight destinations in the Solar System with sufficient atmosphere to enable aeroassist technology are Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Saturn's moon Titan. Engineering-level atmospheric models for five of these targets - Earth, Mars, Titan, Neptune, and Venus - have been developed at NASA's Marshall Space Flight Center. These models are useful as tools in mission planning and systems analysis studies associated with aeroassist applications. The series of models is collectively named the Global Reference Atmospheric Model or GRAM series. An important capability of all the models in the GRAM series is their ability to simulate quasi-random perturbations for Monte Carlo analysis in developing guidance, navigation and control algorithms, for aerothermal design, and for other applications sensitive to atmospheric variability. Recent example applications are discussed.

  17. Visual analysis of large heterogeneous social networks by semantic and structural abstraction.

    PubMed

    Shen, Zeqian; Ma, Kwan-Liu; Eliassi-Rad, Tina

    2006-01-01

    Social network analysis is an active area of study beyond sociology. It uncovers the invisible relationships between actors in a network and provides understanding of social processes and behaviors. It has become an important technique in a variety of application areas such as the Web, organizational studies, and homeland security. This paper presents a visual analytics tool, OntoVis, for understanding large, heterogeneous social networks, in which nodes and links could represent different concepts and relations, respectively. These concepts and relations are related through an ontology (also known as a schema). OntoVis is named such because it uses information in the ontology associated with a social network to semantically prune a large, heterogeneous network. In addition to semantic abstraction, OntoVis also allows users to do structural abstraction and importance filtering to make large networks manageable and to facilitate analytic reasoning. All these unique capabilities of OntoVis are illustrated with several case studies.

  18. Design, Optimization, and Evaluation of Al-2139 Compression Panel with Integral T-Stiffeners

    NASA Technical Reports Server (NTRS)

    Mulani, Sameer B.; Havens, David; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2012-01-01

    A T-stiffened panel was designed and optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis and design tool named EBF3PanelOpt. The panel was designed for a compression loading configuration, a realistic load case for a typical aircraft skin-stiffened panel. The panel was integrally machined from 2139 aluminum alloy plate and was tested in compression. The panel was loaded beyond buckling and strains and out-of-plane displacements were extracted from 36 strain gages and one linear variable displacement transducer. A digital photogrammetric system was used to obtain full field displacements and strains on the smooth (unstiffened) side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high-fidelity nonlinear finite element analysis.

  19. Circumnutation Tracker: novel software for investigation of circumnutation

    PubMed Central

    2014-01-01

    Background An endogenous, helical plant organ movement named circumnutation is ubiquitous in the plant kingdom. Plant shoots, stems, tendrils, leaves, and roots commonly circumnutate, but their movements are still poorly described. To support such investigations, novel software, Circumnutation Tracker (CT), for spatial-temporal analysis of circumnutation has been developed. Results CT works on time-lapse video and collects circumnutation parameters: period, length, rate, shape, angle, and clockwise and counterclockwise directions. CT combines a filtering algorithm with a graph-based method to describe the parameters of circumnutation. The circumnutation parameters of Helianthus annuus hypocotyls and the relationship between cotyledon arrangement and circumnutation geometry are presented here to demonstrate the CT options. Conclusions We have established that CT facilitates and accelerates the analysis of circumnutation. In combination with physiological, molecular, and genetic methods, this software may also be a powerful tool for investigations of gravitropism, the biological clock, and membrane transport, i.e. processes involved in the mechanism of circumnutation.

  20. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF), and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in a generalized setting to make it applicable to general trajectory optimization problems. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed into a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.

  1. Rdesign: A data dictionary with relational database design capabilities in Ada

    NASA Technical Reports Server (NTRS)

    Lekkos, Anthony A.; Kwok, Teresa Ting-Yin

    1986-01-01

    A data dictionary is defined to be the set of all data attributes, which describe data objects in terms of their intrinsic attributes, such as name, type, size, format and definition. It is recognized as the database for Information Resource Management, facilitating understanding and communication about the relationship between systems applications and systems data usage, and helping to achieve data independence by permitting systems applications to access data without knowledge of the location or storage characteristics of the data in the system. A research and development effort using Ada has produced a data dictionary with database design capabilities. This project supports data specification and analysis and offers a choice of the relational, network, and hierarchical models for logical database design. It provides a highly integrated set of analysis and design transformation tools, which range from templates for data element definition and spreadsheets for defining functional dependencies and normalization, to a logical design generator.

  2. Detection of IUPAC and IUPAC-like chemical names.

    PubMed

    Klinger, Roman; Kolárik, Corinna; Fluck, Juliane; Hofmann-Apitius, Martin; Friedrich, Christoph M

    2008-07-01

    Chemical compounds like small signal molecules or other biologically active chemical substances are an important entity class in life science publications and patents. Several representations and nomenclatures for chemicals, like SMILES, InChI, IUPAC or trivial names, exist. Only SMILES and InChI names allow a direct structure search, but in biomedical texts trivial names and IUPAC-like names are used more frequently. While trivial names can be found with a dictionary-based approach and thereby mapped to their corresponding structures, it is not possible to enumerate all IUPAC names. In this work, we present a new machine learning approach based on conditional random fields (CRF) to find mentions of IUPAC and IUPAC-like names in scientific text, as well as its evaluation and the conversion rate with available name-to-structure tools. We present an IUPAC name recognizer with an F(1) measure of 85.6% on a MEDLINE corpus. The evaluation of different CRF orders and offset conjunction orders demonstrates the importance of these parameters. An evaluation of hand-selected patent sections containing large enumerations and terms with mixed nomenclature shows good performance on these cases (F(1) measure 81.5%). The remaining recognition problems concern detecting the correct borders of the typically long terms, especially when they occur in parentheses or enumerations. We demonstrate the scalability of our implementation by providing results from a full MEDLINE run. We plan to publish the corpora, the annotation guideline, as well as the conditional random field model as a UIMA component.
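
    CRF-based recognizers of this kind rely on per-token features capturing the morphology of IUPAC-like strings. The sketch below shows an illustrative feature extractor; the feature set is an assumption for demonstration, not the paper's, and training would be done separately with a CRF toolkit.

```python
import re

def token_features(tokens, i):
    """Illustrative per-token features for CRF-based detection of IUPAC-like names."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "prefix3": tok[:3].lower(),
        "suffix3": tok[-3:].lower(),
        "has_digit": any(c.isdigit() for c in tok),
        "has_hyphen": "-" in tok,
        "has_brackets": bool(re.search(r"[()\[\]]", tok)),
        "chem_suffix": tok.lower().endswith(("ane", "ene", "yne", "ol", "one",
                                             "oic", "amine", "oate")),
        "length": len(tok),
        "prev_lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next_lower": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

sentence = "The mixture contained 2-(4-chlorophenyl)-3-methylbutanoic acid and water .".split()
X = [token_features(sentence, i) for i in range(len(sentence))]
# X (one feature dict per token), paired with BIO labels, would be passed to a CRF trainer.
print(X[3]["chem_suffix"], X[3]["has_hyphen"])
```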

  3. tmBioC: improving interoperability of text-mining tools with BioC.

    PubMed

    Khare, Ritu; Wei, Chih-Hsuan; Mao, Yuqing; Leaman, Robert; Lu, Zhiyong

    2014-01-01

    The lack of interoperability among biomedical text-mining tools is a major bottleneck in creating more complex applications. Despite the availability of numerous methods and techniques for various text-mining tasks, combining different tools requires substantial efforts and time owing to heterogeneity and variety in data formats. In response, BioC is a recent proposal that offers a minimalistic approach to tool interoperability by stipulating minimal changes to existing tools and applications. BioC is a family of XML formats that define how to present text documents and annotations, and also provides easy-to-use functions to read/write documents in the BioC format. In this study, we introduce our text-mining toolkit, which is designed to perform several challenging and significant tasks in the biomedical domain, and repackage the toolkit into BioC to enhance its interoperability. Our toolkit consists of six state-of-the-art tools for named-entity recognition, normalization and annotation (PubTator) of genes (GenNorm), diseases (DNorm), mutations (tmVar), species (SR4GN) and chemicals (tmChem). Although developed within the same group, each tool is designed to process input articles and output annotations in a different format. We modify these tools and enable them to read/write data in the proposed BioC format. We find that, using the BioC family of formats and functions, only minimal changes were required to build the newer versions of the tools. The resulting BioC wrapped toolkit, which we have named tmBioC, consists of our tools in BioC, an annotated full-text corpus in BioC, and a format detection and conversion tool. Furthermore, through participation in the 2013 BioCreative IV Interoperability Track, we empirically demonstrate that the tools in tmBioC can be more efficiently integrated with each other as well as with external tools: Our experimental results show that using BioC reduces >60% in lines of code for text-mining tool integration. The tmBioC toolkit is publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  4. An Ontology for Requesting Distant Robotic Action: A Case Study in Naming and Action Identification for Planning on the Mars Exploration Rover Mission

    NASA Technical Reports Server (NTRS)

    Wales, Roxana C.; Shalin, Valerie L.; Bass, Deborah S.

    2004-01-01

    This paper focuses on the development and use of the abbreviated names as well as an emergent ontology associated with making requests for action of a distant robotic rover during the 2003-2004 NASA Mars Exploration Rover (MER) mission, run by the Jet Propulsion Laboratory. The infancy of the domain of Martian telerobotic science, in which specialists request work from a rover moving through the landscape, as well as the need to consider the interdisciplinary teams involved in the work required an empirical approach. The formulation of this ontology is grounded in human behavior and work practice. The purpose of this paper is to identify general issues for an ontology of action (specifically for requests for action), while maintaining sensitivity to the users, tools and the work system within a specific technical domain. We found that this ontology of action must take into account a dynamic environment, changing in response to the movement of the rover, changes on the rover itself, as well as be responsive to the purposeful intent of the science requestors. Analysis of MER mission events demonstrates that the work practice and even robotic tool usage changes over time. Therefore, an ontology must adapt and represent both incremental change and revolutionary change, and the ontology can never be more than a partial agreement on the conceptualizations involved. Although examined in a rather unique technical domain, the general issues pertain to the control of any complex, distributed work system as well as the archival record of its accomplishments.

  5. Development and evaluation of an open source software tool for deidentification of pathology reports

    PubMed Central

    Beckwith, Bruce A; Mahaadevan, Rajeshwarri; Balis, Ulysses J; Kuo, Frank

    2006-01-01

    Background Electronic medical records, including pathology reports, are often used for research purposes. Currently, there are few programs freely available to remove identifiers while leaving the remainder of the pathology report text intact. Our goal was to produce an open source, Health Insurance Portability and Accountability Act (HIPAA) compliant, deidentification tool tailored for pathology reports. We designed a three-step process for removing potential identifiers. The first step is to look for identifiers known to be associated with the patient, such as name, medical record number, pathology accession number, etc. Next, a series of pattern matches look for predictable patterns likely to represent identifying data; such as dates, accession numbers and addresses as well as patient, institution and physician names. Finally, individual words are compared with a database of proper names and geographic locations. Pathology reports from three institutions were used to design and test the algorithms. The software was improved iteratively on training sets until it exhibited good performance. 1800 new pathology reports were then processed. Each report was reviewed manually before and after deidentification to catalog all identifiers and note those that were not removed. Results 1254 (69.7 %) of 1800 pathology reports contained identifiers in the body of the report. 3439 (98.3%) of 3499 unique identifiers in the test set were removed. Only 19 HIPAA-specified identifiers (mainly consult accession numbers and misspelled names) were missed. Of 41 non-HIPAA identifiers missed, the majority were partial institutional addresses and ages. Outside consultation case reports typically contain numerous identifiers and were the most challenging to deidentify comprehensively. There was variation in performance among reports from the three institutions, highlighting the need for site-specific customization, which is easily accomplished with our tool. Conclusion We have demonstrated that it is possible to create an open-source deidentification program which performs well on free-text pathology reports. PMID:16515714
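
    The pattern-matching step can be illustrated with a few regular expressions, combined with a pass over identifiers already known for the patient. The patterns and placeholders below are hypothetical examples and fall far short of a HIPAA-complete scrubber.

```python
import re

PATTERNS = [
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),         # e.g. 03/15/2004
    (re.compile(r"\b[A-Z]{1,3}-?\d{2}-\d{3,6}\b"), "[ACCESSION]"),  # e.g. S04-12345
    (re.compile(r"\bMRN[:# ]*\d{5,10}\b", re.I), "[MRN]"),          # e.g. MRN: 0012345
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def deidentify(report_text, known_identifiers):
    """Remove known patient identifiers, then apply generic pattern matches.
    Illustrative sketch only, not a complete deidentification tool."""
    text = report_text
    for ident in sorted(known_identifiers, key=len, reverse=True):
        text = re.sub(re.escape(ident), "[REDACTED]", text, flags=re.I)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

report = "Patient Jane Doe, MRN: 0012345, specimen S04-12345 received 03/15/2004."
print(deidentify(report, known_identifiers=["Jane Doe"]))
```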

  6. Development and Implementation of a Generic Analysis Template for Structural-Thermal-Optical-Performance Modeling

    NASA Technical Reports Server (NTRS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-01-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation, and a standard procedure for performing these studies is essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.

  7. Development and implementation of a generic analysis template for structural-thermal-optical-performance modeling

    NASA Astrophysics Data System (ADS)

    Scola, Salvatore; Stavely, Rebecca; Jackson, Trevor; Boyer, Charlie; Osmundsen, Jim; Turczynski, Craig; Stimson, Chad

    2016-09-01

    Performance-related effects of system level temperature changes can be a key consideration in the design of many types of optical instruments. This is especially true for space-based imagers, which may require complex thermal control systems to maintain alignment of the optical components. Structural-Thermal-Optical-Performance (STOP) analysis is a multi-disciplinary process that can be used to assess the performance of these optical systems when subjected to the expected design environment. This type of analysis can be very time consuming, which makes it difficult to use as a trade study tool early in the project life cycle. In many cases, only one or two iterations can be performed over the course of a project. This limits the design space to best practices since it may be too difficult, or take too long, to test new concepts analytically. In order to overcome this challenge, automation, and a standard procedure for performing these studies is essential. A methodology was developed within the framework of the Comet software tool that captures the basic inputs, outputs, and processes used in most STOP analyses. This resulted in a generic, reusable analysis template that can be used for design trades for a variety of optical systems. The template captures much of the upfront setup such as meshing, boundary conditions, data transfer, naming conventions, and post-processing, and therefore saves time for each subsequent project. A description of the methodology and the analysis template is presented, and results are described for a simple telescope optical system.

  8. Exploring the CIGALA/CALIBRA network data base for supporting space weather service over Brazil

    NASA Astrophysics Data System (ADS)

    Galera Monico, Joao Francisco; Shimabukuro, Milton; Vani, Bruno; Stuani, Vinicius

    Most of the Brazilian territory is bounded by the equatorial anomaly to the north and south. Therefore, investigations related to space weather are quite important and very demanding there. For example, GNSS applications are widely affected by ionospheric disturbances, a significant field within space weather. A network for continuous monitoring of the ionosphere has been deployed over the Brazilian territory, starting in February 2011. This network was named CIGALA/CALIBRA after the two projects that originated it. Through CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America), which was funded by the European Commission (EC) in the framework of FP7-GALILEO-2009-GSA (European GNSS Agency), the first stations were deployed at Presidente Prudente, São Paulo state, in February 2011. The CIGALA project was concluded in February 2012 with eight stations distributed over the Brazilian territory. Through CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), also funded by the European Commission, now in the framework of FP7-GALILEO-2011-GSA, new stations were deployed. All monitoring stations were placed at locations chosen according to geomagnetic considerations to support the development of ionospheric models. The CALIBRA project started in November 2012, with a duration of two years, focusing on the development of new algorithms that can be applied to high-accuracy GNSS techniques (RTK, PPP) in order to tackle the effects of ionospheric disturbances. All the stations have PolarRxS-PRO receivers, manufactured by Septentrio®. This multi-GNSS receiver can collect data at rates up to 100 Hz, providing ionospheric indices like TEC, scintillation parameters like S4 and Sigma-Phi, and other signal metrics like lock time for all satellites and frequencies tracked. All collected data are sent to a central facility located at the Faculdade de Ciências e Tecnologia da Universidade Estadual Paulista (FCT/UNESP) in Presidente Prudente. To deal with the large amount of data, an analysis infrastructure has also been established and is under constant development. It is the web software named ISMR Query Tool, which provides querying and visualization of the scintillation parameters, with capabilities for identifying specific behaviors of ionospheric activity through data visualization and data mining. Its web availability and user-specified features allow users to interact with the data through a simple internet connection, broadening insights about the ionosphere according to their own previous knowledge. Information about the network, the projects and the tool can be found at the FCT/UNESP Ionosphere web portal available at http://is-cigala-calibra.fct.unesp.br/. In this contribution we will provide an overview of results extracted from the monitoring and analysis infrastructure, explaining the possibilities provided by the ISMR Query Tool for supporting analysis of the ionosphere and the development of models or mitigation techniques for GNSS. At this moment, at least until the end of the CALIBRA project, this service is freely available to users who request access from FCT/UNESP. We would also like to discuss means of financing and keeping the service available at a minimum cost after the end of the project.

  9. Foreign Language Translation of Chemical Nomenclature by Computer

    PubMed Central

    2009-01-01

    Chemical compound names remain the primary method for conveying molecular structures between chemists and researchers. In research articles, patents, chemical catalogues, government legislation, and textbooks, the use of IUPAC and traditional compound names is universal, despite efforts to introduce more machine-friendly representations such as identifiers and line notations. Fortunately, advances in computing power now allow chemical names to be parsed and generated (read and written) with almost the same ease as conventional connection tables. A significant complication, however, is that although the vast majority of chemistry uses English nomenclature, a significant fraction is in other languages. This complicates the task of filing and analyzing chemical patents, purchasing from compound vendors, and text mining research articles or Web pages. We describe some issues with manipulating chemical names in various languages, including British, American, German, Japanese, Chinese, Spanish, Swedish, Polish, and Hungarian, and describe the current state-of-the-art in software tools to simplify the process. PMID:19239237

  10. Transcranial stimulation over the left inferior frontal gyrus increases false alarms in an associative memory task in older adults

    DOE PAGES

    Leach, Ryan C.; McCurdy, Matthew P.; Trumbo, Michael C.; ...

    2016-07-15

    Here, transcranial direct current stimulation (tDCS) is a potential tool for alleviating various forms of cognitive decline, including memory loss, in older adults. However, past effects of tDCS on cognitive ability have been mixed. One important potential moderator of tDCS effects is the baseline level of cognitive performance. We tested the effects of tDCS on face-name associative memory in older adults, who suffer from performance deficits in this task relative to younger adults. Stimulation was applied to the left inferior prefrontal cortex during encoding of face-name pairs, and memory was assessed with both a recognition and recall task. As a result, face-name memory performance was decreased with the use of tDCS. This result was driven by increased false alarms when recognizing rearranged face-name pairs.

  11. Transcranial stimulation over the left inferior frontal gyrus increases false alarms in an associative memory task in older adults

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leach, Ryan C.; McCurdy, Matthew P.; Trumbo, Michael C.

    Here, transcranial direct current stimulation (tDCS) is a potential tool for alleviating various forms of cognitive decline, including memory loss, in older adults. However, past effects of tDCS on cognitive ability have been mixed. One important potential moderator of tDCS effects is the baseline level of cognitive performance. We tested the effects of tDCS on face-name associative memory in older adults, who suffer from performance deficits in this task relative to younger adults. Stimulation was applied to the left inferior prefrontal cortex during encoding of face-name pairs, and memory was assessed with both a recognition and recall task. As a result, face-name memory performance was decreased with the use of tDCS. This result was driven by increased false alarms when recognizing rearranged face-name pairs.

  12. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications

    PubMed Central

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-01-01

    Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools. PMID:18021453

  13. PAnalyzer: a software tool for protein inference in shotgun proteomics.

    PubMed

    Prieto, Gorka; Aloria, Kerman; Osinalde, Nerea; Fullaondo, Asier; Arizmendi, Jesus M; Matthiesen, Rune

    2012-11-05

    Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently data independent acquisition (DIA) approaches have emerged as an alternative to the traditional data dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used to name one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However the software available at the moment does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore the inspection, comparison and report of the obtained results require tedious manual intervention. Here we report a software tool to address these limitations for MSE data. In this paper we present PAnalyzer, a software tool focused on the protein inference process of shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are considered as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server. We present a software tool to deal with the ambiguities that arise in the protein inference process. Key contributions are support for MSE data analysis by ProteinLynx Global Server and technical replicates integration. PAnalyzer is an easy to use multiplatform and free software tool.
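    The grouping logic described above can be illustrated with a minimal sketch: proteins with identical peptide evidence are merged into one indistinguishable group, and groups whose peptides are a proper subset of another group's evidence are flagged as non-conclusive. The data structure and category labels below are simplified assumptions for illustration, not PAnalyzer's actual code or output format.

```python
# Minimal sketch of evidence-based protein grouping (simplified, not PAnalyzer's code).
from collections import defaultdict

def group_proteins(protein_to_peptides):
    """protein_to_peptides: dict mapping protein id -> set of identified peptides."""
    # 1. Proteins with identical peptide sets are indistinguishable: merge them.
    by_evidence = defaultdict(list)
    for prot, peps in protein_to_peptides.items():
        by_evidence[frozenset(peps)].append(prot)

    groups = []
    for peps, prots in by_evidence.items():
        # 2. A group is 'non-conclusive' if its peptides are a proper subset of another group's.
        subset_of_other = any(peps < other for other in by_evidence if other != peps)
        category = "non-conclusive" if subset_of_other else "conclusive"
        groups.append({"proteins": sorted(prots), "peptides": sorted(peps),
                       "category": category})
    return groups

example = {"P1": {"a", "b"}, "P2": {"a", "b"}, "P3": {"a"}, "P4": {"c", "d"}}
for g in group_proteins(example):
    print(g)
```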

  14. PAnalyzer: A software tool for protein inference in shotgun proteomics

    PubMed Central

    2012-01-01

    Background Protein inference from peptide identifications in shotgun proteomics must deal with ambiguities that arise due to the presence of peptides shared between different proteins, which is common in higher eukaryotes. Recently data independent acquisition (DIA) approaches have emerged as an alternative to the traditional data dependent acquisition (DDA) in shotgun proteomics experiments. MSE is the term used to name one of the DIA approaches used in QTOF instruments. MSE data require specialized software to process acquired spectra and to perform peptide and protein identifications. However the software available at the moment does not group the identified proteins in a transparent way by taking into account peptide evidence categories. Furthermore the inspection, comparison and report of the obtained results require tedious manual intervention. Here we report a software tool to address these limitations for MSE data. Results In this paper we present PAnalyzer, a software tool focused on the protein inference process of shotgun proteomics. Our approach considers all the identified proteins and groups them when necessary indicating their confidence using different evidence categories. PAnalyzer can read protein identification files in the XML output format of the ProteinLynx Global Server (PLGS) software provided by Waters Corporation for their MSE data, and also in the mzIdentML format recently standardized by HUPO-PSI. Multiple files can also be read simultaneously and are considered as technical replicates. Results are saved to CSV, HTML and mzIdentML (in the case of a single mzIdentML input file) files. An MSE analysis of a real sample is presented to compare the results of PAnalyzer and ProteinLynx Global Server. Conclusions We present a software tool to deal with the ambiguities that arise in the protein inference process. Key contributions are support for MSE data analysis by ProteinLynx Global Server and technical replicates integration. PAnalyzer is an easy to use multiplatform and free software tool. PMID:23126499

  15. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications.

    PubMed

    Bare, J Christopher; Shannon, Paul T; Schmid, Amy K; Baliga, Nitin S

    2007-11-19

    Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the Firefox browser. Performing data integration in the browser allows the excellent search and navigation capabilities of the browser to be used in combination with powerful desktop tools.

  16. Integrated analyses in plastics forming

    NASA Astrophysics Data System (ADS)

    Bo, Wang

    This thesis describes the progress made in the analysis, simulation and testing of plastics forming, progress that can be applied to injection and compression mould design. Three activities of plastics forming have been investigated, namely filling analysis, cooling analysis and ejecting analysis. The filling section of plastics forming has been analysed and calculated using MOLDFLOW and FILLCALC V software, and a comparison of high-speed compression moulding and injection moulding has been made. The cooling section of plastics forming has been analysed using MOLDFLOW software and a finite difference computer program. The latter program can be used as a sample program to calculate the feasibility of cooling different materials to required target temperatures under controlled cooling conditions. The application of thermal imaging has also been introduced to determine the actual process temperatures; thermal imaging can be used as a powerful tool to analyse mould surface temperatures and to verify the mathematical model. A buckling problem for the ejecting section has been modelled, calculated with PATRAN/ABAQUS finite element analysis software, and tested. These calculations and analyses are applied to a special case but can be used as an example for general analysis and calculation in the ejection section of plastics forming.
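    As a sketch of the kind of finite difference cooling calculation mentioned above, the code below integrates one-dimensional transient heat conduction through a slab with an explicit scheme and reports the time needed to cool the hottest point to a target temperature. The material properties, temperatures and slab thickness are placeholder values, not those used in the thesis.

```python
# Minimal sketch: explicit 1D finite-difference cooling of a polymer slab.
# Material properties and temperatures are placeholder values for illustration.
import numpy as np

def cool_slab(thickness=2e-3, nodes=21, alpha=1e-7, t_init=220.0,
              t_mould=40.0, t_target=80.0):
    dx = thickness / (nodes - 1)
    dt = 0.4 * dx * dx / alpha               # keep Fourier number below the 0.5 stability limit
    T = np.full(nodes, t_init)
    T[0] = T[-1] = t_mould                   # mould walls held at coolant temperature
    time = 0.0
    while T.max() > t_target:
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        T[0] = T[-1] = t_mould
        time += dt
    return time

print(f"estimated cooling time: {cool_slab():.1f} s")
```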

  17. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  18. Register to Download the Automotive Deployment Options Projection Tool |

    Science.gov Websites


  19. Flexible modulation of network connectivity related to cognition in Alzheimer's disease.

    PubMed

    McLaren, Donald G; Sperling, Reisa A; Atri, Alireza

    2014-10-15

    Functional neuroimaging tools, such as fMRI methods, may elucidate the neural correlates of clinical, behavioral, and cognitive performance. Most functional imaging studies focus on regional task-related activity or resting state connectivity rather than how changes in functional connectivity across conditions and tasks are related to cognitive and behavioral performance. To investigate the promise of characterizing context-dependent connectivity-behavior relationships, this study applies the method of generalized psychophysiological interactions (gPPI) to assess the patterns of associative-memory-related fMRI hippocampal functional connectivity in Alzheimer's disease (AD) associated with performance on memory and other cognitively demanding neuropsychological tests and clinical measures. Twenty-four subjects with mild AD dementia (ages 54-82, nine females) participated in a face-name paired-associate encoding memory study. Generalized PPI analysis was used to estimate the connectivity between the hippocampus and the whole brain during encoding. The difference in hippocampal-whole brain connectivity between encoding novel and encoding repeated face-name pairs was used in multiple-regression analyses as an independent predictor for 10 behavioral, neuropsychological and clinical tests. The analysis revealed connectivity-behavior relationships that were distributed, dynamically overlapping, and task-specific within and across intrinsic networks; hippocampal-whole brain connectivity-behavior relationships were not isolated to single networks, but spanned multiple brain networks. Importantly, these spatially distributed performance patterns were unique for each measure. In general, out-of-network behavioral associations with encoding novel greater than repeated face-name pairs hippocampal-connectivity were observed in the default-mode network, while correlations with encoding repeated greater than novel face-name pairs hippocampal-connectivity were observed in the executive control network (p<0.05, cluster corrected). Psychophysiological interactions revealed significantly more extensive and robust associations between paired-associate encoding task-dependent hippocampal-whole brain connectivity and performance on memory and behavioral/clinical measures than previously revealed by standard activity-behavior analysis. Compared to resting state and task-activation methods, gPPI analyses may be more sensitive to reveal additional complementary information regarding subtle within- and between-network relations. The patterns of robust correlations between hippocampal-whole brain connectivity and behavioral measures identified here suggest that there are 'coordinated states' in the brain; that the dynamic range of these states is related to behavior and cognition; and that these states can be observed and quantified, even in individuals with mild AD. Copyright © 2014 Elsevier Inc. All rights reserved.
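    A stripped-down sketch of the connectivity-behavior regression described above is given below: the per-subject difference in hippocampal connectivity between novel and repeated encoding is used as a predictor of a behavioral score in an ordinary least-squares model. The data are simulated and the single-region simplification is an assumption for illustration; the study's analysis was run voxel-wise with cluster correction.

```python
# Sketch: regress a behavioral score on the novel-minus-repeated connectivity change.
# Simulated data and a single-region simplification, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 24
conn_novel = rng.normal(0.3, 0.1, n_subjects)       # hippocampal connectivity, novel pairs
conn_repeat = rng.normal(0.2, 0.1, n_subjects)      # hippocampal connectivity, repeated pairs
behavior = 2.0 * (conn_novel - conn_repeat) + rng.normal(0, 0.1, n_subjects)

delta = conn_novel - conn_repeat
X = np.column_stack([np.ones(n_subjects), delta])   # intercept + connectivity difference
beta, *_ = np.linalg.lstsq(X, behavior, rcond=None)
print(f"slope relating connectivity change to behavior: {beta[1]:.2f}")
```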

  20. Good God?!? Lamentations as a model for mourning the loss of the good God.

    PubMed

    Houck-Loomis, Tiffany

    2012-09-01

    This article will address the devastating psychological and social effects due to the loss of one's primary love-object, namely God in the case of faith communities and religious individuals. By using Melanie Klein's Object Relations Theory (Klein in Envy and gratitude and other works 1946/1963. The Free Press, New York, 1975a) as a way to enter the text of Lamentations, I will articulate an alternative reading that can serve as a model for Pastors and Educators to use when walking with individuals and communities through unspeakable losses. I will argue that Lamentations may be used as a tool for naming confounding depression and anxiety that stems from a damaged introjected object (one's personal God). This tool may provide individuals and communities a framework for placing anger and contempt upon God in order to re-assimilate this loved yet hated object, eventually leading toward healing and restoration of the self.

  1. A user-friendly tool for medical-related patent retrieval.

    PubMed

    Pasche, Emilie; Gobeill, Julien; Teodoro, Douglas; Gaudinat, Arnaud; Vishnyakova, Dina; Lovis, Christian; Ruch, Patrick

    2012-01-01

    Health-related information retrieval is complicated by the variety of nomenclatures available to name entities, since different communities of users will use different ways to name the same entity. We present in this report the development and evaluation of a user-friendly interactive Web application aiming at facilitating health-related patent search. Our tool, called TWINC, relies on a search engine tuned during several patent retrieval competitions, enhanced with intelligent interaction modules such as chemical query, normalization and expansion. While the related-article search functionality showed promising performance, the ad hoc search yielded fairly contrasted results. Nonetheless, TWINC performed well during the PatOlympics competition and was appreciated by intellectual property experts, although this result should be balanced against the limited evaluation sample. We also expect that it can be customized for corporate search environments to process domain- and company-specific vocabularies, including non-English literature and patent reports.

  2. The Global War on Terrorism: Analytical Support, Tools and Metrics of Assessment. MORS Workshop

    DTIC Science & Technology

    2005-08-11

    is the matter of intelligence, as COL(P) Keller pointed out, we need to spend less time in the intelligence cycle on managing information and...models, decision aids: "named things " * Methodologies: potentially useful things "* Resources: databases, people, books? * Meta-data on tools * Develop a...experience. Only one member (Mr. Garry Greco) had served on the Joint Intelligence Task Force for Counter Terrorism. Although Gary heavily participated

  3. An Integrated Suite of Text and Data Mining Tools - Phase II

    DTIC Science & Technology

    2005-08-30

    Riverside, CA, USA Mazda Motor Corp, Jpn Univ of Darmstadt, Darmstadt, Ger Navy Center for Applied Research in Artificial Intelligence Univ of...with Georgia Tech Research Corporation developed a desktop text-mining software tool named TechOASIS (known commercially as VantagePoint). By the...of this dataset and groups the Corporate Source items that co-occur with the found items. He decides he is only interested in the institutions

  4. Near Real Time Analytics of Human Sensor Networks in the Realm of Big Data

    NASA Astrophysics Data System (ADS)

    Aulov, O.; Halem, M.

    2012-12-01

    With the prolific development of social media, emergency responders have an increasing interest in harvesting social media from outlets such as Flickr, Twitter, and Facebook in order to assess the scale and specifics of extreme events including wildfires, earthquakes, terrorist attacks, oil spills, etc. A number of experimental platforms have successfully been implemented to demonstrate the use of social media data in extreme events, including the Twitter Earthquake Detector, which relied on tweets for earthquake monitoring; AirTwitter, which used tweets for air quality reporting; and our previous work using Flickr data as boundary value forcings to improve the forecast of oil beaching in the aftermath of the Deepwater Horizon oil spill. The majority of these platforms addressed a narrow, specific type of emergency and harvested data from a particular outlet. We demonstrate an interactive framework for monitoring, mining and analyzing a plethora of heterogeneous social media sources for a diverse range of extreme events. Our framework consists of three major parts: a real-time social media aggregator, a data processing and analysis engine, and a web-based visualization and reporting tool. The aggregator gathers tweets, Facebook comments from fan pages, Google+ posts, forum discussions, blog posts (such as LiveJournal and Blogger.com), images from photo-sharing platforms (such as Flickr and Picasa), videos from video-sharing platforms (YouTube, Vimeo), and so forth. The data processing and analysis engine pre-processes the aggregated information and annotates it with geolocation and sentiment information. In many cases, the metadata of a social media post does not contain geolocation information; however, a human reader can easily guess from the body of the text what location is discussed. We are automating this task by use of Named Entity Recognition (NER) algorithms and a gazetteer service. The visualization and reporting tool provides a web-based, user-friendly interface with time-series analysis and plotting tools, geo-spatial visualization tools with interactive maps, and cause-effect inference tools. We demonstrate how we address the big data challenges of monitoring, aggregating and analyzing vast amounts of social media data in near real time. As a result, our framework not only allows emergency responders to augment their situational awareness with social media information, but also allows them to extract geophysical data and incorporate it into their analysis models.
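    The geolocation step described above, NER followed by a gazetteer lookup, can be sketched as follows. The example assumes the spaCy library and its small English model are installed, and the two-entry gazetteer dictionary is purely illustrative rather than the gazetteer service used in the framework.

```python
# Sketch: extract place names from a post with NER, then resolve them via a toy gazetteer.
# Assumes spaCy and the en_core_web_sm model are installed; the gazetteer is illustrative.
import spacy

GAZETTEER = {"new orleans": (29.95, -90.07), "baton rouge": (30.45, -91.19)}

nlp = spacy.load("en_core_web_sm")

def geolocate(post_text):
    doc = nlp(post_text)
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
    return [(p, GAZETTEER[p.lower()]) for p in places if p.lower() in GAZETTEER]

print(geolocate("Oil washing up on the beach near New Orleans this morning."))
```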

  5. Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods.

    PubMed

    Martínez, María Jimena; Ponzoni, Ignacio; Díaz, Mónica F; Vazquez, Gustavo E; Soto, Axel J

    2015-01-01

    The design of QSAR/QSPR models is a challenging problem, where the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, whereas chemical knowledge is left out of the analysis. For this reason, the interpretability and generality of the QSAR/QSPR models obtained by these feature selection methods are drastically affected. Therefore, an approach for integrating domain experts' knowledge into the selection process is needed to increase confidence in the final set of descriptors. In this paper we propose a software tool named Visual and Interactive DEscriptor ANalysis (VIDEAN), which combines statistical methods with interactive visualizations for choosing a set of descriptors for predicting a target property. Domain expertise can be added to the feature selection process by means of an interactive visual exploration of the data, aided by statistical tools and metrics based on information theory. Coordinated visual representations are presented for capturing different relationships and interactions among descriptors, target properties and candidate subsets of descriptors. The capabilities of the proposed software were assessed through different scenarios. These scenarios reveal how an expert can use this tool to choose one subset of descriptors from a group of candidate subsets, or to modify existing descriptor subsets and even incorporate new descriptors according to his or her own knowledge of the target property. The reported experiences show the suitability of our software for selecting sets of descriptors with low cardinality, high interpretability, low redundancy and high statistical performance in a visual exploratory way. It is therefore possible to conclude that the resulting tool allows the integration of a chemist's expertise into the descriptor selection process with low cognitive effort, in contrast with the alternative of an ad hoc manual analysis of the selected descriptors. Graphical abstract: VIDEAN allows the visual analysis of candidate subsets of descriptors for QSAR/QSPR. In the two panels on the top, users can interactively explore numerical correlations as well as co-occurrences in the candidate subsets through two interactive graphs.
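    One of the statistical aids such a tool can offer, a pairwise correlation check used to spot redundant descriptors in a candidate subset, is sketched below; the synthetic descriptor matrix and the 0.9 redundancy threshold are illustrative assumptions, not VIDEAN's internals.

```python
# Sketch: flag highly correlated (redundant) descriptor pairs in a candidate subset.
# Synthetic data and a 0.9 threshold, for illustration; VIDEAN also uses information-theoretic metrics.
import numpy as np

def redundant_pairs(X, names, threshold=0.9):
    corr = np.corrcoef(X, rowvar=False)             # descriptors are columns
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                pairs.append((names[i], names[j], round(corr[i, j], 2)))
    return pairs

rng = np.random.default_rng(1)
logp = rng.normal(size=50)
mw = 2.0 * logp + rng.normal(scale=0.1, size=50)    # deliberately correlated with logP
tpsa = rng.normal(size=50)
X = np.column_stack([logp, mw, tpsa])
print(redundant_pairs(X, ["logP", "MW", "TPSA"]))
```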

  6. Comparison of Artificial Immune System and Particle Swarm Optimization Techniques for Error Optimization of Machine Vision Based Tool Movements

    NASA Astrophysics Data System (ADS)

    Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod

    2015-10-01

    In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine-vision-based system and image-processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured with a charge-coupled device camera at a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the distance traversed by the tool as calculated from these images. Optimization of the errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc., was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
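    To make the optimization step concrete, a minimal particle swarm optimization run on a toy one-dimensional calibration-error function is sketched below; the error function and all PSO parameters are invented for illustration and are not the settings reported in the paper.

```python
# Minimal particle swarm optimization sketch on a toy 1-D calibration-error function.
# The error function and PSO parameters are illustrative, not the paper's settings.
import numpy as np

def error(offset):                        # toy error between observed and true distance
    return (offset - 0.37) ** 2 + 0.01

rng = np.random.default_rng(42)
n_particles, iters = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive and social coefficients

pos = rng.uniform(-2, 2, n_particles)
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = error(pbest)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = error(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"best offset found: {gbest:.3f}, error: {error(gbest):.4f}")
```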

  7. Wild plant folk nomenclature of the Mongol herdsmen in the Arhorchin National Nature Reserve, Inner Mongolia, PR China.

    PubMed

    Soyolt; Galsannorbu; Yongping; Wunenbayar; Liu, Guohou; Khasbagan

    2013-04-24

    Folk names of plants are the root of traditional plant biodiversity knowledge. In pace with social change and economic development, Mongolian knowledge concerning plant diversity is gradually vanishing, so the collection and analysis of Mongolian folk names of plants is extremely important. From 2008 to 2012, the authors visited the Arhorchin National Nature Reserve area five times. Fieldwork was done in 13 villages, with 56 local Mongol herdsmen interviewed. This report documents plant folk names, analyzes the relationship between folk names and scientific names, examines the structure and special characteristics of folk names, and also provides plant use information and a comparative analysis. Ethnobotanical interviewing methods of free-listing and open-ended questionnaires were used. Ethnobotanical interviews and voucher specimen collection were carried out in two ways: local plant specimens were collected beforehand and then used in interviews, or local Mongol herdsmen were invited to the field and interviewed while voucher specimens were collected. Mongolian oral language was used as the working language, and findings were originally recorded in written Mongolian. Scientific names of plants were determined through the collection and identification of voucher specimens by the methods of plant taxonomy. A total of 146 folk names of local plants are recorded. The folk names correspond to 111 species, 1 subspecies, 7 varieties and 1 form, belonging to 42 families and 88 genera. The correspondence between plant folk names and scientific names may be classified as one-to-one, two- or three-to-one, and one-to-many. The structure of folk names was classified into primary names, secondary names and borrowed names. There were 12 folk names containing animal names, corresponding to 15 species. There are nine folk names containing usage information, corresponding to 10 species, of which five species and one variety are still used by the local people. A comparative analysis of the Mongol herdsmen in the Arhorchin National Nature Reserve and the Mongolians in the Ejina desert area shows some similarities as well as many differences, whether in language or in structure. The correspondence rate between plant folk names and scientific names was 82.19%, which can be considered a high level of consistency between scientific knowledge and traditional knowledge in botanical nomenclature. Primary names carry the most cultural significance among the plant folk names. A special characteristic of the plant folk names is their focus on the physical characteristics of animals, which is closely related to traditional animal husbandry and the local environment. Plant folk names are not only a code to distinguish between different plant species, but also a kind of culture rich in deep knowledge concerning nature. The comparative analysis also shows that Mongolian plant nomenclature varies between different regions and different tribes.

  8. miRNA Temporal Analyzer (mirnaTA): a bioinformatics tool for identifying differentially expressed microRNAs in temporal studies using normal quantile transformation.

    PubMed

    Cer, Regina Z; Herrera-Galeano, J Enrique; Anderson, Joseph J; Bishop-Lilly, Kimberly A; Mokashi, Vishwesh P

    2014-01-01

    Understanding the biological roles of microRNAs (miRNAs) is an active area of research that has produced a surge of publications in PubMed, particularly in cancer research. Along with this increasing interest, many open-source bioinformatics tools to identify existing and/or discover novel miRNAs in next-generation sequencing (NGS) reads have become available. While miRNA identification and discovery tools have significantly improved, the development of miRNA differential expression analysis tools, especially for temporal studies, remains substantially challenging. Further, the installation of currently available software is non-trivial, and the steps of testing with example datasets, trying one's own dataset, and interpreting the results require notable expertise and time. Consequently, there is a strong need for a tool that allows scientists to normalize raw data, perform statistical analyses, and obtain intuitive results without having to invest significant effort. We have developed miRNA Temporal Analyzer (mirnaTA), a bioinformatics package to identify differentially expressed miRNAs in temporal studies. mirnaTA is written in Perl and R (version 2.13.0 or later) and can be run across multiple platforms, such as Linux, Mac and Windows. In the current version, mirnaTA requires users to provide a simple, tab-delimited matrix file containing miRNA names and count data from a minimum of two to a maximum of 20 time points and three replicates. To recalibrate data and remove technical variability, raw data are normalized using Normal Quantile Transformation (NQT), and a linear regression model is used to locate any miRNAs which are differentially expressed in a linear pattern. Subsequently, remaining miRNAs which do not fit a linear model are further analyzed with two different non-linear methods: 1) cumulative distribution function (CDF) or 2) analysis of variance (ANOVA). After both linear and non-linear analyses are completed, statistically significant miRNAs (P < 0.05) are plotted as heat maps using hierarchical cluster analysis and Euclidean distance matrix computation. mirnaTA is an open-source bioinformatics tool to aid scientists in identifying differentially expressed miRNAs which could be further mined for biological significance. It is expected to provide researchers with a means of interpreting raw data into statistical summaries in a fast and intuitive manner.
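    The normal quantile transformation step used to recalibrate raw counts can be sketched in a few lines. The rank-to-quantile mapping below is a generic NQT written in Python for consistency with the other examples here, whereas mirnaTA itself is implemented in Perl and R.

```python
# Sketch of a generic normal quantile transformation (NQT) of raw miRNA counts.
# Generic illustration in Python; mirnaTA itself is written in Perl and R.
import numpy as np
from scipy.stats import norm, rankdata

def normal_quantile_transform(counts):
    counts = np.asarray(counts, dtype=float)
    ranks = rankdata(counts)                      # average ranks handle ties
    quantiles = ranks / (len(counts) + 1)         # map ranks into the open interval (0, 1)
    return norm.ppf(quantiles)                    # project onto a standard normal

raw = [5, 120, 7, 300, 45, 45, 0]
print(np.round(normal_quantile_transform(raw), 3))
```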

  9. Toward functional genomics in bacteria: Analysis of gene expression in Escherichia coli from a bacterial artificial chromosome library of Bacillus cereus

    PubMed Central

    Rondon, Michelle R.; Raffel, Sandra J.; Goodman, Robert M.; Handelsman, Jo

    1999-01-01

    As the study of microbes moves into the era of functional genomics, there is an increasing need for molecular tools for analysis of a wide diversity of microorganisms. Currently, biological study of many prokaryotes of agricultural, medical, and fundamental scientific interest is limited by the lack of adequate genetic tools. We report the application of the bacterial artificial chromosome (BAC) vector to prokaryotic biology as a powerful approach to address this need. We constructed a BAC library in Escherichia coli from genomic DNA of the Gram-positive bacterium Bacillus cereus. This library provides 5.75-fold coverage of the B. cereus genome, with an average insert size of 98 kb. To determine the extent of heterologous expression of B. cereus genes in the library, we screened it for expression of several B. cereus activities in the E. coli host. Clones expressing 6 of 10 activities tested were identified in the library, namely, ampicillin resistance, zwittermicin A resistance, esculin hydrolysis, hemolysis, orange pigment production, and lecithinase activity. We analyzed selected BAC clones genetically to identify rapidly specific B. cereus loci. These results suggest that BAC libraries will provide a powerful approach for studying gene expression from diverse prokaryotes. PMID:10339608

  10. Metabolomic tools for secondary metabolite discovery from marine microbial symbionts.

    PubMed

    Macintyre, Lynsey; Zhang, Tong; Viegelmann, Christina; Martinez, Ignacio Juarez; Cheng, Cheng; Dowdells, Catherine; Abdelmohsen, Usama Ramadam; Gernert, Christine; Hentschel, Ute; Edrada-Ebel, RuAngelie

    2014-06-05

    Marine invertebrate-associated symbiotic bacteria produce a plethora of novel secondary metabolites which may be structurally unique with interesting pharmacological properties. Selection of strains usually relies on literature searching, genetic screening and bioactivity results, often without considering the chemical novelty and abundance of secondary metabolites being produced by the microorganism until the time-consuming bioassay-guided isolation stages. To fast track the selection process, metabolomic tools were used to aid strain selection by investigating differences in the chemical profiles of 77 bacterial extracts isolated from cold water marine invertebrates from Orkney, Scotland using liquid chromatography-high resolution mass spectrometry (LC-HRMS) and nuclear magnetic resonance (NMR) spectroscopy. Following mass spectrometric analysis and dereplication using an Excel macro developed in-house, principal component analysis (PCA) was employed to differentiate the bacterial strains based on their chemical profiles. NMR 1H and correlation spectroscopy (COSY) were also employed to obtain a chemical fingerprint of each bacterial strain and to confirm the presence of functional groups and spin systems. These results were then combined with taxonomic identification and bioassay screening data to identify three bacterial strains, namely Bacillus sp. 4117, Rhodococcus sp. ZS402 and Vibrio splendidus strain LGP32, to prioritize for scale-up based on their chemically interesting secondary metabolomes, established through dereplication and interesting bioactivities, determined from bioassay screening.

  11. Conveyor Performance based on Motor DC 12 Volt Eg-530ad-2f using K-Means Clustering

    NASA Astrophysics Data System (ADS)

    Arifin, Zaenal; Artini, Sri DP; Much Ibnu Subroto, Imam

    2017-04-01

    Producing goods in industry requires controlled tools that improve production. Separation has become part of the production process and is carried out according to certain criteria to obtain an optimum result. Knowing the performance characteristics of the controlled tools used in the separation process makes it possible to obtain that optimum result. Clustering analysis is a popular method for dividing data into smaller segments: it partitions a group of objects into k groups whose members are homogeneous or similar, with similarity defined by chosen criteria. The work in this paper uses the K-Means method to cluster the loading performance of a conveyor driven by a 12-volt DC motor EG-530AD-2F. The technique yields a complete clustering of data for a prototype conveyor driven by the DC motor to separate goods by height. The parameters involved are voltage, current, and travel time. These parameters yield two clusters, namely an optimal cluster with center 10.50 volts, 0.3 amperes, 10.58 seconds, and a non-optimal cluster with center 10.88 volts, 0.28 amperes, 40.43 seconds.
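    A minimal reproduction of the clustering step described above might look like the sketch below, using scikit-learn's KMeans on (voltage, current, travel time) triples; the measurement values are invented stand-ins rather than the paper's data, and in practice the features would normally be standardized first because travel time dominates the scale.

```python
# Sketch: cluster conveyor loading measurements (voltage, current, travel time) with K-Means.
# The measurements are invented stand-ins for illustration, not the paper's data.
import numpy as np
from sklearn.cluster import KMeans

measurements = np.array([
    [10.52, 0.30, 10.4],
    [10.49, 0.31, 10.7],
    [10.51, 0.29, 10.6],
    [10.90, 0.28, 40.1],
    [10.86, 0.27, 40.8],
    [10.88, 0.29, 40.4],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(measurements)
print("cluster centers (V, A, s):")
print(np.round(km.cluster_centers_, 2))
print("labels:", km.labels_)
```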

  12. Evaluation of digital real-time PCR assay as a molecular diagnostic tool for single-cell analysis.

    PubMed

    Chang, Chia-Hao; Mau-Hsu, Daxen; Chen, Ke-Cheng; Wei, Cheng-Wey; Chiu, Chiung-Ying; Young, Tai-Horng

    2018-02-21

    In a single-cell study, isolating and identifying single cells are essential, but these processes often require a large investment of time or money. The aim of this study was to isolate and analyse single cells using a novel platform, the PanelChip™ Analysis System, which includes a 2500-microwell chip and a digital real-time polymerase chain reaction (dqPCR) assay, in comparison with a standard qPCR assay. Through serial dilution of a standard of known concentration, namely pUC19, the accuracy and sensitivity of the two methodologies were compared. The two systems were tested on the basis of expression levels of the genetic markers vimentin, E-cadherin, N-cadherin and GAPDH in A549 lung carcinoma cells at two known concentrations. Furthermore, the influence of heparin, a known PCR inhibitor commonly found in blood samples, was evaluated in both methodologies. Finally, mathematical models were proposed and the single-cell separation method was verified; moreover, gene expression levels during the epithelial-mesenchymal transition of single cells under TGFβ1 treatment were measured. The conclusion is that dqPCR performed using PanelChip™ is superior to standard qPCR in terms of sensitivity, precision, and heparin tolerance. The dqPCR assay is a potential tool for clinical diagnosis and single-cell applications.

  13. A novel tracking tool for the analysis of plant-root tip movements.

    PubMed

    Russino, A; Ascrizzi, A; Popova, L; Tonazzini, A; Mancuso, S; Mazzolai, B

    2013-06-01

    The growth process of roots consists of many activities, such as exploring the soil volume, mining minerals, avoiding obstacles and taking up water to fulfil the plant's primary functions, that are performed differently depending on environmental conditions. Root movements are strictly related to a root decision strategy, which helps plants to survive under stressful conditions by optimizing energy consumption. In this work, we present a novel image-analysis tool to study the kinematics of the root tip (apex), named analyser for root tip tracks (ARTT). The software implementation combines a segmentation algorithm with additional imaging filters to realize 2D tip detection. The resulting paths, or tracks, arise from the tip positions sampled across the images acquired during growth. ARTT works without markers, deals autonomously with newly emerging root tips, and handles a massive amount of data with minimal user interaction. Consequently, ARTT can be used for a wide range of applications and for the study of kinematics in different plant species. In particular, the study of root growth and behaviour could lead to the definition of novel principles for penetration and/or control paradigms for soil exploration and monitoring tasks. The software capabilities were demonstrated by experimental trials performed with Zea mays and Oryza sativa.

  14. Weighing Evidence “Steampunk” Style via the Meta-Analyser

    PubMed Central

    Bowden, Jack; Jackson, Chris

    2016-01-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can help to facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web-application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data. First, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression. PMID:28003684
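    The center-of-mass analogy can be made concrete with a short sketch: in a fixed-effect meta-analysis the pooled estimate is the inverse-variance weighted mean of the study estimates, which is exactly the balance point of masses placed along a ruler. The study estimates and standard errors below are invented for illustration.

```python
# Sketch: fixed-effect pooled estimate as a centre of mass of inverse-variance weights.
# The study estimates and standard errors are invented for illustration.
import numpy as np

estimates = np.array([0.42, 0.55, 0.31, 0.60])   # per-study effect estimates
std_errs = np.array([0.10, 0.20, 0.15, 0.25])    # per-study standard errors

weights = 1.0 / std_errs**2                       # heavier "masses" for more precise studies
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```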

  15. Guasom Analysis Of The Alhambra Survey

    NASA Astrophysics Data System (ADS)

    Garabato, Daniel; Manteiga, Minia; Dafonte, Carlos; Álvarez, Marco A.

    2017-10-01

    GUASOM is a data mining tool designed for knowledge discovery in large astronomical spectrophotometric archives, developed in the framework of Gaia DPAC (Data Processing and Analysis Consortium). Our tool is based on a type of unsupervised-learning artificial neural network named self-organizing maps (SOMs). SOMs permit the grouping and visualization of large amounts of data for which there is no a priori knowledge, and hence they are very useful for analyzing the huge amount of information present in modern spectrophotometric surveys. SOMs are used to organize the information into clusters of objects, as homogeneous as possible according to their spectral energy distributions, and to project them onto a 2D grid where the data structure can be visualized. Each cluster has a representative, called a prototype, which is a virtual pattern that best represents or resembles the set of input patterns belonging to that cluster. Prototypes make it easier to determine the physical nature and properties of the objects populating each cluster. Our algorithm has been tested on the ALHAMBRA survey spectrophotometric observations; here we present our results concerning survey segmentation, visualization of the data structure, separation between types of objects (stars and galaxies), data homogeneity of neurons, cluster prototypes, redshift distribution, and crossmatching with other databases (Simbad).
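    A compact illustration of the SOM training loop behind such a tool is given below; it is a generic from-scratch SOM on random two-feature data, not the GUASOM/DPAC implementation, and the grid size and learning schedule are arbitrary choices.

```python
# Generic from-scratch self-organizing map on toy 2-feature data (not the GUASOM code).
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((500, 2))                      # stand-in for spectrophotometric features

grid_h, grid_w, n_iter = 8, 8, 2000
weights = rng.random((grid_h, grid_w, 2))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for t in range(n_iter):
    lr = 0.5 * (1 - t / n_iter)                  # decaying learning rate
    radius = max(1.0, 4.0 * (1 - t / n_iter))    # decaying neighbourhood radius
    x = data[rng.integers(len(data))]
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    influence = np.exp(-(grid_dist**2) / (2 * radius**2))
    weights += lr * influence[..., None] * (x - weights)

# Each weight vector is now a cluster prototype; neighbouring units hold similar prototypes.
print(np.round(weights[0, 0], 3))
```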

  16. bioalcidae, samjs and vcffilterjs: object-oriented formatters and filters for bioinformatics files.

    PubMed

    Lindenbaum, Pierre; Redon, Richard

    2018-04-01

    Reformatting and filtering bioinformatics files are common tasks for bioinformaticians. Standard Linux tools and specific programs are usually used to perform such tasks, but there is still a gap between using these tools and the programming interface of some existing libraries. In this study, we developed a set of tools, namely bioalcidae, samjs and vcffilterjs, that reformat or filter files using a JavaScript engine or a pure Java expression, taking advantage of the Java API for high-throughput sequencing data (htsjdk). https://github.com/lindenb/jvarkit. pierre.lindenbaum@univ-nantes.fr.

  17. The Multiple Control of Verbal Behavior

    PubMed Central

    Michael, Jack; Palmer, David C; Sundberg, Mark L

    2011-01-01

    Amid the novel terms and original analyses in Skinner's Verbal Behavior, the importance of his discussion of multiple control is easily missed, but multiple control of verbal responses is the rule rather than the exception. In this paper we summarize and illustrate Skinner's analysis of multiple control and introduce the terms convergent multiple control and divergent multiple control. We point out some implications for applied work and discuss examples of the role of multiple control in humor, poetry, problem solving, and recall. Joint control and conditional discrimination are discussed as special cases of multiple control. We suggest that multiple control is a useful analytic tool for interpreting virtually all complex behavior, and we consider the concepts of derived relations and naming as cases in point. PMID:22532752

  18. Mining Adverse Drug Reactions in Social Media with Named Entity Recognition and Semantic Methods.

    PubMed

    Chen, Xiaoyi; Deldossi, Myrtille; Aboukhamis, Rim; Faviez, Carole; Dahamna, Badisse; Karapetiantz, Pierre; Guenegou-Arnoux, Armelle; Girardeau, Yannick; Guillemin-Lanne, Sylvie; Lillo-Le-Louët, Agnès; Texier, Nathalie; Burgun, Anita; Katsahian, Sandrine

    2017-01-01

    Suspected adverse drug reactions (ADR) reported by patients through social media can be a complementary source to current pharmacovigilance systems. However, the performance of text mining tools applied to social media text data to discover ADRs needs to be evaluated. In this paper, we introduce the approach developed to mine ADR from French social media. A protocol of evaluation is highlighted, which includes a detailed sample size determination and evaluation corpus constitution. Our text mining approach provided very encouraging preliminary results with F-measures of 0.94 and 0.81 for recognition of drugs and symptoms respectively, and with F-measure of 0.70 for ADR detection. Therefore, this approach is promising for downstream pharmacovigilance analysis.

  19. GBOOST: a GPU-based tool for detecting gene-gene interactions in genome-wide case control studies.

    PubMed

    Yung, Ling Sing; Yang, Can; Wan, Xiang; Yu, Weichuan

    2011-05-01

    Collecting millions of genetic variations is feasible with the advanced genotyping technology. With a huge amount of genetic variations data in hand, developing efficient algorithms to carry out the gene-gene interaction analysis in a timely manner has become one of the key problems in genome-wide association studies (GWAS). Boolean operation-based screening and testing (BOOST), a recent work in GWAS, completes gene-gene interaction analysis in 2.5 days on a desktop computer. Compared with central processing units (CPUs), graphic processing units (GPUs) are highly parallel hardware and provide massive computing resources. We are, therefore, motivated to use GPUs to further speed up the analysis of gene-gene interactions. We implement the BOOST method based on a GPU framework and name it GBOOST. GBOOST achieves a 40-fold speedup compared with BOOST. It completes the analysis of Wellcome Trust Case Control Consortium Type 2 Diabetes (WTCCC T2D) genome data within 1.34 h on a desktop computer equipped with Nvidia GeForce GTX 285 display card. GBOOST code is available at http://bioinformatics.ust.hk/BOOST.html#GBOOST.
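    The core counting step that BOOST and GBOOST accelerate, building the genotype contingency table for a SNP pair across cases and controls, can be sketched as follows; the genotype data are random placeholders, and the Boolean bit-vector encoding and GPU kernel that give the real tools their speed are omitted.

```python
# Sketch: 3 x 3 x 2 genotype contingency table for one SNP pair (cases vs controls).
# Random placeholder genotypes; BOOST/GBOOST use Boolean encodings and GPU kernels.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
snp_a = rng.integers(0, 3, n)          # genotypes coded 0/1/2
snp_b = rng.integers(0, 3, n)
is_case = rng.integers(0, 2, n)        # 1 = case, 0 = control

table = np.zeros((3, 3, 2), dtype=int)
np.add.at(table, (snp_a, snp_b, is_case), 1)   # accumulate counts per genotype combination

print("case slice of the contingency table:")
print(table[:, :, 1])
```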

  20. Unobtrusive integration of data management with fMRI analysis.

    PubMed

    Poliakov, Andrew V; Hertzenberg, Xenia; Moore, Eider B; Corina, David P; Ojemann, George A; Brinkley, James F

    2007-01-01

    This note describes a software utility, called X-batch, which addresses two pressing issues typically faced by functional magnetic resonance imaging (fMRI) neuroimaging laboratories: (1) analysis automation and (2) data management. The first issue is addressed by providing a simple batch-mode processing tool for the popular SPM software package (http://www.fil.ion.ucl.ac.uk/spm/; Wellcome Department of Imaging Neuroscience, London, UK). The second is addressed by transparently recording metadata describing all aspects of the batch job (e.g., subject demographics, analysis parameters, locations and names of created files, date and time of analysis, and so on). These metadata are recorded as instances of an extended version of the Protégé-based Experiment Lab Book ontology created by the Dartmouth fMRI Data Center. The resulting instantiated ontology provides a detailed record of all fMRI analyses performed, and as such can be part of larger systems for neuroimaging data management, sharing, and visualization. The X-batch system is in use in our own fMRI research, and is available for download at http://X-batch.sourceforge.net/.
