Sample records for computational workbench environment

  1. Transportable Applications Environment Plus, Version 5.1

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, What You See Is What You Get (WYSIWYG) software tool for design and layout of user interface; and WPT (Window Programming Tools) Package, set of callable subroutines controlling user interface of application program. WorkBench and WPTs written in C++, and remaining code written in C.

  2. Designers workbench: toward real-time immersive modeling

    NASA Astrophysics Data System (ADS)

    Kuester, Falko; Duchaineau, Mark A.; Hamann, Bernd; Joy, Kenneth I.; Ma, Kwan-Liu

    2000-05-01

    This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing, and computer-aided engineering systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology gap, or 'digital gap', experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  3. xQTL workbench: a scalable web environment for multi-level QTL analysis.

    PubMed

    Arends, Danny; van der Velde, K Joeri; Prins, Pjotr; Broman, Karl W; Möller, Steffen; Jansen, Ritsert C; Swertz, Morris A

    2012-04-01

    xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily onto multi-core computers, clusters and the cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl.
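
    As a rough illustration of the kind of computation such a platform parallelizes over markers and traits, the sketch below implements a textbook single-marker regression QTL scan in Python with NumPy. It is not xQTL workbench's own code; the simulated data are invented, and the LOD formula, LOD = (n/2) log10(RSS0/RSS1), follows standard marker-regression practice.

      import numpy as np

      def marker_regression_scan(genotypes, phenotype):
          """Single-marker QTL scan: regress the trait on each marker's
          genotype and report a LOD score per marker.
          genotypes: (n, m) array of allele counts (0/1/2), one column per marker
          phenotype: (n,) trait values (expression for eQTL, metabolite for mQTL, ...)
          """
          n, m = genotypes.shape
          rss0 = np.sum((phenotype - phenotype.mean()) ** 2)  # null model: mean only
          lod = np.empty(m)
          for j in range(m):
              X = np.column_stack([np.ones(n), genotypes[:, j]])
              beta, res, rank, _ = np.linalg.lstsq(X, phenotype, rcond=None)
              rss1 = res[0] if res.size else np.sum((phenotype - X @ beta) ** 2)
              lod[j] = (n / 2.0) * np.log10(rss0 / rss1)
          return lod

      rng = np.random.default_rng(0)
      geno = rng.integers(0, 3, size=(100, 50))            # 100 individuals, 50 markers
      pheno = 0.5 * geno[:, 7] + rng.normal(size=100)      # marker 7 carries the QTL
      print(marker_regression_scan(geno, pheno).argmax())  # expected: 7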

  4. xQTL workbench: a scalable web environment for multi-level QTL analysis

    PubMed Central

    Arends, Danny; van der Velde, K. Joeri; Prins, Pjotr; Broman, Karl W.; Möller, Steffen; Jansen, Ritsert C.; Swertz, Morris A.

    2012-01-01

    Summary: xQTL workbench is a scalable web platform for the mapping of quantitative trait loci (QTLs) at multiple levels: for example gene expression (eQTL), protein abundance (pQTL), metabolite abundance (mQTL) and phenotype (phQTL) data. Popular QTL mapping methods for model organism and human populations are accessible via the web user interface. Large calculations scale easily onto multi-core computers, clusters and the cloud. All data involved can be uploaded and queried online: markers, genotypes, microarrays, NGS, LC-MS, GC-MS, NMR, etc. When new data types become available, xQTL workbench is quickly customized using the Molgenis software generator. Availability: xQTL workbench runs on all common platforms, including Linux, Mac OS X and Windows. An online demo system, installation guide, tutorials, software and source code are available under the LGPL3 license from http://www.xqtl.org. Contact: m.a.swertz@rug.nl PMID:22308096

  5. Designers Workbench: Towards Real-Time Immersive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuester, F; Duchaineau, M A; Hamann, B

    2001-10-03

    This paper introduces the Designers Workbench, a semi-immersive virtual environment for two-handed modeling, sculpting and analysis tasks. The paper outlines the fundamental tools, design metaphors and hardware components required for an intuitive real-time modeling system. As companies focus on streamlining productivity to cope with global competition, the migration to computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems has established a new backbone of modern industrial product development. However, traditionally a product design frequently originates from a clay model that, after digitization, forms the basis for the numerical description of CAD primitives. The Designers Workbench aims at closing this technology gap, or 'digital gap', experienced by design and CAD engineers by transforming the classical design paradigm into its fully integrated digital and virtual analog, allowing collaborative development in a semi-immersive virtual environment. This project emphasizes two key components from the classical product design cycle: freeform modeling and analysis. In the freeform modeling stage, content creation in the form of two-handed sculpting of arbitrary objects using polygonal, volumetric or mathematically defined primitives is emphasized, whereas the analysis component provides the tools required for pre- and post-processing steps for finite element analysis tasks applied to the created models.

  6. Computational Workbench for Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Edmonds, Karina

    2007-01-01

    PyCraft is a computer program that provides an interactive, workbench-like computing environment for developing and testing algorithms for multibody dynamics. Examples of multibody dynamic systems amenable to analysis with the help of PyCraft include land vehicles, spacecraft, robots, and molecular models. PyCraft is based on the Spatial-Operator-Algebra (SOA) formulation for multibody dynamics. The SOA operators enable construction of simple and compact representations of complex multibody dynamical equations. Within the PyCraft computational workbench, users can, essentially, use the high-level SOA operator notation to represent a variety of dynamical quantities and algorithms and to perform computations interactively. PyCraft provides a Python-language interface to underlying C++ code. Working with SOA concepts, a user can create and manipulate Python-level operator classes in order to implement and evaluate new dynamical quantities and algorithms. During use of PyCraft, virtually all SOA-based algorithms are available for computational experiments.
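
    The abstract's notion of Python-level operator classes suggests something like the following sketch: operator overloading lets SOA-style expressions be composed and evaluated interactively. The class name and API here are invented for illustration and are not PyCraft's actual interface.

      import numpy as np

      class SpatialOperator:
          """Hypothetical stand-in for a PyCraft-style SOA operator class:
          it wraps a matrix block and overloads * so that operator
          expressions read like the high-level SOA notation."""
          def __init__(self, block):
              self.block = np.asarray(block, dtype=float)
          def __mul__(self, other):
              return SpatialOperator(self.block @ other.block)
          @property
          def T(self):
              return SpatialOperator(self.block.T)

      # Compose a toy operator expression of the form H * M * H^T
      # (the blocks here are random placeholders, not real dynamics data).
      H = SpatialOperator(np.random.rand(3, 6))
      M = SpatialOperator(np.eye(6))
      composite = H * M * H.T
      print(composite.block.shape)  # -> (3, 3)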

  7. Bio and health informatics meets cloud : BioVLab as an example.

    PubMed

    Chae, Heejoon; Jung, Inuk; Lee, Hyungro; Marru, Suresh; Lee, Seong-Whan; Kim, Sun

    2013-01-01

    The exponential increase of genomic data brought by the advent of next- and third-generation sequencing (NGS) technologies and the dramatic drop in sequencing cost have turned biological and medical sciences into data-driven sciences. This revolutionary paradigm shift comes with challenges in terms of data transfer, storage, computation, and analysis of big bio/medical data. Cloud computing is a service model sharing a pool of configurable resources, which makes it a suitable workbench to address these challenges. From the medical or biological perspective, providing computing power and storage is the most attractive feature of cloud computing in handling the ever-increasing biological data. As data increase in size, many research organizations start to experience a lack of computing power, which becomes a major hurdle in achieving research goals. In this paper, we review the features of publicly available bio and health cloud systems in terms of graphical user interface, external data integration, security and extensibility of features. We then discuss issues and limitations of current cloud systems and conclude by suggesting a biological cloud environment concept, which can be defined as a total workbench environment assembling computational tools and databases for analyzing bio/medical big data in particular application domains.

  8. Computational toxicology using the OpenTox application programming interface and Bioclipse

    PubMed Central

    2011-01-01

    Background: Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. Findings: This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplified communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. Conclusions: A novel computational toxicity assessment platform was generated from the integration of two open science platforms related to toxicology: Bioclipse, which combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets through use of the Open Standards from the OpenTox Application Programming Interface. This enables simultaneous access to a variety of distributed predictive toxicology databases and algorithm and model resources, with the Bioclipse workbench handling the technical layers. PMID:22075173
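
    Since OpenTox exposes its models as web services, a client interaction might look like the hedged sketch below, which uses Python's requests library against an invented base URL and resource path; the real OpenTox REST API should be consulted for actual endpoints.

      import requests  # third-party HTTP client (pip install requests)

      # Hypothetical OpenTox-style interaction: POST a compound (as SMILES)
      # to a toxicity-model resource and read back a JSON prediction.
      BASE = "https://opentox.example.org"   # invented base URL

      resp = requests.post(
          f"{BASE}/model/42",                # invented model resource
          data={"compound": "CCO"},          # ethanol, as a SMILES string
          headers={"Accept": "application/json"},
          timeout=30,
      )
      resp.raise_for_status()
      print(resp.json())                     # e.g. a predicted endpoint value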

  9. Mission Critical Computer Resources Management Guide

    DTIC Science & Technology

    1988-09-01

    [OCR residue of a figure listing support-software components (analyzers, generators, compilers, environments, libraries, Ada tools, and a "showroom" system structure); only the following sentence is recoverable:] ...as shown in Figure 13-2. In this model, showrooms of larger, more capable pieces are developed off-line for later integration and use in multiple systems.

  10. T and D-Bench--Innovative Combined Support for Education and Research in Computer Architecture and Embedded Systems

    ERIC Educational Resources Information Center

    Soares, S. N.; Wagner, F. R.

    2011-01-01

    Teaching and Design Workbench (T&D-Bench) is a framework aimed at education and research in the areas of computer architecture and embedded systems. It includes a set of features not found in other educational environments. This set of features is the result of an original combination of design requirements for T&D-Bench: that the…

  11. BioVLAB-MMIA: a cloud environment for microRNA and mRNA integrated analysis (MMIA) on Amazon EC2.

    PubMed

    Lee, Hyungro; Yang, Youngik; Chae, Heejoon; Nam, Seungyoon; Choi, Donghoon; Tangchaisin, Patanachai; Herath, Chathura; Marru, Suresh; Nephew, Kenneth P; Kim, Sun

    2012-09-01

    MicroRNAs, by regulating the expression of hundreds of target genes, play critical roles in developmental biology and the etiology of numerous diseases, including cancer. As a vast amount of microRNA expression profile data is now publicly available, the integration of microRNA expression data sets with gene expression profiles is a key research problem in life science research. However, genome-wide microRNA-mRNA (gene) integration currently requires sophisticated, high-end informatics tools and significant expertise in bioinformatics and computer science to carry out the complex integration analysis. In addition, increased computing infrastructure capabilities are essential in order to accommodate large data sets. In this study, we have extended the BioVLAB cloud workbench to develop an environment for the integrated analysis of microRNA and mRNA expression data, named BioVLAB-MMIA. The workbench facilitates computations on Amazon EC2 and S3 resources orchestrated by the XBaya Workflow Suite. The advantages of BioVLAB-MMIA over the web-based MMIA system include: 1) it is readily expanded as new computational tools become available; 2) it is easily modified by re-configuring graphic icons in the workflow; 3) on-demand cloud computing resources can be used on an "as needed" basis; 4) distributed orchestration supports complex and long-running workflows asynchronously. We believe that BioVLAB-MMIA will be an easy-to-use computing environment for researchers who plan to perform genome-wide microRNA-mRNA (gene) integrated analysis tasks.
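
    Point 3), on-demand use of cloud resources, can be sketched with the AWS SDK for Python (boto3); the AMI ID, instance type, and region below are placeholders, and the real BioVLAB-MMIA orchestration goes through the XBaya Workflow Suite rather than direct SDK calls.

      import boto3  # AWS SDK for Python

      # "As needed" cloud usage: start an instance for a queued analysis job,
      # then terminate it when the job is done. AMI ID, instance type, and
      # region are placeholders.
      ec2 = boto3.client("ec2", region_name="us-east-1")

      run = ec2.run_instances(
          ImageId="ami-0123456789abcdef0",
          InstanceType="m5.xlarge",
          MinCount=1,
          MaxCount=1,
      )
      instance_id = run["Instances"][0]["InstanceId"]
      # ... dispatch the microRNA-mRNA integration workflow here ...
      ec2.terminate_instances(InstanceIds=[instance_id])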

  12. Composable languages for bioinformatics: the NYoSh experiment

    PubMed Central

    Simi, Manuele

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge, while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and language composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context-dependent intentions, and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh, is distributed at http://nyosh.campagnelab.org. PMID:24482760
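
    As a loose analogy (in Python, not in MPS, which NYoSh actually uses), language composition can be imitated with a small embedded pipeline language that end-users extend with new constructs; all names below are invented.

      class Cmd:
          """Tiny embedded 'shell-like' language: commands compose with |
          into pipelines, and users extend the language with subclasses."""
          def __init__(self, *argv):
              self.argv = list(argv)
          def __or__(self, other):
              return Pipeline(self, other)
          def render(self):
              return " ".join(self.argv)

      class Pipeline(Cmd):
          def __init__(self, left, right):
              self.left, self.right = left, right
          def render(self):
              return f"{self.left.render()} | {self.right.render()}"

      class Gzip(Cmd):
          """An end-user extension: a new construct added without touching
          the core classes."""
          def __init__(self):
              super().__init__("gzip", "-c")

      script = Cmd("cat", "reads.fastq") | Gzip()
      print(script.render())  # -> cat reads.fastq | gzip -c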

  13. Composable languages for bioinformatics: the NYoSh experiment.

    PubMed

    Simi, Manuele; Campagne, Fabien

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge, while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and language composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context-dependent intentions, and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh, is distributed at http://nyosh.campagnelab.org.

  14. The coupling of MATISSE and the SE-WORKBENCH: a new solution for simulating efficiently the atmospheric radiative transfer and the sea surface radiation

    NASA Astrophysics Data System (ADS)

    Cathala, Thierry; Douchin, Nicolas; Latger, Jean; Caillault, Karine; Fauqueux, Sandrine; Huet, Thierry; Lubarre, Luc; Malherbe, Claire; Rosier, Bernard; Simoneau, Pierre

    2009-05-01

    The SE-WORKBENCH workshop, also called CHORALE (a French acronym for "simulated Optronic Acoustic Radar battlefield"), is used by the French DGA (MoD) and several other defense organizations and companies around the world to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multispectral 3D scenes that may contain several types of target, and then to generate the physical signal received by a sensor, typically an IR sensor. The SE-WORKBENCH can be used either as a collection of software modules through dedicated GUIs or as an API made of a large number of specialized toolkits. The SE-WORKBENCH is made of several functional blocks: one for geometrically and physically modeling the terrain and the targets, one for building the simulation scenario, and one for rendering the synthetic environment, in both real and non-real time. Among the modules that the modeling block is composed of, SE-ATMOSPHERE is used to simulate the atmospheric conditions of a synthetic environment and then to integrate the impact of these conditions on a scene. This software product generates a physical atmosphere that can be exploited by the SE-WORKBENCH tools that generate spectral images. It relies on several external radiative transfer models, such as MODTRAN V4.2 in the current version. MATISSE [4,5] is a background scene generator developed for the computation of natural background spectral radiance images and useful atmospheric radiative quantities (radiance and transmission along a line of sight, local illumination, solar irradiance ...). Backgrounds include atmosphere, low and high altitude clouds, sea and land. A particular characteristic of the code is its ability to take into account atmospheric spatial variability (temperature, mixing ratio, etc.) along each line of sight. An Application Programming Interface (API) is included to facilitate its use in conjunction with external codes. MATISSE is currently considered as a new external radiative transfer model to be integrated in SE-ATMOSPHERE as a complement to MODTRAN. Compared to the latter, which is used as a whole, MATISSE can be used step by step and modularly as an API: this avoids pre-computing large atmospheric parameter tables, as is currently done with MODTRAN. The use of MATISSE will also enable a real coupling between the ray-tracing process of the SE-WORKBENCH and the radiative transfer model of MATISSE. This will improve the link between a general atmospheric model and a specific 3D terrain. The paper demonstrates the advantages for the SE-WORKBENCH of using MATISSE as a new atmospheric code, but also for computing the radiative properties of the sea surface.
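
    The "transmission along a line of sight" quantity mentioned above is, at its simplest, a Beer-Lambert integral; the sketch below computes it over a discretized path with a made-up extinction profile and is only a conceptual stand-in for MATISSE's far more detailed model.

      import numpy as np

      # Beer-Lambert transmission along a discretized line of sight:
      # tau = exp(-sum(kappa_i * ds)). The extinction profile is invented.
      ds = 100.0                      # path step [m]
      kappa = np.full(500, 2.0e-5)    # extinction coefficient per step [1/m]
      tau = np.exp(-np.sum(kappa * ds))
      print(f"path transmission: {tau:.3f}")  # -> 0.368 for this profile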

  15. A national center for biocomputation: in search of a patient-specific interactive virtual surgery workbench

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Montgomery, K.; Linton, S.; Cheng, R.; Smith, J.

    1998-01-01

    This report describes the three-dimensional imaging and virtual environment technologies developed in NASA's Biocomputation Center for scientific purposes that have now led to applications in the field of medicine. A major goal is to develop a virtual environment surgery workbench for planning complex craniofacial and breast reconstructive surgery, and for training surgeons.

  16. Enabling systematic, harmonised and large-scale biofilms data computation: the Biofilms Experiment Workbench.

    PubMed

    Pérez-Rodríguez, Gael; Glez-Peña, Daniel; Azevedo, Nuno F; Pereira, Maria Olívia; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-03-01

    Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within the human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. The implementation favours free and open-source third-party software, such as the R statistical package, and uses the Web services of the BiofOmics database to enable public experiment deposition. First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web-publishable reports, and the comparison of results between laboratories. A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under the LGPL license. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
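
    The XML-based interchange format is summarized but not specified here, so the sketch below only illustrates the general shape such an encoding might take, using Python's standard library; the element and attribute names are invented, not the actual BEW schema.

      import xml.etree.ElementTree as ET

      # Invented element/attribute names sketching one experiment record.
      exp = ET.Element("experiment", id="E001", lab="LabA")
      cond = ET.SubElement(exp, "condition",
                           strain="P. aeruginosa", medium="TSB")
      meas = ET.SubElement(cond, "measurement",
                           method="crystal-violet", time_h="24")
      meas.text = "0.87"  # e.g. an optical-density reading
      ET.ElementTree(exp).write("experiment.xml",
                                xml_declaration=True, encoding="utf-8")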

  17. UNIX Writer's Workbench: Software for Streamlined Communication.

    ERIC Educational Resources Information Center

    Frase, Lawrence T; Diel, Mary

    1986-01-01

    Discusses computer editing and describes the capacities and features of an integrated software package, Writer's Workbench. Suggests ways in which this program can be used to improve writing skills. Reviews the effects of this program on technical users, college students, and high school students. (ML)

  18. Integration of Dakota into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Lefebvre, Robert A.; Langley, Brandon R.

    2017-07-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.
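
    Conceptually, the sampling study mentioned at the end works as in the Python sketch below: sample the uncertain inputs, evaluate the simulation per sample, and summarize the response. The parameter names, ranges, and toy response are invented; in the real workflow Dakota templates the values into a BISON input file and launches the code.

      import numpy as np

      # Draw 20 samples of two uncertain inputs (invented names/ranges).
      rng = np.random.default_rng(1234)
      n = 20
      conductivity = rng.uniform(2.5, 3.5, size=n)     # W/(m*K)
      gap = rng.uniform(50e-6, 100e-6, size=n)         # m

      responses = []
      for k, g in zip(conductivity, gap):
          # Dakota would template k and g into a BISON input file and run
          # the code here; a toy analytic response stands in for that.
          responses.append(1.0 / k + 1.0e4 * g)
      print(f"mean={np.mean(responses):.3f}  std={np.std(responses):.3f}")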

  19. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (HP9000 SERIES 300/400 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
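
    The resource-file mechanism described above (UI details kept outside the application code and read at run time, so the interface can change without recompiling or relinking) is sketched below in Python as a generic pattern; it is an analogy only, not TAE's actual C/C++ WPT API, and the file contents and keys are invented.

      import json

      # Invented resource data standing in for a WorkBench-generated file:
      # UI details (labels, fonts, colors) live outside the application code,
      # so the interface can change with no recompile.
      RESOURCES = json.loads("""
      {
        "main_window": {"title": "Telemetry", "width": 640, "height": 480},
        "ok_button":   {"label": "OK", "font": "fixed", "color": "gray"}
      }
      """)

      def create_button(name):
          """Build a widget description from the resource file, the way the
          abstract says WPTs read WorkBench resource files at run time."""
          spec = RESOURCES[name]
          print(f"button '{spec['label']}' ({spec['font']}, {spec['color']})")

      create_button("ok_button")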

  20. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.

  1. Scalable web services for the PSIPRED Protein Analysis Workbench.

    PubMed

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
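
    For the XML-RPC access route, a client call might look like the sketch below, using Python's standard xmlrpc.client; the endpoint URL and method names are invented placeholders, not the documented PSIPRED service interface.

      import xmlrpc.client

      # Invented endpoint and method names; consult the documented PSIPRED
      # web services for the real interface.
      server = xmlrpc.client.ServerProxy("http://bioinf.example.org/xmlrpc")
      sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
      job_id = server.submit_psipred(sequence)   # hypothetical method
      result = server.get_result(job_id)         # hypothetical method
      print(result)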

  2. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (HP9000 SERIES 700/800 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  3. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (IBM RS/6000 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  4. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  5. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SILICON GRAPHICS VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  6. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  7. TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (DEC RISC ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.

  8. The environment workbench: A design tool for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Jongeward, Gary A.; Kuharski, Robert A.; Rankin, Thomas V.; Wilcox, Katherine G.; Roche, James C.

    1991-01-01

    The environment workbench (EWB) is being developed for NASA by S-CUBED to provide a standard tool that can be used by the Space Station Freedom (SSF) design and user community for requirements verification. The desktop tool will predict and analyze the interactions of SSF with its natural and self-generated environments. A brief review of the EWB design and capabilities is presented. Calculations using a prototype EWB of the on-orbit floating potentials and contaminant environment of SSF are also presented. Both the positive and negative grounding configurations for the solar arrays are examined to demonstrate the capability of the EWB to provide quick estimates of environments, interactions, and system effects.
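
    As a flavour of the quick floating-potential estimates mentioned above, the sketch below balances electron and ion thermal currents to a surface, a textbook back-of-envelope method. It is not the EWB model, and the plasma parameters are invented.

        import math

        def net_current(phi_volts, te_ev=0.1, ne=1.0e11):
            # Crude planar-collector thermal currents; electrons are repelled
            # (Boltzmann factor) when the surface sits at a negative potential.
            e, m_e, m_i = 1.602e-19, 9.109e-31, 2.657e-26   # O+ ions assumed
            v_e = math.sqrt(8.0 * e * te_ev / (math.pi * m_e))
            v_i = math.sqrt(8.0 * e * te_ev / (math.pi * m_i))
            i_e = 0.25 * e * ne * v_e * math.exp(min(phi_volts / te_ev, 0.0))
            i_i = 0.25 * e * ne * v_i
            return i_i - i_e

        # Bisection: net current is negative at 0 V and positive when strongly
        # negative, so the floating potential lies in between.
        lo, hi = -5.0, 0.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if net_current(lo) * net_current(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        print("estimated floating potential: %.2f V" % mid)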

  9. E-HOSPITAL - A Digital Workbench for Hospital Operations and Services Planning Using Information Technology and Algebraic Languages.

    PubMed

    Gartner, Daniel; Padman, Rema

    2017-01-01

    In this paper, we describe the development of a unified framework and a digital workbench for the strategic, tactical and operational hospital management plan driven by information technology and analytics. The workbench can be used not only by multiple stakeholders in the healthcare delivery setting, but also for pedagogical purposes on topics such as healthcare analytics, services management, and information systems. This tool combines the three classical hierarchical decision-making levels in one integrated environment. At each level, several decision problems can be chosen. Extensions of mathematical models from the literature are presented and incorporated into the digital platform. In a case study using real-world data, we demonstrate how we used the workbench to inform strategic capacity planning decisions in a multi-hospital, multi-stakeholder setting in the United Kingdom.
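
    To illustrate the strategic level such a workbench addresses, here is a minimal bed-capacity sketch. It is a back-of-envelope calculation with invented numbers; the paper's models are richer mathematical programs.

        from math import ceil

        def required_beds(admissions_per_year, mean_los_days, target_occupancy=0.85):
            # Average daily census = admissions * length of stay / 365; dividing
            # by a target occupancy leaves headroom for demand variability.
            census = admissions_per_year * mean_los_days / 365.0
            return ceil(census / target_occupancy)

        for ward, (adm, los) in {"surgery": (12000, 3.2), "medicine": (9000, 5.1)}.items():
            print(ward, required_beds(adm, los))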

  10. Defect modelling in an interactive 3-D CAD environment

    NASA Astrophysics Data System (ADS)

    Reilly, D.; Potts, A.; McNab, A.; Toft, M.; Chapman, R. K.

    2000-05-01

    This paper describes enhancement of the NDT Workbench, as presented at QNDE '98, to include theoretical models for the ultrasonic inspection of smooth planar defects, developed by British Energy and BNFL-Magnox Generation. The Workbench is a PC-based software package for the reconstruction, visualization and analysis of 3-D ultrasonic NDT data in an interactive CAD environment. This extension of the Workbench now provides the user with a well-established modelling approach, coupled with a graphical user interface for: a) configuring the model for flaw size, shape, orientation and location; b) flexible specification of probe parameters; c) selection of scanning surface and scan pattern on the CAD component model; d) presentation of the output as a simulated ultrasound image within the component, or as graphical or tabular displays. The defect modelling facilities of the Workbench can be used for inspection procedure assessment and confirmation of data interpretation, by comparison of overlay images generated from real and simulated data. The modelling technique currently implemented is based on the Geometrical Theory of Diffraction, for simulation of strip-like, circular or elliptical crack responses in the time-harmonic or time-dependent cases. Eventually, the Workbench will also allow modelling using elastodynamic Kirchhoff theory.
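
    The configuration the GUI assembles (flaw geometry, probe parameters, scan pattern) can be pictured as a small data model. The sketch below is illustrative; the field names are invented, not the Workbench's actual data model.

        from dataclasses import dataclass

        @dataclass
        class Flaw:
            shape: str           # "strip", "circular" or "elliptical", per the GTD model
            size_mm: float
            orientation_deg: float
            location_mm: tuple   # (x, y, z) in component coordinates

        @dataclass
        class Probe:
            centre_frequency_mhz: float
            angle_deg: float
            element_diameter_mm: float

        @dataclass
        class SimulationJob:
            flaw: Flaw
            probe: Probe
            scan_pattern: str    # e.g. "raster" over the chosen scanning surface

        job = SimulationJob(Flaw("circular", 5.0, 10.0, (120.0, 40.0, 15.0)),
                            Probe(2.25, 45.0, 10.0), "raster")
        print(job)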

  11. The Nimrod computational workbench: a case study in desktop metacomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramson, D.; Sosic, R.; Foster, I.

    The coordinated use of geographically distributed computers, or metacomputing, can in principle provide more accessible and cost-effective supercomputing than conventional high-performance systems. However, we lack evidence that metacomputing systems can be made easily usable, or that there exist large numbers of applications able to exploit metacomputing resources. In this paper, we present work that addresses both these concerns. The basis for this work is a system called Nimrod that provides a desktop problem-solving environment for parametric experiments. We describe how Nimrod has been extended to support the scheduling of computational resources located in a wide-area environment, and report on an experiment in which Nimrod was used to schedule a large parametric study across the Australian Internet. The experiment provided both new scientific results and insights into Nimrod capabilities. We relate the results of this experiment to lessons learned from the I-WAY distributed computing experiment, and draw conclusions as to how Nimrod and I-WAY-like computing environments should be developed to support desktop metacomputing.
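
    The parametric-experiment pattern Nimrod supports can be sketched as expanding a parameter space into independent jobs and farming them out to available workers. This toy version uses a local process pool; Nimrod itself schedules across wide-area resources.

        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def run_model(params):
            # Stand-in for a real simulation executable invoked per point.
            angle, speed = params
            return (angle, speed, angle * speed)

        if __name__ == "__main__":
            space = list(product([0, 15, 30], [1.0, 2.0]))  # the parameter sweep
            with ProcessPoolExecutor() as pool:
                for result in pool.map(run_model, space):
                    print(result)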

  12. The Microgravity Science Glovebox

    NASA Technical Reports Server (NTRS)

    Baugher, Charles R.; Primm, Lowell (Technical Monitor)

    2001-01-01

    The Microgravity Science Glovebox (MSG) provides scientific investigators the opportunity to implement interactive experiments on the International Space Station. The facility has been designed around the concept of an enclosed scientific workbench that allows the crew to assemble and operate an experimental apparatus with participation from ground-based scientists through real-time data and video links. Workbench utilities provided to operate the experiments include power, data acquisition, computer communications, vacuum, nitrogen, and specialized tools. Because the facility work area is enclosed and held at a negative pressure with respect to the crew living area, the requirements on the experiments for containment of small parts, particulates, fluids, and gases are substantially reduced. This environment allows experiments to be constructed in close parallel with bench-type investigations performed in ground-based laboratories. Such an approach enables experimental scientists to develop hardware that more closely parallels their traditional laboratory experience and to transfer these experiments into meaningful space-based research. When delivered to the ISS, the MSG will represent a significant scientific capability that will be continuously available for a decade of evolutionary research.

  13. AnaBench: a Web/CORBA-based workbench for biomolecular sequence analysis

    PubMed Central

    Badidi, Elarbi; De Sousa, Cristina; Lang, B Franz; Burger, Gertraud

    2003-01-01

    Background: Sequence data analyses such as gene identification, structure modeling or phylogenetic tree inference involve a variety of bioinformatics software tools. Due to the heterogeneity of bioinformatics tools in usage and data requirements, scientists spend much effort on technical issues including data format, storage and management of input and output, and memorization of numerous parameters and multi-step analysis procedures. Results: In this paper, we present the design and implementation of AnaBench, an interactive, Web-based bioinformatics Analysis workBench allowing streamlined data analysis. Our philosophy was to minimize the technical effort not only for the scientist who uses this environment to analyze data, but also for the administrator who manages and maintains the workbench. With new bioinformatics tools published daily, AnaBench permits easy incorporation of additional tools. This flexibility is achieved by employing a three-tier distributed architecture and recent technologies including CORBA middleware, Java, JDBC, and JSP. A CORBA server permits transparent access to a workbench management database, which stores information about the users, their data, as well as the description of all bioinformatics applications that can be launched from the workbench. Conclusion: AnaBench is an efficient and intuitive interactive bioinformatics environment, which offers scientists application-driven, data-driven and protocol-driven analysis approaches. The prototype of AnaBench, managed by a team at the Université de Montréal, is accessible on-line at: . Please contact the authors for details about setting up a local-network AnaBench site elsewhere. PMID:14678565

  14. Computing through Scientific Abstractions in SysBioPSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George; Stephan, Eric G.; Gracio, Deborah K.

    2004-10-13

    Today, biologists and bioinformaticists have a tremendous amount of computational power at their disposal. With the availability of supercomputers, burgeoning scientific databases and digital libraries such as GenBank and PubMed, and pervasive computational environments such as the Grid, biologists have a wealth of computational capabilities and scientific data at hand. Yet, the rapid development of computational technologies has far exceeded the typical biologist’s ability to effectively apply the technology in their research. Computational sciences research and development efforts such as the Biology Workbench, BioSPICE (Biological Simulation Program for Intra-Cellular Evaluation), and BioCoRE (Biological Collaborative Research Environment) are important in connecting biologists and their scientific problems to computational infrastructures. On the Computational Cell Environment and Heuristic Entity-Relationship Building Environment projects at the Pacific Northwest National Laboratory, we are jointly developing a new breed of scientific problem-solving environment called SysBioPSE that will allow biologists to access and apply computational resources in the scientific research context. In contrast to other computational science environments, SysBioPSE operates as an abstraction layer above a computational infrastructure. The goal of SysBioPSE is to allow biologists to apply computational resources in the context of the scientific problems they are addressing and the scientific perspectives from which they conduct their research. More specifically, SysBioPSE allows biologists to capture and represent scientific concepts and theories and experimental processes, and to link these views to scientific applications, data repositories, and computer systems.

  15. CATIA V5 Virtual Environment Support for Constellation Ground Operations

    NASA Technical Reports Server (NTRS)

    Kelley, Andrew

    2009-01-01

    This summer internship primarily involved using CATIA V5 modeling software to design and model parts to support ground operations for the Constellation program. I learned several new CATIA features, including the Imagine and Shape workbench and the Tubing Design workbench, and presented brief workbench lessons to my co-workers. Most modeling tasks involved visualizing design options for Launch Pad 39B operations, including Mobile Launcher Platform (MLP) access and internal access to the Ares I rocket. Other ground support equipment, including a hydrazine servicing cart, a mobile fuel vapor scrubber, a hypergolic propellant tank cart, and a SCAPE (Self Contained Atmospheric Protective Ensemble) suit, was created to aid in the visualization of pad operations.

  16. Using the iPlant collaborative discovery environment.

    PubMed

    Oliver, Shannon L; Lenards, Andrew J; Barthelson, Roger A; Merchant, Nirav; McKay, Sheldon J

    2013-06-01

    The iPlant Collaborative is an academic consortium whose mission is to develop an informatics and social infrastructure to address the "grand challenges" in plant biology. Its cyberinfrastructure supports the computational needs of the research community and facilitates solving major challenges in plant science. The Discovery Environment provides a powerful and rich graphical interface to the iPlant Collaborative cyberinfrastructure by creating an accessible virtual workbench that enables all levels of expertise, ranging from students to traditional biology researchers and computational experts, to explore, analyze, and share their data. By providing access to iPlant's robust data-management system and high-performance computing resources, the Discovery Environment also creates a unified space in which researchers can access scalable tools. Researchers can use available Applications (Apps) to execute analyses on their data, as well as customize or integrate their own tools to better meet the specific needs of their research. These Apps can also be used in workflows that automate more complicated analyses. This module describes how to use the main features of the Discovery Environment, using bioinformatics workflows for high-throughput sequence data as examples. © 2013 by John Wiley & Sons, Inc.

  17. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (DEC VAX ULTRIX VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  18. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  19. TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION WITH MOTIF)

    NASA Technical Reports Server (NTRS)

    TAE SUPPORT OFFICE

    1994-01-01

    TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.

  20. Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM

    NASA Astrophysics Data System (ADS)

    Schwenger, Frédéric; Grossmann, Peter; Malaplate, Alain

    2009-05-01

    System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, a reliable modeling of scene elements is necessary. Software products for modeling of target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well-known material parameters, which makes it suitable for testing and validating object models in the IR. It was used in this study as a test body. Comparison of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded sensor images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench and F-TOM will be discussed.
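
    The first validation step amounts to comparing computed and measured diurnal temperature series. A minimal sketch of such a comparison, with invented numbers, might be:

        def rmse(simulated, measured):
            # Root-mean-square error between paired temperature samples.
            assert len(simulated) == len(measured)
            return (sum((s - m) ** 2 for s, m in zip(simulated, measured))
                    / len(simulated)) ** 0.5

        sim = [12.1, 14.8, 21.3, 26.0, 22.4, 16.2]   # degrees C over a diurnal cycle
        obs = [11.7, 15.2, 20.6, 26.9, 21.8, 15.9]
        print("RMSE: %.2f K" % rmse(sim, obs))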

  1. Collaborative workbench for cyberinfrastructure to accelerate science algorithm development

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Kuo, K.; Lynnes, C.

    2013-12-01

    There are significant untapped resources for information and knowledge creation within the Earth Science community in the form of data, algorithms, services, analysis workflows or scripts, and the related knowledge about these resources. Despite the huge growth in social networking and collaboration platforms, these resources often reside on an investigator's workstation or laboratory and are rarely shared. A major reason for this is that there are very few scientific collaboration platforms, and those that exist typically require the use of a new set of analysis tools and paradigms to leverage the shared infrastructure. As a result, adoption of these collaborative platforms for science research is inhibited by the high cost to an individual scientist of switching from his or her own familiar environment and set of tools to a new environment and tool set. This presentation will describe an ongoing project developing an Earth Science Collaborative Workbench (CWB). The CWB approach will eliminate this barrier by augmenting a scientist's current research environment and tool set to allow him or her to easily share diverse data and algorithms. The CWB will leverage evolving technologies such as commodity computing and social networking to design an architecture for scalable collaboration that will support the emerging vision of an Earth Science Collaboratory. The CWB is being implemented on the robust and open source Eclipse framework and will be compatible with widely used scientific analysis tools such as IDL. The myScience Catalog built into CWB will capture and track metadata and provenance about data and algorithms for the researchers in a non-intrusive manner with minimal overhead. Seamless interfaces to multiple Cloud services will support sharing algorithms, data, and analysis results, as well as access to storage and computer resources. A Community Catalog will track the use of shared science artifacts and manage collaborations among researchers.
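
    Non-intrusive provenance capture of the kind ascribed to the myScience Catalog can be sketched as a wrapper that records what ran, on what inputs, and for how long, without touching the analysis code itself. The catalog below is just an in-memory list; all names are illustrative.

        import functools, time

        CATALOG = []

        def tracked(func):
            # Decorator: record provenance for each call with minimal overhead.
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.time()
                result = func(*args, **kwargs)
                CATALOG.append({"step": func.__name__,
                                "args": repr((args, kwargs)),
                                "seconds": round(time.time() - start, 3)})
                return result
            return wrapper

        @tracked
        def regrid(data, resolution):
            return [x * resolution for x in data]  # stand-in analysis step

        regrid([1, 2, 3], 0.5)
        print(CATALOG)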

  2. Collaborative WorkBench for Researchers - Work Smarter, Not Harder

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Kuo, Kwo-sen; Maskey, Manil; Lynnes, Christopher

    2014-01-01

    It is important to define some commonly used terminology related to collaboration to facilitate clarity in later discussions. We define provisioning as infrastructure capabilities such as computation, storage, data, and tools provided by some agency or similarly trusted institution. Sharing is defined as the process of exchanging data, programs, and knowledge among individuals (often strangers) and groups. Collaboration is a specialized case of sharing. In collaboration, sharing with others (usually known colleagues) is done in pursuit of a common scientific goal or objective. Collaboration entails more dynamic and frequent interactions and can occur at different speeds. Synchronous collaboration occurs in real time, such as editing a shared document on the fly, chatting, or video conferencing, and typically requires a peer-to-peer connection. Asynchronous collaboration is episodic in nature, based on a push-pull model; examples include email exchanges, blogging, and repositories. The purpose of a workbench is to provide a customizable framework for different applications. Since the workbench will be common to all the customized tools, it promotes building modular functionality that can be used and reused by multiple tools. The objective of our Collaborative Workbench (CWB) is thus to create such an open and extensible framework for the Earth Science community via a set of plug-ins. Our CWB is based on the Eclipse [2] Integrated Development Environment (IDE), which is designed as a small kernel containing a plug-in loader for hundreds of plug-ins. The kernel itself is an implementation of a known specification to provide an environment for the plug-ins to execute. This design enables modularity, where discrete chunks of functionality can be reused to build new applications. The minimal set of plug-ins necessary to create a client application is called the Eclipse Rich Client Platform (RCP) [3]. The Eclipse RCP also supports thousands of community-contributed plug-ins, making it a popular development platform for many diverse applications, including the Science Activity Planner developed at JPL for the Mars rovers [4] and the scientific experiment tool Gumtree [5]. By leveraging the Eclipse RCP to provide an open, extensible framework, a CWB supports customizations via plug-ins to build rich user applications specific to Earth Science. More importantly, CWB plug-ins can be used by existing science tools built on Eclipse, such as IDL or PyDev, to provide seamless collaboration functionalities.
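
    A minimal sketch of the kernel-plus-plug-ins design described above: the kernel's only job is to discover and start plug-in contributions. Eclipse does this with OSGi bundles; this Python stand-in uses a plain registry.

        PLUGINS = {}

        def plugin(name):
            # Registration decorator: a contribution announces itself to the kernel.
            def register(cls):
                PLUGINS[name] = cls
                return cls
            return register

        @plugin("catalog")
        class CatalogPlugin:
            def start(self):
                print("catalog plug-in started")

        @plugin("cloud-share")
        class CloudSharePlugin:
            def start(self):
                print("sharing plug-in started")

        for name, cls in PLUGINS.items():   # the kernel's only responsibility
            cls().start()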

  3. Intelligible machine learning with malibu.

    PubMed

    Langlois, Robert E; Lu, Hui

    2008-01-01

    malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust, bug-free software. This workbench handles several well-studied supervised machine learning problems including classification, regression, importance-weighted classification and multiple-instance learning. The malibu interface was designed to create reproducible experiments ideally run in a remote and/or command line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.
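
    Importance-weighted classification, one of the problems malibu handles, lets each training example carry a weight. The sketch below shows the idea with a weighted logistic-regression gradient step in NumPy; it is illustrative, not malibu's implementation.

        import numpy as np

        def weighted_logreg(X, y, w, lr=0.1, steps=500):
            beta = np.zeros(X.shape[1])
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-X @ beta))
                # The weights scale each example's pull on the gradient.
                grad = X.T @ (w * (p - y)) / w.sum()
                beta -= lr * grad
            return beta

        X = np.array([[1, 0.2], [1, 1.5], [1, -0.7], [1, 2.2]])
        y = np.array([0, 1, 0, 1])
        w = np.array([1.0, 3.0, 1.0, 0.5])   # importance weights
        print(weighted_logreg(X, y, w))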

  4. Using bio.tools to generate and annotate workbench tool descriptions

    PubMed Central

    Hillion, Kenzo-Hugo; Kuzmin, Ivan; Khodak, Anton; Rasche, Eric; Crusoe, Michael; Peterson, Hedi; Ison, Jon; Ménager, Hervé

    2017-01-01

    Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is the incomplete or outdated description of tools, which often lack important information including parameters and metadata such as publications or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools that have been registered in the ELIXIR tools registry (https://bio.tools) into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins, and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. This last module can also be used on its own to complete or correct existing tool descriptions with missing metadata. PMID:29333231
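
    The kind of transformation ToolDog performs can be sketched as mapping registry metadata onto a CWL CommandLineTool skeleton. The metadata dict and the output below are simplified illustrations, not ToolDog's actual schema or API.

        import json

        def cwl_skeleton(meta):
            # Map registry-style metadata onto standard CWL document keys.
            return {
                "cwlVersion": "v1.0",
                "class": "CommandLineTool",
                "label": meta["name"],
                "doc": meta.get("description", ""),
                "baseCommand": meta["name"].lower(),
                "inputs": {},    # to be enriched from parameter metadata
                "outputs": {},
            }

        meta = {"name": "ExampleAligner", "description": "Aligns short reads."}
        print(json.dumps(cwl_skeleton(meta), indent=2))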

  5. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution, that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
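
    The SCVM idea of intercepting and rewriting build commands per platform, while preserving the script's flow, can be sketched generically. The rewrite table and commands below are hypothetical, not HWB's actual directive syntax.

        import shlex

        # Hypothetical table: which platform-specific command replaces a
        # generic build command.  Real SCVM directives live in the scripts.
        REWRITES = {"cc": {"cray": "cc", "ibm": "xlc", "linux-cluster": "gcc"}}

        def run_build_command(cmdline, platform):
            argv = shlex.split(cmdline)
            replacement = REWRITES.get(argv[0], {}).get(platform)
            if replacement:
                argv[0] = replacement   # rewrite, preserving the argument flow
            print("would execute:", " ".join(argv))  # subprocess.run(argv) in real use

        run_build_command("cc -O2 -c solver.c", "ibm")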

  6. DyNAMiC Workbench: an integrated development environment for dynamic DNA nanotechnology

    PubMed Central

    Grun, Casey; Werfel, Justin; Zhang, David Yu; Yin, Peng

    2015-01-01

    Dynamic DNA nanotechnology provides a promising avenue for implementing sophisticated assembly processes, mechanical behaviours, sensing and computation at the nanoscale. However, design of these systems is complex and error-prone, because the need to control the kinetic pathway of a system greatly increases the number of design constraints and possible failure modes for the system. Previous tools have automated some parts of the design workflow, but an integrated solution is lacking. Here, we present software implementing a three ‘tier’ design process: a high-level visual programming language is used to describe systems, a molecular compiler builds a DNA implementation and nucleotide sequences are generated and optimized. Additionally, our software includes tools for analysing and ‘debugging’ the designs in silico, and for importing/exporting designs to other commonly used software systems. The software we present is built on many existing pieces of software, but is integrated into a single package—accessible using a Web-based interface at http://molecular-systems.net/workbench. We hope that the deep integration between tools and the flexibility of this design process will lead to better experimental results, fewer experimental design iterations and the development of more complex DNA nanosystems. PMID:26423437
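
    The third tier, sequence generation and optimization, can be given a toy flavour: generate candidates and reject those with extreme GC content or long single-base runs. Real sequence designers apply far richer thermodynamic criteria; the thresholds here are invented.

        import random

        def candidate(length, rng):
            return "".join(rng.choice("ACGT") for _ in range(length))

        def longest_run(seq):
            # Length of the longest stretch of one repeated base.
            best = run = 1
            for a, b in zip(seq, seq[1:]):
                run = run + 1 if a == b else 1
                best = max(best, run)
            return best

        def acceptable(seq, gc_lo=0.4, gc_hi=0.6, max_run=4):
            gc = (seq.count("G") + seq.count("C")) / len(seq)
            return gc_lo <= gc <= gc_hi and longest_run(seq) <= max_run

        rng = random.Random(7)
        picks = [s for s in (candidate(20, rng) for _ in range(200)) if acceptable(s)]
        print(len(picks), picks[:3])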

  7. EWB: The Environment WorkBench Version 4.0

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Environment WorkBench (EWB) is a desktop integrated analysis tool for studying a spacecraft's interactions with its environment. Over 100 environment and analysis models are integrated into the menu-based tool. EWB, which was developed for and under the guidance of the NASA Lewis Research Center, is built atop the Module Integrator and Rule-based Intelligent Analytic Database (MIRIAD) architecture. This allows every module in EWB to communicate information to other modules in a manner transparent from the user's point of view. It removes the tedious and error-prone steps of entering data by hand from one model to another. EWB runs under UNIX operating systems (SGI and SUN workstations) and under MS Windows (3.x, 95, and NT). MIRIAD, the unique software that makes up the core of EWB, provides the flexibility to easily modify old models and incorporate new ones as user needs change. The MIRIAD approach separates the computer-assisted engineering (CAE) tool into three distinct units: 1) A modern graphical user interface to present information; 2) A data dictionary interpreter to coordinate analysis; and 3) A database for storing system designs and analysis results. The user interface is externally programmable through ASCII data files, which contain the location and type of information to be displayed on the screen. This approach provides great flexibility in tailoring the look and feel of the code to individual user needs. MIRIAD-based applications, such as EWB, have utilities for viewing tabulated parametric study data, XY line plots, contour plots, and three-dimensional plots of contour data and system geometries. In addition, a Monte Carlo facility is provided to allow statistical assessments (including uncertainties) in models or data.
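
    The Monte Carlo facility's role, propagating input uncertainty to outputs, can be sketched generically: sample uncertain parameters, run a model, report the spread. The model and uncertainty figures below are invented stand-ins, not EWB modules.

        import random, statistics

        def model(density, drag_coefficient, area=100.0, velocity=7700.0):
            # Stand-in environment model: aerodynamic drag force on a surface.
            return 0.5 * density * drag_coefficient * area * velocity ** 2

        rng = random.Random(1)
        results = [model(rng.gauss(1.0e-12, 2.0e-13), rng.gauss(2.2, 0.1))
                   for _ in range(10000)]
        print("mean %.4g N, std %.2g N" % (statistics.mean(results),
                                           statistics.stdev(results)))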

  8. The UEA sRNA Workbench (version 4.4): a comprehensive suite of tools for analyzing miRNAs and sRNAs.

    PubMed

    Stocks, Matthew B; Mohorianu, Irina; Beckers, Matthew; Paicu, Claudia; Moxon, Simon; Thody, Joshua; Dalmay, Tamas; Moulton, Vincent

    2018-05-02

    RNA interference, a highly conserved regulatory mechanism, is mediated via small RNAs. Recent technical advances have enabled the analysis of larger, more complex datasets and the investigation of microRNAs and the less-studied small interfering RNAs. However, the size and intricacy of current data require a comprehensive set of tools able to discriminate real patterns from low-level, noise-like variation; numerous and varied suggestions from the community represent an invaluable source of ideas for future tools, so the ability of the community to contribute to this software is essential. We present a new version of the UEA sRNA Workbench, reconfigured to allow easy insertion of new tools and workflows. In its released form, it comprises a suite of tools in a user-friendly environment, with enhanced capabilities for comprehensive processing of sRNA-seq data, e.g. tools for accurate prediction of sRNA loci (CoLIde) and miRNA loci (miRCat2), as well as workflows that guide users through common first steps in sRNA-seq analyses such as quality checking of the input data, normalization of abundances and detection of differential expression. The UEA sRNA Workbench is available at: http://srna-workbench.cmp.uea.ac.uk The source code is available at: https://github.com/sRNAworkbenchuea/UEA_sRNA_Workbench. v.moulton@uea.ac.uk.
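
    Normalization of abundances, one of the common first steps mentioned above, is often done as a reads-per-million scaling. A generic sketch (not the Workbench's exact procedure), with invented counts:

        def reads_per_million(counts):
            # Scale raw sRNA read counts so libraries of different depth compare.
            total = sum(counts.values())
            return {srna: c * 1e6 / total for srna, c in counts.items()}

        print(reads_per_million({"miR-1": 1500, "miR-21": 300, "siR-x": 45}))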

  9. Seamless online science workflow development and collaboration using IDL and the ENVI Services Engine

    NASA Astrophysics Data System (ADS)

    Harris, A. T.; Ramachandran, R.; Maskey, M.

    2013-12-01

    The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out of the box. Thus, as part of the NASA-funded project Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within the IDL Workbench. Such additional features are possible because the IDL Workbench is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionality. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL Workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows could be executed in batch mode on large data. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.

  10. GlycoWorkbench: a tool for the computer-assisted annotation of mass spectra of glycans.

    PubMed

    Ceroni, Alessio; Maass, Kai; Geyer, Hildegard; Geyer, Rudolf; Dell, Anne; Haslam, Stuart M

    2008-04-01

    Mass spectrometry is the main analytical technique currently used to address the challenges of glycomics, as it offers unrivalled levels of sensitivity and the ability to handle complex mixtures of different glycan variations. Determination of glycan structures from analysis of MS data is a major bottleneck in high-throughput glycomics projects, and robust solutions to this problem are of critical importance. However, all the approaches currently available have inherent restrictions on the types of glycans they can identify, and none of them has proved to be a definitive tool for glycomics. GlycoWorkbench is a software tool developed by the EUROCarbDB initiative to assist the manual interpretation of MS data. The main task of GlycoWorkbench is to evaluate a set of structures proposed by the user by matching the corresponding theoretical list of fragment masses against the list of peaks derived from the spectrum. The tool provides an easy-to-use graphical interface, a comprehensive and growing set of structural constituents, an exhaustive collection of fragmentation types, and a broad list of annotation options. The aim of GlycoWorkbench is to offer complete support for the routine interpretation of MS data. The software is available for download from: http://www.eurocarbdb.org/applications/ms-tools.
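
    To make the matching step concrete, the short sketch below pairs theoretical fragment masses with observed peaks within a parts-per-million tolerance; the masses and the 5 ppm window are illustrative and do not come from GlycoWorkbench itself.

      # Core idea of spectrum annotation: match theoretical fragment masses
      # against observed peaks within a tolerance. Values are illustrative.

      def match_fragments(theoretical, peaks, ppm_tol=5.0):
          """Return (fragment_mass, peak_mass) pairs agreeing within ppm_tol."""
          matches = []
          for frag in theoretical:
              tol = frag * ppm_tol * 1e-6          # convert ppm to Daltons
              for peak in peaks:
                  if abs(peak - frag) <= tol:
                      matches.append((frag, peak))
          return matches

      theoretical = [528.192, 690.245, 852.297]    # e.g. glycan fragment ion masses
      peaks = [528.190, 852.301, 1014.350]         # centroided spectrum peaks
      print(match_fragments(theoretical, peaks))   # two of three fragments match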

  11. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K. Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-01

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, and training material and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies including mass spectrometry (MS) and nuclear magnetic resonance spectrometry (NMR) data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institutes of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. PMID:26467476

  12. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery because of either developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove, and Ascension Technologies tracker, is currently in development and has already been used to visualize defects preoperatively. In the near future it will be used to plan the surgery more fully and to compute the projected result on the soft tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, networked virtual environment.

  13. Collaborative WorkBench (cwb): Enabling Experiment Execution, Analysis and Visualization with Increased Scientific Productivity

    NASA Astrophysics Data System (ADS)

    Maskey, Manil; Ramachandran, Rahul; Kuo, Kwo-Sen

    2015-04-01

    The Collaborative WorkBench (CWB) has been successfully developed to support collaborative science algorithm development. It incorporates many features that enable and enhance science collaboration, including support for both asynchronous and synchronous modes of interaction. With the former, members of a team can share a full range of research artifacts, e.g. data, code, visualizations, and even virtual machine images. With the latter, they can engage in dynamic interactions such as notification, instant messaging, file exchange, and, most notably, collaborative programming. CWB also implements behind-the-scenes provenance capture as well as version control to relieve scientists of these chores. Furthermore, it has achieved a seamless integration between researchers' local compute environments and those of the Cloud. CWB has also been successfully extended to support instrument verification and validation. The current practice, adopted by almost every researcher, of downloading data to local compute resources for analysis results in much duplication and inefficiency. CWB leverages Cloud infrastructure to provide a central location for data used by an entire science team, thereby eliminating much of this duplication and waste. Furthermore, use of CWB in concert with this same Cloud infrastructure enables co-located analysis with the data, where opportunities for data parallelism can be better exploited, thereby further improving efficiency. With its collaboration-enabling features apposite to steps throughout the scientific process, we expect CWB to fundamentally transform research collaboration and realize maximum science productivity.

  14. Transportable Applications Environment (TAE) Plus: A NASA tool used to develop and manage graphical user interfaces

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1992-01-01

    The Transportable Applications Environment (TAE) Plus was built to support the construction of graphical user interfaces (GUIs) for highly interactive applications, such as real-time processing systems and scientific analysis systems. It is a general-purpose portable tool that includes a 'What You See Is What You Get' WorkBench that allows user interface designers to lay out and manipulate windows and interaction objects. The WorkBench includes both user entry objects (e.g., radio buttons, menus) and data-driven objects (e.g., dials, gauges, stripcharts), which dynamically change based on values of real-time data. Discussed here is what TAE Plus provides, how the implementation has utilized state-of-the-art technologies within graphic workstations, and how it has been used both within and outside NASA.

  15. Remote voice training: A case study on space shuttle applications, appendix C

    NASA Technical Reports Server (NTRS)

    Mollakarimi, Cindy; Hamid, Tamin

    1990-01-01

    The Tile Automation System includes applications of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. An integrated set of rapid prototyping testbeds was developed which include speech recognition and synthesis, laser imaging systems, distributed Ada programming environments, distributed relational data base architectures, distributed computer network architectures, multi-media workbenches, and human factors considerations. Remote voice training in the Tile Automation System is discussed. The user is prompted over a headset by synthesized speech for the training sequences. The voice recognition units and the voice output units are remote from the user and are connected by Ethernet to the main computer system. A supervisory channel is used to monitor the training sequences. Discussions include the training approaches as well as the human factors problems and solutions for this system utilizing remote training techniques.

  16. Neck Pain

    MedlinePlus

    Neck pain Overview Neck pain is a common complaint. Neck muscles can be strained from poor posture — whether it's leaning over your computer or ... workbench. Osteoarthritis also is a common cause of neck pain. Rarely, neck pain can be a symptom of ...

  17. The Generation Challenge Programme Platform: Semantic Standards and Workbench for Crop Science

    PubMed Central

    Bruskiewich, Richard; Senger, Martin; Davenport, Guy; Ruiz, Manuel; Rouard, Mathieu; Hazekamp, Tom; Takeya, Masaru; Doi, Koji; Satoh, Kouji; Costa, Marcos; Simon, Reinhard; Balaji, Jayashree; Akintunde, Akinnola; Mauleon, Ramil; Wanchana, Samart; Shah, Trushar; Anacleto, Mylah; Portugal, Arllet; Ulat, Victor Jun; Thongjuea, Supat; Braak, Kyle; Ritter, Sebastian; Dereeper, Alexis; Skofic, Milko; Rojas, Edwin; Martins, Natalia; Pappas, Georgios; Alamban, Ryan; Almodiel, Roque; Barboza, Lord Hendrix; Detras, Jeffrey; Manansala, Kevin; Mendoza, Michael Jonathan; Morales, Jeffrey; Peralta, Barry; Valerio, Rowena; Zhang, Yi; Gregorio, Sergio; Hermocilla, Joseph; Echavez, Michael; Yap, Jan Michael; Farmer, Andrew; Schiltz, Gary; Lee, Jennifer; Casstevens, Terry; Jaiswal, Pankaj; Meintjes, Ayton; Wilkinson, Mark; Good, Benjamin; Wagner, James; Morris, Jane; Marshall, David; Collins, Anthony; Kikuchi, Shoshi; Metz, Thomas; McLaren, Graham; van Hintum, Theo

    2008-01-01

    The Generation Challenge programme (GCP) is a global crop research consortium directed toward crop improvement through the application of comparative biology and genetic resources characterization to plant breeding. A key consortium research activity is the development of a GCP crop bioinformatics platform to support GCP research. This platform includes the following: (i) shared, public platform-independent domain models, ontology, and data formats to enable interoperability of data and analysis flows within the platform; (ii) web service and registry technologies to identify, share, and integrate information across diverse, globally dispersed data sources, as well as to access high-performance computational (HPC) facilities for computationally intensive, high-throughput analyses of project data; (iii) platform-specific middleware reference implementations of the domain model integrating a suite of public (largely open-access/-source) databases and software tools into a workbench to facilitate biodiversity analysis, comparative analysis of crop genomic data, and plant breeding decision making. PMID:18483570

  18. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools.

    PubMed

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-04

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, and training material and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies including mass spectrometry (MS) and nuclear magnetic resonance spectrometry (NMR) data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institutes of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Computer Applications in Professional Writing: Systems that Analyze and Describe Natural Language.

    ERIC Educational Resources Information Center

    O'Brien, Frank

    Two varieties of user-friendly computer systems that deal with natural language are now available, providing either at-the-monitor stylistic and grammatic correction of keyed-in writing or a sorting, selecting, and generating of statistical data for any written or spoken document. The editor programs, such as "The Writer's Workbench"…

  20. HDX Workbench: Software for the Analysis of H/D Exchange MS Data

    NASA Astrophysics Data System (ADS)

    Pascal, Bruce D.; Willis, Scooter; Lauer, Janelle L.; Landgraf, Rachelle R.; West, Graham M.; Marciano, David; Novick, Scott; Goswami, Devrishi; Chalmers, Michael J.; Griffin, Patrick R.

    2012-09-01

    Hydrogen/deuterium exchange mass spectrometry (HDX-MS) is an established method for the interrogation of protein conformation and dynamics. While the data analysis challenge of HDX-MS has been addressed by a number of software packages, new computational tools are needed to keep pace with the improved methods and throughput of this technique. To address these needs, we report an integrated desktop program titled HDX Workbench, which facilitates automation, management, visualization, and statistical cross-comparison of large HDX data sets. Using the software, validated data analysis can be achieved at the rate of generation. The application is available at the project home page http://hdx.florida.scripps.edu.
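
    As an illustration of the quantity such software computes at scale, the sketch below derives percent relative deuterium uptake from peptide centroid masses, using a fully deuterated control for normalization; all masses are illustrative.

      # Relative deuterium uptake of a peptide from centroid masses,
      # normalized by a fully deuterated control. Values are illustrative.

      def relative_uptake(m_t, m_undeut, m_full):
          """Percent deuterium uptake at time t, corrected with a full-D control."""
          return 100.0 * (m_t - m_undeut) / (m_full - m_undeut)

      m_undeut = 1250.62   # centroid mass, undeuterated peptide
      m_full = 1258.64     # centroid mass, fully deuterated control
      for t, m_t in [(10, 1252.71), (60, 1254.80), (3600, 1257.90)]:
          print(f"{t:5d} s: {relative_uptake(m_t, m_undeut, m_full):5.1f} % D")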

  1. An agile acquisition decision-support workbench for evaluating ISR effectiveness

    NASA Astrophysics Data System (ADS)

    Stouch, Daniel W.; Champagne, Valerie; Mow, Christopher; Rosenberg, Brad; Serrin, Joshua

    2011-06-01

    The U.S. Air Force is consistently evolving to support current and future operations through the planning and execution of intelligence, surveillance and reconnaissance (ISR) missions. However, it is a challenge to maintain a precise awareness of current and emerging ISR capabilities to properly prepare for future conflicts. We present a decision-support tool for acquisition managers to empirically compare ISR capabilities and approaches to employing them, thereby enabling the DoD to acquire ISR platforms and sensors that provide the greatest return on investment. We have developed an analysis environment to perform modeling and simulation-based experiments to objectively compare alternatives. First, the analyst specifies an operational scenario for an area of operations by providing terrain and threat information; a set of nominated collections; sensor and platform capabilities; and processing, exploitation, and dissemination (PED) capacities. Next, the analyst selects and configures ISR collection strategies to generate collection plans. The analyst then defines customizable measures of effectiveness or performance to compute during the experiment. Finally, the analyst empirically compares the efficacy of each solution and generates concise reports to document the conclusions, providing traceable evidence for acquisition decisions. Our capability demonstrates the utility of using a workbench environment for analysts to design and run experiments. Crafting impartial metrics enables the acquisition manager to focus on evaluating solutions based on specific military needs. Finally, the metric and collection plan visualizations provide an intuitive understanding of the suitability of particular solutions. This facilitates a more agile acquisition strategy that can handle rapidly changing technology in response to current military needs.
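
    As an illustration of a customizable measure of effectiveness, the sketch below scores collection plans by the priority-weighted fraction of nominated collections they satisfy; the metric and the data are hypothetical, not drawn from the tool itself.

      # Hypothetical measure of effectiveness: priority-weighted coverage
      # of nominated collections by a candidate collection plan.

      def coverage_moe(nominations, plan):
          """nominations: {target: priority}; plan: set of collected targets."""
          total = sum(nominations.values())
          collected = sum(p for tgt, p in nominations.items() if tgt in plan)
          return collected / total

      nominations = {"bridge": 3, "airfield": 5, "convoy": 2}
      plan_a = {"airfield", "convoy"}
      plan_b = {"bridge", "convoy"}
      print(coverage_moe(nominations, plan_a))  # 0.7 -> plan A scores higher
      print(coverage_moe(nominations, plan_b))  # 0.5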

  2. GP Workbench Manual: Technical Manual, User's Guide, and Software Guide

    USGS Publications Warehouse

    Oden, Charles P.; Moulton, Craig W.

    2006-01-01

    GP Workbench is an open-source general-purpose geophysical data processing software package written primarily for ground penetrating radar (GPR) data. It also includes support for several USGS prototype electromagnetic instruments such as the VETEM and ALLTEM. The two main programs in the package are GP Workbench and GP Wave Utilities. GP Workbench has routines for filtering, gridding, and migrating GPR data, as well as an inversion routine for characterizing UXO (unexploded ordnance) using ALLTEM data. GP Workbench provides two-dimensional (section view) and three-dimensional (plan view or time slice view) processing for GPR data. GP Workbench can produce high-quality graphics for reports when Surfer 8 or higher (Golden Software) is installed. GP Wave Utilities provides a wide range of processing algorithms for single waveforms, such as filtering, correlation, deconvolution, and calculating GPR waveforms. GP Wave Utilities is used primarily for calibrating radar systems and processing individual traces. Both programs also contain research features related to the calibration of GPR systems and the calculation of subsurface waveforms. The software is written to run on the Windows operating systems. GP Workbench can import GPR data file formats used by major commercial instrument manufacturers including Sensors and Software, GSSI, and Mala. The GP Workbench native file format is SU (Seismic Unix), and consequently files generated by GP Workbench can be read by Seismic Unix as well as many other data processing packages.
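
    For readers working with SU data directly, the sketch below reads traces from an SU file using the standard SEG-Y trace-header layout (sample count at byte offset 114, sample interval at offset 116); it assumes native little-endian IEEE floats, which is common on PCs but may not hold for every SU file.

      # Minimal SU (Seismic Unix) trace reader: each trace is a 240-byte
      # binary header followed by ns 4-byte float samples. Little-endian
      # IEEE floats are assumed here.
      import struct

      def read_su_traces(path):
          traces = []
          with open(path, "rb") as f:
              while True:
                  header = f.read(240)
                  if len(header) < 240:
                      break
                  ns = struct.unpack_from("<H", header, 114)[0]  # samples per trace
                  dt = struct.unpack_from("<H", header, 116)[0]  # sample interval, microseconds
                  data = struct.unpack(f"<{ns}f", f.read(4 * ns))
                  traces.append((dt, data))
          return traces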

  3. The research of PSD location method in micro laser welding fields

    NASA Astrophysics Data System (ADS)

    Zhang, Qiue; Zhang, Rong; Dong, Hua

    2010-11-01

    In micro laser welding, in addition to special requirements on the laser parameters, accurately locating the welding points is very important. This work adopts a position sensitive detector (PSD) as the core sensing element and, combined with an optical system, electronic circuits, and PC-based software processing, determines the location of the welding points. The signal-detection circuit uses the special-purpose H-2476 integrated circuit for high-speed, high-sensitivity optical range finding to process the weak signal; its strong noise immunity, combined with a digital filtering algorithm that compensates for non-ideal factors, increases the measurement precision. The amplifier is the LTC6915 programmable-gain amplifier. A two-dimensional stepping-motor-driven workbench, together with the computer and corresponding software, establishes the location of the spot weld, and clamps are designed to suit different workpieces. The system monitors the PSD output signal online while the workbench moves: as the workbench moves in the X direction, the filament offset is detected dynamically, and analysis of the X-axis sampling signal is used to estimate the direction of Y-axis motion and to regulate the Y-axis displacement. The workbench driver is the A3979, an easy-to-operate stepping-motor driver with a built-in translator. The system meets the requirements of location in micro laser welding, with real-time computer control and adjustment, and can generally satisfy a 20 μm micro-welding accuracy requirement. Combined with laser powder cladding technology, it achieves inter-penetration welding of high quality and reliability.
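
    The position computation behind a one-dimensional lateral-effect PSD is simple: the spot location follows from the imbalance of the two electrode photocurrents. The sketch below shows the standard formula; the 10 mm active length is illustrative.

      # Spot position on a 1-D lateral-effect PSD from the two electrode
      # photocurrents; active length is illustrative.

      def psd_position(i1, i2, length_mm=10.0):
          """Spot position (mm from center) on a 1-D PSD of given active length."""
          return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

      print(psd_position(4.0e-6, 6.0e-6))   # 1.0 mm toward electrode 2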

  4. A Problem-Solving Environment for Biological Network Informatics: Bio-Spice

    DTIC Science & Technology

    2007-06-01

    ...provides the user an environment to access software tools. The Dashboard is built upon the NetBeans Integrated Development Environment (IDE), an open-source Java ... based integration platform was demonstrated. During the subsequent six-month development cycle, the first version of the NetBeans-based Bio-SPICE ... frameworks (OAA, NetBeans, and the Systems Biology Workbench (SBW) [15]), it becomes possible for Bio-SPICE tools to truly interoperate. This interoperation ...

  5. An Interactive Multimedia Software Program for Exploring Electrochemical Cells.

    ERIC Educational Resources Information Center

    Greenbowe, Thomas J.

    1994-01-01

    Describes computer-animated sequences and interactive multimedia instructional programs for use in introductory chemistry which allow students to explore electrochemical cells. The workbench section enables students to manipulate the experimental apparatus, chemicals, and instruments in order to design and build an experiment. The interactive…

  6. Increasing Range and Lethality of Extended-Range Munitions (ERMS) using Numerical Weather Prediction (NWP) and the AUV Workbench to Compute a Ballistic Correction (BALCOR)

    DTIC Science & Technology

    2006-12-01

    [Abstract not available: the indexed excerpt consists only of table-of-contents fragments, including "Approach Taken", "FORCEnet", "History of Long-Range Projectiles (LRPs)", and "Numerical Weather Modeling Centers" (Fleet Numerical Meteorological ...).]

  7. Style and Usage Software: Mentor, not Judge.

    ERIC Educational Resources Information Center

    Smye, Randy

    Computer software style and usage checkers can encourage students' recursive revision strategies. For example, HOMER is based on the revision pedagogy presented in Richard Lanham's "Revising Prose," while Grammatik II focuses on readability, passive voice, and possibly misused words or phrases. Writer's Workbench "Style" (a UNIX program) provides…

  8. An Architecture for the Semantic Processing of Natural Language Input to a Policy Workbench

    DTIC Science & Technology

    2003-03-01

    Naval Postgraduate School master's thesis, March 2003, by E. John Custy (B.S.E.E., New Jersey Institute of Technology, 1986; M.A., Cognitive and Neural Systems, Boston University, 1991); thesis advisor J. Bret Michael, co-advisor Neil C. Rowe; approved for public release. [The indexed excerpt contains only report-documentation-page fragments.]

  9. The UEA Small RNA Workbench: A Suite of Computational Tools for Small RNA Analysis.

    PubMed

    Mohorianu, Irina; Stocks, Matthew Benedict; Applegate, Christopher Steven; Folkes, Leighton; Moulton, Vincent

    2017-01-01

    RNA silencing (RNA interference, RNAi) is a complex, highly conserved mechanism mediated by short noncoding RNAs, typically 20-24 nt in length, known as small RNAs (sRNAs). They act as guides for the sequence-specific transcriptional and posttranscriptional regulation of target mRNAs and play a key role in the fine-tuning of biological processes such as growth, response to stresses, and defense mechanisms. High-throughput sequencing (HTS) technologies are employed to capture the expression levels of sRNA populations. The processing of the resulting big data sets has facilitated the computational analysis of sRNA patterns of variation within biological samples such as time-point experiments, tissue series, or various treatments. Rapid technological advances enable larger experiments, often with biological replicates, leading to a vast amount of raw data. As a result, in this fast-evolving field, the existing methods for sequence characterization and prediction of interaction (regulatory) networks periodically require adaptation or, in extreme cases, a complete redesign to cope with the data deluge. In addition, the presence of numerous tools focused only on particular steps of HTS analysis hinders the systematic parsing of the results and their interpretation. The UEA small RNA Workbench (v1-4), described in this chapter, provides a user-friendly, modular, interactive analysis in the form of a suite of computational tools designed to process and mine sRNA datasets for interesting characteristics that can be linked back to the observed phenotypes. First, we show how to preprocess the raw sequencing output and prepare it for downstream analysis. Then we review some quality checks that can be used as a first indication of sources of variability between samples. Next we show how the Workbench can provide a comparison of the effects of different normalization approaches on the distributions of expression, enhanced methods for the identification of differentially expressed transcripts, and a summary of their corresponding patterns. Finally we describe individual analysis tools, such as PAREsnip for the analysis of PARE (degradome) data, or CoLIde for the identification of sRNA loci based on their expression patterns, and the visualization of the results using the software. We illustrate the features of the UEA sRNA Workbench on Arabidopsis thaliana and Homo sapiens datasets.
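
    As a minimal illustration of the preprocessing step, the sketch below trims a 3' adapter, applies 20-24 nt size selection, and collapses reads into nonredundant (sequence, abundance) pairs; the adapter sequence is an example placeholder, and this is not the Workbench's own implementation.

      # Sketch of raw sRNA-seq read preprocessing: adapter trimming, size
      # selection, and duplicate collapsing. Adapter is a placeholder.
      from collections import Counter

      ADAPTER = "TGGAATTCTCGGGTGCCAAGG"  # example small-RNA 3' adapter

      def preprocess(reads, min_len=20, max_len=24):
          counts = Counter()
          for read in reads:
              idx = read.find(ADAPTER[:8])        # seed match on adapter prefix
              if idx != -1:
                  read = read[:idx]               # trim adapter and what follows
              if min_len <= len(read) <= max_len:
                  counts[read] += 1
          return counts                            # nonredundant reads with abundances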

  10. The ALICE System: A Workbench for Learning and Using Language.

    ERIC Educational Resources Information Center

    Levin, Lori; And Others

    1991-01-01

    ALICE, a multimedia framework for intelligent computer-assisted language instruction (ICALI) at Carnegie Mellon University (PA), consists of a set of tools for building a number of different types of ICALI programs in any language. Its Natural Language Processing tools for syntactic error detection, morphological analysis, and generation of…

  11. EPSAT - A workbench for designing high-power systems for the space environment

    NASA Technical Reports Server (NTRS)

    Kuharski, R. A.; Jongeward, G. A.; Wilcox, K. G.; Kennedy, E. M.; Stevens, N. J.; Putnam, R. M.; Roche, J. C.

    1990-01-01

    The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining the performance of power systems in both naturally occurring and self-induced environments. This paper presents the results of the project after two years of a three-year development program. The relevance of the project results for SDI is pointed out, and models of the interaction between the environment and power systems are discussed.

  12. RF Wave Simulation Using the MFEM Open Source FEM Package

    NASA Astrophysics Data System (ADS)

    Stillerman, J.; Shiraiwa, S.; Bonoli, P. T.; Wright, J. C.; Green, D. L.; Kolev, T.

    2016-10-01

    A new plasma wave simulation environment based on the finite element method is presented. MFEM, a scalable open-source FEM library, is used as the basis for this capability. MFEM allows for assembling an FEM matrix of arbitrarily high order in a parallel computing environment. A 3D frequency-domain RF physics layer was implemented using a python wrapper for MFEM, and a cold collisional plasma model was ported. This physics layer allows the plasma RF wave simulation model to be defined without user knowledge of the FEM weak-form formulation. A graphical user interface is built on πScope, a python-based scientific workbench, so that a user can build a model definition file interactively. Benchmark cases have been ported to this new environment, with results consistent with those obtained using COMSOL Multiphysics, GENRAY, and TORIC/TORLH spectral solvers. This work is a first step in bringing to bear the sophisticated computational tool suite that MFEM provides (e.g., adaptive mesh refinement, solver suite, element types) on the linear plasma-wave interaction problem, and within more complicated integrated workflows, such as coupling with a core spectral solver or incorporating additional physics such as an RF sheath potential model or kinetic effects. USDoE Awards DE-FC02-99ER54512, DE-FC02-01ER54648.
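
    For context, the sketch below evaluates the textbook cold-plasma (Stix) dielectric tensor that such an RF physics layer needs, written collisionless for brevity (collisions are commonly folded in through a complex effective mass); it is not code from the MFEM/πScope implementation.

      # Cold-plasma (Stix) dielectric tensor for a magnetized plasma,
      # collisionless form. Textbook physics, illustrative parameters.
      import numpy as np

      EPS0, QE, ME, MP = 8.854e-12, 1.602e-19, 9.109e-31, 1.673e-27

      def stix_tensor(omega, B, species):
          """species: list of (charge, mass, density). Returns 3x3 epsilon_r."""
          S, D, P = 1.0, 0.0, 1.0
          for q, m, n in species:
              wp2 = n * q**2 / (EPS0 * m)       # plasma frequency squared
              wc = q * B / m                    # signed cyclotron frequency
              S -= wp2 / (omega**2 - wc**2)
              D += wc * wp2 / (omega * (omega**2 - wc**2))
              P -= wp2 / omega**2
          return np.array([[S, -1j * D, 0], [1j * D, S, 0], [0, 0, P]])

      # Deuterium plasma at 80 MHz in a 5 T field (illustrative parameters):
      eps = stix_tensor(2 * np.pi * 80e6, 5.0, [(-QE, ME, 1e19), (QE, 2 * MP, 1e19)])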

  13. Automation of Shuttle Tile Inspection - Engineering methodology for Space Station

    NASA Technical Reports Server (NTRS)

    Wiskerchen, M. J.; Mollakarimi, C.

    1987-01-01

    The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.

  14. adwTools Developed: New Bulk Alloy and Surface Analysis Software for the Alloy Design Workbench

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Morse, Jeffrey A.; Noebe, Ronald D.; Abel, Phillip B.

    2004-01-01

    A suite of atomistic modeling software, called the Alloy Design Workbench, has been developed by the Computational Materials Group at the NASA Glenn Research Center and the Ohio Aerospace Institute (OAI). The main goal of this software is to guide and augment experimental materials research and development efforts by creating powerful, yet intuitive, software that combines a graphical user interface with an operating code suitable for real-time atomistic simulations of multicomponent alloy systems. Targeted for experimentalists, the interface is straightforward and requires minimum knowledge of the underlying theory, allowing researchers to focus on the scientific aspects of the work. The centerpiece of the Alloy Design Workbench suite is the adwTools module, which concentrates on the atomistic analysis of surfaces and bulk alloys containing an arbitrary number of elements. An additional module, adwParams, handles ab initio input for the parameterization used in adwTools. Future modules planned for the suite include adwSeg, which will provide numerical predictions for segregation profiles to alloy surfaces and interfaces, and adwReport, which will serve as a window into the database, providing public access to the parameterization data and a repository where users can submit their own findings from the rest of the suite. The entire suite is designed to run on desktop-scale computers. The adwTools module incorporates a custom OAI/Glenn-developed Fortran code based on the BFS (Bozzolo-Ferrante-Smith) method for alloys (ref. 1). The heart of the suite, this code is used to calculate the energetics of different compositions and configurations of atoms.

  15. Making Information Overload Work: The Dragon Software System on a Virtual Reality Responsive Workbench

    DTIC Science & Technology

    1998-03-01

    This report describes the Naval Research Laboratory's Virtual Reality Responsive Workbench (VRRWB) and the Dragon software system, which together address the problem of battle space visualization, and presents the lessons that have been learned. Keywords: interactive graphics, workbench, battle space visualization, virtual reality, user interface.

  16. AstroGrid: Taverna in the Virtual Observatory .

    NASA Astrophysics Data System (ADS)

    Benson, K. M.; Walton, N. A.

    This paper reports on the implementation by AstroGrid of the Taverna workbench, a tool for designing and executing workflows of tasks in the Virtual Observatory. The workflow approach helps astronomers perform complex task sequences with little technical effort. The visual approach to workflow construction streamlines highly complex analyses over public and private data and requires computational resources as minimal as a desktop computer. Some integration issues and future work are discussed in this article.

  17. Application of ANSYS Workbench and CFX at NASA's John C. Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Woods, Jody L.

    2007-01-01

    This viewgraph presentation reviews the overall work of the Stennis Space Center, with particular attention paid to the systems analysis and modeling being done with ANSYS Workbench and CFX. Examples of the analyses done with ANSYS Workbench and CFX and planned analyses are reviewed.

  18. Moving Towards a Science-Driven Workbench for Earth Science Solutions

    NASA Astrophysics Data System (ADS)

    Graves, S. J.; Djorgovski, S. G.; Law, E.; Yang, C. P.; Keiser, K.

    2017-12-01

    The NSF-funded EarthCube Integration and Test Environment (ECITE) prototype was proposed as a 2015 Integrated Activities project and resulted in the prototyping of an EarthCube federated cloud environment and the Integration and Testing Framework. The ECITE team has worked with EarthCube science and technology governance committees to define the types of integration, testing and evaluation necessary to achieve and demonstrate interoperability and functionality that benefit and support the objectives of the EarthCube cyber-infrastructure. The scope of ECITE also includes reaching beyond NSF and EarthCube to work with the broader Earth science community, such as the Earth Science Information Partners (ESIP) to incorporate lessons learned from other testbed activities, and ultimately provide broader community benefits. This presentation will discuss evolving ECITE ideas for a science-driven workbench that will start with documented science use cases, map the use cases to solution scenarios that identify the available technology and data resources that match the use case, the generation of solution workflows and test plans, the testing and evaluation of the solutions in a cloud environment, and finally the documentation of identified technology and data gaps that will assist with driving the development of additional EarthCube resources.

  19. Alloy Design Workbench-Surface Modeling Package Developed

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.

    2003-01-01

    NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this scale. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomena and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum-approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of the proven quantum-approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform-independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest-energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
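
    As a minimal illustration of the first mode, the sketch below runs a Metropolis Monte Carlo search for low-energy configurations on a small two-dimensional binary lattice; a nearest-neighbor pair-energy table stands in for the BFS energetics the package actually uses.

      # Metropolis Monte Carlo on a periodic 2-D binary lattice with an
      # illustrative nearest-neighbor pair-energy model (not BFS).
      import random, math

      E_PAIR = {("A", "A"): -1.0, ("B", "B"): -1.0,
                ("A", "B"): -0.4, ("B", "A"): -0.4}   # illustrative bond energies
      N, kT = 16, 0.3

      def energy(lat):
          e = 0.0
          for i in range(N):
              for j in range(N):                       # right and down neighbors
                  e += E_PAIR[(lat[i][j], lat[i][(j + 1) % N])]
                  e += E_PAIR[(lat[i][j], lat[(i + 1) % N][j])]
          return e

      lat = [[random.choice("AB") for _ in range(N)] for _ in range(N)]
      e = energy(lat)
      for _ in range(20000):
          i, j = random.randrange(N), random.randrange(N)
          old = lat[i][j]
          lat[i][j] = "B" if old == "A" else "A"       # trial species flip
          e_new = energy(lat)
          if e_new > e and random.random() > math.exp((e - e_new) / kT):
              lat[i][j] = old                          # reject uphill move
          else:
              e = e_new                                # accept
      print(e / N**2)                                  # energy per site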

  20. Development of a High Resolution 3D Infant Stomach Model for Surgical Planning

    NASA Astrophysics Data System (ADS)

    Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.

    Medical surgical procedures have changed little during the past century, owing to the lack of an accurate, low-cost workbench for testing proposed improvements. Increasingly cheap and powerful computer technology has made computer-based surgery planning and training feasible. In this work, we have developed an accurate 3D stomach model that aims to improve the surgical procedure treating infant pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model from in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is inexpensive but has low spatial resolution. To improve the model's accuracy, we use the high-resolution Visible Human Project (VHP) data in model building. Next, we add soft muscle material properties to make the 3D model deformable. We then use virtual reality techniques such as haptic devices to make the 3D stomach model deform upon touching force. This accurate 3D stomach model provides a workbench for testing new GERD-treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.

  1. A Computer Text Analysis of Four Cohesion Devices in English Discourse by Native and Nonnative Writers.

    ERIC Educational Resources Information Center

    Reid, Joy

    1992-01-01

    In a contrastive rhetoric study of nonnative English speakers, 768 essays written in English by native speakers of Arabic, Chinese, Spanish, and English were examined using the Writer's Workbench program to determine whether distinctive, quantifiable differences in the use of 4 cohesion devices existed among the 4 language backgrounds. (Author/LB)

  2. A Comparison, for Teaching Purposes, of Three Data-Acquisition Systems for the Macintosh.

    ERIC Educational Resources Information Center

    Swanson, Harold D.

    1990-01-01

    Three commercial products for data acquisition with the Macintosh computer, known by the trade names of LabVIEW, Analog Connection WorkBench, and MacLab were reviewed and compared, on the basis of actual trials, for their suitability in physiological and biological teaching laboratories. Suggestions for using these software packages are provided.…

  3. Research on the thickness control method of workbench oil film based on theoretical model

    NASA Astrophysics Data System (ADS)

    Pei, Tang; Lin, Lin; Liu, Ge; Yu, Liping; Xu, Zhen; Zhao, Di

    2018-06-01

    To improve the adjustability of workbench oil film thickness, we designed a software system to control the oil film thickness based on the Siemens 840D sl CNC system and set up an experimental platform. A regulation scheme for oil film thickness based on a theoretical model is proposed, and its accuracy and feasibility are demonstrated by experimental results. The method meets the demands of workbench oil-film thickness control; the experiment is simple and efficient, with high control precision. This work supplies reliable theoretical support for the development of active control systems for workbench oil films.
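
    A hypothetical sketch of such model-based regulation is given below: a theoretical model supplies a feed-forward flow for the desired thickness, and an integral term trims the residual error. The plant interface, model, gains, and setpoint are all illustrative, not the paper's.

      # Hypothetical model-based thickness regulation: feed-forward from an
      # inverse model plus integral correction. All values are illustrative.

      def model_flow_for_thickness(h_um):
          return 0.8 * h_um                        # hypothetical inverse model (L/min)

      def regulate(h_set, read_thickness, set_flow, ki=0.05, steps=100):
          flow = model_flow_for_thickness(h_set)   # feed-forward from the model
          for _ in range(steps):
              set_flow(flow)                       # command the pump
              err = h_set - read_thickness()       # measured thickness error (um)
              flow += ki * err                     # integral trim of model error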

  4. ARC integration into the NEAMS Workbench

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stauff, N.; Gaughan, N.; Kim, T.

    2017-01-01

    One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high-fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.

  5. Collaborative Visualization Project: shared-technology learning environments for science learning

    NASA Astrophysics Data System (ADS)

    Pea, Roy D.; Gomez, Louis M.

    1993-01-01

    Project-enhanced science learning (PESL) provides students with opportunities for 'cognitive apprenticeships' in authentic scientific inquiry using computers for data collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at-a-distance (beyond the boundaries of the individual school), which is limited today to asynchronous, text-only networking that is unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources including two-way video links to collaborators, mentors, museum exhibits, and media-rich resources such as scientific visualization graphics) and refines enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and discuss national networked access to science education expertise.

  6. Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration.

    PubMed

    Sauro, Herbert M; Hucka, Michael; Finney, Andrew; Wellock, Cameron; Bolouri, Hamid; Doyle, John; Kitano, Hiroaki

    2003-01-01

    Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components, written in diverse programming languages and running on different platforms, to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high-performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
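
    The sketch below illustrates the general pattern of length-prefixed binary messaging over a socket that this description evokes; the 4-byte framing and the commented connection details are generic placeholders, not SBW's actual wire protocol.

      # Generic length-prefixed binary message exchange over a socket.
      # The framing here is illustrative, not SBW's actual wire format.
      import socket, struct

      def send_message(sock, payload: bytes):
          sock.sendall(struct.pack(">I", len(payload)) + payload)

      def recv_message(sock) -> bytes:
          header = sock.recv(4)
          (length,) = struct.unpack(">I", header)
          data = b""
          while len(data) < length:                 # read until frame is complete
              data += sock.recv(length - len(data))
          return data

      # sock = socket.create_connection(("localhost", 9999))  # hypothetical broker address
      # send_message(sock, b"call Jarnac.loadSBML ...")       # illustrative request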

  7. LOSITAN: a workbench to detect molecular adaptation based on a Fst-outlier method.

    PubMed

    Antao, Tiago; Lopes, Ana; Lopes, Ricardo J; Beja-Pereira, Albano; Luikart, Gordon

    2008-07-28

    Testing for selection is becoming one of the most important steps in the analysis of multilocus population genetics data sets. Existing applications are difficult to use, leaving many non-trivial, error-prone tasks to the user. Here we present LOSITAN, a selection detection workbench based on a well-evaluated Fst-outlier detection method. LOSITAN greatly facilitates correct approximation of model parameters (e.g., genome-wide average, neutral Fst) and provides data import and export functions, iterative contour smoothing, and generation of graphics in an easy-to-use graphical user interface. LOSITAN is able to use modern multi-core processor architectures by locally parallelizing fdist, reducing computation time by half on current dual-core machines, with almost linear performance gains on machines with more cores. LOSITAN makes selection detection feasible for a much wider range of users, even for large population genomic datasets, by providing both an easy-to-use interface and the essential functionality to complete the whole selection detection process.
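
    For reference, the statistic screened for outliers is Wright's Fst; the sketch below computes it at a single biallelic locus from subpopulation allele frequencies as (Ht - Hs) / Ht, with illustrative frequencies.

      # Wright's Fst at one biallelic locus from subpopulation allele
      # frequencies (equal subpopulation sizes assumed). Values illustrative.

      def fst(freqs):
          """freqs: allele-A frequency in each subpopulation."""
          p_bar = sum(freqs) / len(freqs)
          ht = 2 * p_bar * (1 - p_bar)                    # total expected heterozygosity
          hs = sum(2 * p * (1 - p) for p in freqs) / len(freqs)
          return (ht - hs) / ht

      print(fst([0.2, 0.8]))   # strong differentiation -> Fst = 0.36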

  8. Integrated Sensitivity Analysis Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
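
    As a minimal illustration of what a sensitivity analysis computes, the sketch below takes one-at-a-time finite-difference derivatives of a model output with respect to each input; the toy response function stands in for a complex engineering simulation.

      # One-at-a-time local sensitivity analysis via finite differences.
      # The model function is a stand-in for a complex simulation.

      def sensitivities(model, x0, rel_step=1e-6):
          base = model(x0)
          sens = []
          for i, xi in enumerate(x0):
              h = rel_step * max(abs(xi), 1.0)
              xp = list(x0)
              xp[i] = xi + h
              sens.append((model(xp) - base) / h)       # dY/dx_i at x0
          return sens

      model = lambda x: x[0] ** 2 + 3.0 * x[1]          # toy engineering response
      print(sensitivities(model, [2.0, 5.0]))           # approximately [4.0, 3.0]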

  9. WormQTL—public archive and analysis web portal for natural variation data in Caenorhabditis spp

    PubMed Central

    Snoek, L. Basten; Van der Velde, K. Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O.; Poulin, Gino B.; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C.; Kammenga, Jan E.; Swertz, Morris A.

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a valuable amount of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype–phenotype linkage and association mapping based on but not limited to R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and R statistic and web service interfaces for bioinformaticians, based on open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers. PMID:23180786
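
    As an illustration of the kind of linkage scan such portals automate (R/qtl provides far richer models), the sketch below performs single-marker regression and reports a LOD score per marker; the data shapes are illustrative.

      # Single-marker QTL scan: regress phenotype on genotype at each
      # marker and report a LOD score. Shapes are illustrative.
      import numpy as np

      def lod_scan(genotypes, phenotype):
          """genotypes: (n_strains, n_markers) 0/1 matrix; phenotype: (n,)."""
          n = len(phenotype)
          rss0 = np.sum((phenotype - phenotype.mean()) ** 2)   # null model fit
          lods = []
          for g in genotypes.T:
              X = np.column_stack([np.ones(n), g])
              beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
              rss1 = np.sum((phenotype - X @ beta) ** 2)
              lods.append((n / 2) * np.log10(rss0 / rss1))
          return np.array(lods)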

  10. WormQTL--public archive and analysis web portal for natural variation data in Caenorhabditis spp.

    PubMed

    Snoek, L Basten; Van der Velde, K Joeri; Arends, Danny; Li, Yang; Beyer, Antje; Elvin, Mark; Fisher, Jasmin; Hajnal, Alex; Hengartner, Michael O; Poulin, Gino B; Rodriguez, Miriam; Schmid, Tobias; Schrimpf, Sabine; Xue, Feng; Jansen, Ritsert C; Kammenga, Jan E; Swertz, Morris A

    2013-01-01

    Here, we present WormQTL (http://www.wormqtl.org), an easily accessible database enabling search, comparative analysis and meta-analysis of all data on variation in Caenorhabditis spp. Over the past decade, Caenorhabditis elegans has become instrumental for molecular quantitative genetics and the systems biology of natural variation. These efforts have resulted in a valuable amount of phenotypic, high-throughput molecular and genotypic data across different developmental worm stages and environments in hundreds of C. elegans strains. WormQTL provides a workbench of analysis tools for genotype-phenotype linkage and association mapping based on but not limited to R/qtl (http://www.rqtl.org). All data can be uploaded and downloaded using simple delimited text or Excel formats and are accessible via a public web user interface for biologists and R statistic and web service interfaces for bioinformaticians, based on open source MOLGENIS and xQTL workbench software. WormQTL welcomes data submissions from other worm researchers.

  11. GRID-Launcher v.1.0.

    NASA Astrophysics Data System (ADS)

    Deniskina, N.; Brescia, M.; Cavuoti, S.; d'Angelo, G.; Laurino, O.; Longo, G.

    GRID-Launcher v.1.0 was built within the VO-Tech framework as a software interface between the UK ASTROGRID and generic GRID infrastructures, in order to allow any ASTROGRID user to launch computing-intensive tasks on the GRID from the ASTROGRID Workbench or Desktop. Although of general applicability, so far the GRID-Launcher has been tested on a few selected software packages (VONeural-MLP, VONeural-SVM, Sextractor and SWARP) and on the SCOPE-GRID.

  12. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, Noel

    2013-04-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it is required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.
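
    A minimal sketch of this lazy model using the Earth Engine Python API is shown below: method calls only assemble a computation graph, and pixels are processed only when a value is requested with getInfo. It assumes authenticated credentials; the dataset and band names refer to Landsat 8 Collection 2 surface reflectance.

      # Lazy Earth Engine computation: build a graph, then request a value.
      # Assumes the user has authenticated the Earth Engine Python client.
      import ee

      ee.Initialize()
      point = ee.Geometry.Point(-122.26, 37.87)
      median = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
                .filterBounds(point)
                .filterDate("2020-01-01", "2021-01-01")
                .median())                                  # still lazy: no pixels processed yet
      ndvi = median.normalizedDifference(["SR_B5", "SR_B4"])
      stats = ndvi.reduceRegion(ee.Reducer.mean(), point.buffer(500), scale=30)
      print(stats.getInfo())                                # computation happens here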

  13. Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Gorelick, N.

    2012-12-01

    The Google Earth Engine platform is a system designed to enable petabyte-scale, scientific analysis and visualization of geospatial datasets. Earth Engine provides a consolidated environment including a massive data catalog co-located with thousands of computers for analysis. The user-friendly front-end provides a workbench environment to allow interactive data and algorithm development and exploration and provides a convenient mechanism for scientists to share data, visualizations and analytic algorithms via URLs. The Earth Engine data catalog contains a wide variety of popular, curated datasets, including the world's largest online collection of Landsat scenes (> 2.0M), numerous MODIS collections, and many vector-based data sets. The platform provides a uniform access mechanism to a variety of data types, independent of their bands, projection, bit-depth, resolution, etc., facilitating easy multi-sensor analysis. Additionally, a user is able to add and curate their own data and collections. Using a just-in-time, distributed computation model, Earth Engine can rapidly process enormous quantities of geo-spatial data. All computation is performed lazily; nothing is computed until it is required either for output or as input to another step. This model allows real-time feedback and preview during algorithm development, supporting a rapid algorithm development, test, and improvement cycle that scales seamlessly to large-scale production data processing. Through integration with a variety of other services, Earth Engine is able to bring to bear considerable analytic and technical firepower in a transparent fashion, including: AI-based classification via integration with Google's machine learning infrastructure, publishing and distribution at Google scale through integration with the Google Maps API, Maps Engine and Google Earth, and support for in-the-field activities such as validation, ground-truthing, crowd-sourcing and citizen science through the Android Open Data Kit.

  14. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    NASA Astrophysics Data System (ADS)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority uses synthetic image generation to model the operational battlefield of an aircraft as viewed by EO/IR threats. For this purpose, it has extended the SE-Workbench suite from OKTAL-SE with functionality to predict a realistic aircraft IR signature, and it is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries for preparing and visualizing a 3D scene in the EO/IR domain, taking advantage of recent advances in GPU computing techniques. Recent evolutions concern mainly the realistic, physics-based rendering of reflections; the rendering of both radiative and thermal shadows; the use of procedural techniques for managing and rendering very large terrains; the implementation of image-based rendering for dynamic interpolation of static plume signatures; and, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests, and is based on particle-system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs with experimental results, concerning both the flare signature and, above all, the behavior of the stimulated threat.

  15. Software Engineering Laboratory (SEL) programmer workbench phase 1 evaluation

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Phase 1 of the SEL programmer workbench consists of the design of the following three components: communications link, command language processor, and collection of software aids. A brief description, an evaluation, and recommendations are presented for each of these three components.

  16. Bioclipse: an open source workbench for chemo- and bioinformatics.

    PubMed

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl E S

    2007-02-22

    There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D editing, 3D visualization, file format conversion, calculation of chemical properties, and much more, all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems, as it can easily be extended with functionality in any desired direction. Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is fully open to both open source and commercial plugins. Bioclipse is freely available at http://www.bioclipse.net.

  17. 6. INTERIOR VIEW OF NORTH ENTRANCE TO BASEMENT SHOWING WORKBENCH ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. INTERIOR VIEW OF NORTH ENTRANCE TO BASEMENT SHOWING WORKBENCH AT PHOTO LEFT AND ONE OF TWO DOORWAYS TO MAIN BASEMENT AREA AT PHOTO RIGHT. VIEW TO NORTH. - Bishop Creek Hydroelectric System, Control Station, Worker Cottage, Bishop Creek, Bishop, Inyo County, CA

  18. WISARD: workbench for integrated superfast association studies for related datasets.

    PubMed

    Lee, Sungyoung; Choi, Sungkyoung; Qiao, Dandi; Cho, Michael; Silverman, Edwin K; Park, Taesung; Won, Sungho

    2018-04-20

    Mendelian transmission produces phenotypic and genetic relatedness between family members, giving family-based analytical methods an important role in genetic epidemiological studies, from heritability estimation to genetic association analysis. With advances in genotyping technologies, whole-genome sequence data can be utilized for genetic epidemiological studies, and family-based samples may become more useful for detecting de novo mutations. However, genetic analyses employing family-based samples usually suffer from the complexity of the computational/statistical algorithms, and certain types of family designs, such as those incorporating data from extended families, have rarely been used. We present a Workbench for Integrated Superfast Association studies for Related Data (WISARD) programmed in C/C++. WISARD enables fast and comprehensive analysis of SNP-chip and next-generation sequencing data on extended families, with applications from designing genetic studies to summarizing analysis results. In addition, WISARD runs automatically in a fully multithreaded manner, and the integration of R software for visualization makes it more accessible to non-experts. Comparison with existing toolsets showed that WISARD is computationally suitable for integrated analysis of related subjects and that it outperforms existing toolsets. WISARD has also been successfully utilized to analyze the large-scale sequencing dataset of the chronic obstructive pulmonary disease (COPD) study, where we identified multiple genes associated with COPD, demonstrating its practical value.
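
    The family-based quantities such toolsets build on can be illustrated with the classic kinship recursion. The sketch below is not WISARD code; the pedigree and names are invented, and it assumes parents are listed before their children.

    ```python
    from functools import lru_cache

    # Toy pedigree: individual -> (father, mother); founders map to (None, None).
    # Assumption: parents appear before their children in this dict.
    PEDIGREE = {
        "gf": (None, None), "gm": (None, None),
        "dad": ("gf", "gm"), "mom": (None, None),
        "kid1": ("dad", "mom"), "kid2": ("dad", "mom"),
    }

    @lru_cache(maxsize=None)
    def kinship(a, b):
        """Malecot kinship coefficient phi(a, b) via the standard recursion."""
        if a is None or b is None:
            return 0.0
        if a == b:
            fa, mo = PEDIGREE[a]
            return 0.5 + 0.5 * kinship(fa, mo)
        # Recurse on whichever individual appears later in the pedigree,
        # so we never recurse on an ancestor of the other argument.
        order = list(PEDIGREE)
        if order.index(a) < order.index(b):
            a, b = b, a
        fa, mo = PEDIGREE[a]
        return 0.5 * (kinship(fa, b) + kinship(mo, b))

    print(kinship("kid1", "kid2"))  # 0.25 for full siblings of unrelated parents
    ```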

  19. Applications of an OO Methodology and CASE to a DAQ System

    NASA Astrophysics Data System (ADS)

    Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.

    The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software, including model simulation, code generation and application deployment. This paper gives an overview of the method, the CASE tool and the DAQ components which have been developed, and relates our experiences with the method and tool, its integration into our development environment, and the spiral life-cycle it supports.

  20. Grayscale Optical Correlator Workbench

    NASA Technical Reports Server (NTRS)

    Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin

    2006-01-01

    Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on MATLAB binaries for matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
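
    The core operation such a correlator simulates, cross-correlation of an input scene with a filter in the Fourier domain, can be sketched in a few lines. This is a generic illustration in Python rather than GOCWB's MATLAB-based code, and it is not the OT-MACH filter itself.

    ```python
    import numpy as np

    def correlate(scene: np.ndarray, filt: np.ndarray) -> np.ndarray:
        """Circular cross-correlation via FFT: IFFT(FFT(scene) * conj(FFT(filter)))."""
        S = np.fft.fft2(scene)
        F = np.fft.fft2(filt, s=scene.shape)   # zero-pad filter to scene size
        return np.fft.ifft2(S * np.conj(F)).real

    # A bright correlation peak appears where the filter pattern matches the scene.
    scene = np.zeros((64, 64))
    scene[20:24, 30:34] = 1.0                  # a 4x4 "target" in the scene
    filt = np.zeros((64, 64))
    filt[:4, :4] = 1.0                         # filter for that target shape
    peak = np.unravel_index(np.argmax(correlate(scene, filt)), scene.shape)
    print("correlation peak at", peak)         # expected near (20, 30)
    ```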

  1. Modeling of Habitat and Foraging Behavior of Beaked Whales in the Southern California Bight

    DTIC Science & Technology

    2015-09-30

    ...a species label. Data from acoustic line-transect surveys (2008-2011) carried out by NOAA Southwest Fisheries Science Center (Jay Barlow) in...stationary HARP sites. For this purpose we took advantage of the Effects of Sound on Marine Environment (ESME) 2012 Workbench framework (D. Mountain

  2. Synaptic changes in rat maculae in space and medical imaging: the link

    NASA Technical Reports Server (NTRS)

    Ross, M. D.

    1998-01-01

    Two different space life sciences missions (SLS-1 and SLS-2) have demonstrated that the synapses of the hair cells of rat vestibular maculae increase significantly in microgravity. The results also indicate that macular synapses are sensitive to stress. These findings argue that vestibular maculae exhibit neuroplasticity to macroenvironmental and microenvironmental changes. This capability should be clinically relevant to rehabilitative training and/or pharmacological treatments for vestibular disease. The results of this ultrastructural research also demonstrated that type I and type II hair cells are integrated into the same neuronal circuitry. The findings were the basis for development of three-dimensional reconstruction software to learn details of macular wiring. This software, produced for scientific research, has now been adapted to reconstruct the face and skull directly from computerized tomography scans. In collaboration with craniofacial reconstructive surgeons at Stanford University Medical Center, an effort is under way to produce a virtual environment workbench for complex craniofacial surgery. When completed, the workbench will help surgeons train for and simulate surgery. The methods are patient specific. This research illustrates the value of basic research in leading to unanticipated medical applications.

  3. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
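
    Of the two simulation back-ends mentioned, the stochastic one is classically realized with Gillespie's direct method. A minimal sketch for a toy birth-death system follows; it is illustrative only and not Infobiotics Workbench code.

    ```python
    import math
    import random

    # Gillespie direct method for a toy birth-death system:
    #   0 -> X at rate k1;  X -> 0 at rate k2 * X
    def gillespie(x0=0, k1=10.0, k2=0.5, t_end=20.0, seed=42):
        random.seed(seed)
        t, x = 0.0, x0
        trajectory = [(t, x)]
        while t < t_end:
            a1, a2 = k1, k2 * x                        # reaction propensities
            a0 = a1 + a2
            if a0 == 0.0:
                break                                  # no reaction can fire
            t += -math.log(1.0 - random.random()) / a0 # exponential waiting time
            x += 1 if random.random() * a0 < a1 else -1
            trajectory.append((t, x))
        return trajectory

    traj = gillespie()
    print("final state:", traj[-1])  # copy number fluctuates around k1/k2 = 20
    ```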

  4. TERRA REF: Advancing phenomics with high resolution, open access sensor and genomics data

    NASA Astrophysics Data System (ADS)

    LeBauer, D.; Kooper, R.; Burnette, M.; Willis, C.

    2017-12-01

    Automated plant measurement has the potential to improve understanding of genetic and environmental controls on plant traits (phenotypes). The application of sensors and software in the automation of high throughput phenotyping reflects a fundamental shift from labor intensive hand measurements to drone, tractor, and robot mounted sensing platforms. These tools are expected to speed the rate of crop improvement by enabling plant breeders to more accurately select plants with improved yields, resource use efficiency, and stress tolerance. However, there are many challenges facing high throughput phenomics: sensors and platforms are expensive, currently there are few standard methods of data collection and storage, and the analysis of large data sets requires high performance computers and automated, reproducible computing pipelines. To overcome these obstacles and advance the science of high throughput phenomics, the TERRA Phenotyping Reference Platform (TERRA-REF) team is developing an open-access database of high resolution sensor data. TERRA REF is an integrated field and greenhouse phenotyping system that includes: a reference field scanner with fifteen sensors that can generate terabytes of data each day at mm resolution; UAV, tractor, and fixed field sensing platforms; and an automated controlled-environment scanner. These platforms will enable investigation of diverse sensing modalities, and the investigation of traits under controlled and field environments. It is the goal of TERRA REF to lower the barrier to entry for academic and industry researchers by providing high-resolution data, open source software, and online computing resources. Our project is unique in that all data will be made fully public in November 2018, and is already available to early adopters through the beta-user program. We will describe the datasets and how to use them, as well as the databases and computing pipeline and how these can be reused and remixed in other phenomics pipelines. Finally, we will describe the National Data Service workbench, a cloud computing platform that can access the petabyte-scale data while supporting reproducible research.

  5. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    NASA Astrophysics Data System (ADS)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings severely limit the application of virtual reality (VR) technology to time-critical applications as well as employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally, such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to Virtual Environments, focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  6. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

    The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. Initial development of a global approach began in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts emphasize natural language interfaces, expert-system software development associates, and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  7. Oqtans: the RNA-seq workbench in the cloud for complete and reproducible quantitative transcriptome analysis.

    PubMed

    Sreedharan, Vipin T; Schultheiss, Sebastian J; Jean, Géraldine; Kahles, André; Bohnert, Regina; Drewe, Philipp; Mudrakarta, Pramod; Görnitz, Nico; Zeller, Georg; Rätsch, Gunnar

    2014-05-01

    We present Oqtans, an open-source workbench for quantitative transcriptome analysis that is integrated into Galaxy. Its distinguishing features include customizable computational workflows and a modular pipeline architecture that facilitates comparative assessment of tool and data quality. Oqtans integrates an assortment of machine-learning-powered tools into Galaxy, which show performance superior or equal to state-of-the-art tools. Implemented tools comprise a complete transcriptome analysis workflow: short-read alignment, transcript identification/quantification and differential expression analysis. Oqtans and Galaxy facilitate persistent storage, data exchange and documentation of intermediate results and analysis workflows. We illustrate how Oqtans aids the interpretation of data from different experiments in easy-to-understand use cases. Users can easily create their own workflows and extend Oqtans by integrating specific tools. Oqtans is available as (i) a cloud machine image with a demo instance at cloud.oqtans.org, (ii) a public Galaxy instance at galaxy.cbio.mskcc.org, and (iii) a git repository containing all installed software (oqtans.org/git), most of which is also available from (iv) the Galaxy Toolshed and (v) a share string to use along with Galaxy CloudMan.

  8. A Workbench for Discovering Task-Specific Theories of Learning

    DTIC Science & Technology

    1989-03-03

    mind (the cognitive architecture) will not be of much use to educators who wish to perform a cognitive task analysis of their subject matter before...analysis packages that can be added to a cognitive architecture, thus creating a 'workbench' for performing cognitive task analysis. Such tools become...learning theories have been. Keywords: Cognitive task analysis, Instructional design, Cognitive modelling, Learning.

  9. Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.

    PubMed

    Troshin, Peter V; Procter, James B; Barton, Geoffrey J

    2011-07-15

    JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy-to-set-up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters and allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.

  10. Multiagent Work Practice Simulation: Progress and Challenges

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Shaffe, Michael G. (Technical Monitor)

    2001-01-01

    Modeling and simulating complex human-system interactions requires going beyond formal procedures and information flows to analyze how people interact with each other. Such work practices include conversations, modes of communication, informal assistance, impromptu meetings, workarounds, and so on. To make these social processes visible, we have developed a multiagent simulation tool, called Brahms, for modeling the activities of people belonging to multiple groups, situated in a physical environment (geographic regions, buildings, transport vehicles, etc.) consisting of tools, documents, and a computer system. We are finding many useful applications of Brahms for system requirements analysis, instruction, implementing software agents, and as a workbench for relating cognitive and social theories of human behavior. Many challenges remain for representing work practices, including modeling: memory over multiple days, scheduled activities combining physical objects, groups, and locations on a timeline (such as a Space Shuttle mission), habitat vehicles with trajectories (such as the Shuttle), agent movement in 3D space (e.g., inside the International Space Station), agent posture and line of sight, coupled movements (such as carrying objects), and learning (mimicry, forming habits, detecting repetition, etc.).

  11. Multiagent Work Practice Simulation: Progress and Challenges

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten

    2002-01-01

    Modeling and simulating complex human-system interactions requires going beyond formal procedures and information flows to analyze how people interact with each other. Such work practices include conversations, modes of communication, informal assistance, impromptu meetings, workarounds, and so on. To make these social processes visible, we have developed a multiagent simulation tool, called Brahms, for modeling the activities of people belonging to multiple groups, situated in a physical environment (geographic regions, buildings, transport vehicles, etc.) consisting of tools, documents, and computer systems. We are finding many useful applications of Brahms for system requirements analysis, instruction, implementing software agents, and as a workbench for relating cognitive and social theories of human behavior. Many challenges remain for representing work practices, including modeling: memory over multiple days, scheduled activities combining physical objects, groups, and locations on a timeline (such as a Space Shuttle mission), habitat vehicles with trajectories (such as the Shuttle), agent movement in 3D space (e.g., inside the International Space Station), agent posture and line of sight, coupled movements (such as carrying objects), and learning (mimicry, forming habits, detecting repetition, etc.).

  12. TAE Plus: Transportable Applications Environment Plus tools for building graphic-oriented applications

    NASA Technical Reports Server (NTRS)

    Szczur, Martha R.

    1989-01-01

    The Transportable Applications Environment Plus (TAE Plus), developed by NASA's Goddard Space Flight Center, is a portable User Interface Management System (UIMS) which provides an intuitive WYSIWYG WorkBench for prototyping and designing an application's user interface, integrated with tools for efficiently implementing the designed user interface and for effectively managing the user interface while the application is running. During the development of TAE Plus, many design and implementation decisions were based on the state of the art in graphics workstations, windowing systems and object-oriented programming languages. Some of the problems and issues experienced during implementation are discussed. A description of the next development steps planned for TAE Plus is also given.

  13. Network-based collaborative research environment LDRD final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, B.R.; McDonald, M.J.

    1997-09-01

    The Virtual Collaborative Environment (VCE) and Distributed Collaborative Workbench (DCW) are new technologies that make it possible for diverse users to synthesize and share mechatronic, sensor, and information resources. Using these technologies, university researchers, manufacturers, design firms, and others can directly access and reconfigure systems located throughout the world. The architecture for implementing VCE and DCW has been developed based on the proposed National Information Infrastructure or Information Highway and a tool kit of Sandia-developed software. Further enhancements to the VCE and DCW technologies will facilitate access to other mechatronic resources. This report describes characteristics of VCE and DCW and also includes background information about the evolution of these technologies.

  14. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches: their wealth of resources allows us to take on geocomputation tasks which exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture, and a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest Free Open Source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command line interface, which required further development to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (v6.4, 6.5 and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). For this, up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times lasting up to a full 20 CPU-days. The deployment of GRASS GIS on a compute cluster allows our users to tackle GIS tasks previously out of reach of single workstations. In addition, this GRASS GIS cluster implementation will be made available to other users at GFZ in the course of 2012. It will thus become a research utility in the sense of "Software as a Service" (SaaS) and can be seen as our first step towards building a GFZ corporate cloud service.
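
    A minimal sketch of how a scripted geocomputation task might be dispatched to one of the processing queues, assuming GRASS 6's GRASS_BATCH_JOB mechanism and LSF's bsub command; the queue name, paths and grass64 launcher shown here are invented for the example, and this is not the GFZ deployment code.

    ```python
    import os
    import subprocess

    # Hypothetical paths and queue name, for illustration only.
    batch_script = "/home/user/jobs/tsunami_map.sh"   # a GRASS batch job script
    queue = "gis_long"                                # one of the cluster's queues

    env = os.environ.copy()
    env["GRASS_BATCH_JOB"] = batch_script  # GRASS 6.x runs this script, then exits

    # Submit a non-interactive GRASS session to LSF; `bsub -q` selects the queue
    # and `-o` captures the job log. LSF propagates the submission environment.
    subprocess.run(
        ["bsub", "-q", queue, "-o", "job_%J.log",
         "grass64", "-text", "/data/grassdata/med/tsunami"],
        env=env, check=True,
    )
    ```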

  15. Drawings of the Modular Equipment Transporter and Hand Tool Carrier

    NASA Image and Video Library

    1970-10-12

    S70-50762 (November 1970) --- A line drawing illustrating a layout view of the modular equipment transporter (MET) and its equipment. The MET (or Rickshaw, as it has been nicknamed) will be used on the lunar surface for the first time during the Apollo 14 lunar landing mission. The Rickshaw will serve as a portable workbench with a place for the Apollo lunar hand tools (ALHT) and their carrier, three cameras, two sample container bags, a special environment sample container (SESC), a lunar portable magnetometer (LPM) and spare film magazines.

  16. M3MS-16OR0401086 – Report on NEAMS Workbench Support for MOOSE Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefebvre, Robert A.; Langley, Brandon R.; Thompson, Adam B.

    This report summarizes the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Workbench from Oak Ridge National Laboratory (ORNL) and the integration of the MOOSE framework. This report marks the completion of NEAMS milestone M3MS-16OR0401086. This report documents the developed infrastructure to support the MOOSE framework applications, the applications’ results, visualization status, the collaboration that facilitated this progress, and future considerations.

  17. Application of double laser interferometer in the measurement of translational stages' roll characteristics

    NASA Astrophysics Data System (ADS)

    Jin, Tao; Shen, Lu; Ke, Youlong; Hou, Wenmei; Ju, Aisong; Yang, Wei; Luo, Jialin

    2016-10-01

    In order to achieve rapid measurement of the roll-angle error of long-travel translation stages in industry and to study their roll characteristics, this paper presents a small roll-angle measurement system based on laser heterodyne interferometry and uses it to test and study the roll characteristics of a ball-screw linear translation stage, filling a gap in the market. The results show that during operation of the ball-screw linear translation stage the workbench's roll angle changes in a complex manner: its value not only changes with position but also shows different levels of fluctuation, and the fluctuation varies with the workbench's working speed. Because of the non-uniform stiffness of the ball screw, the elastic potential energy stored during the working process is released slowly at the end of each movement, and the workbench takes a certain time of roll fluctuation before it reaches a stable state again.

  18. Widening the adoption of workflows to include human and human-machine scientific processes

    NASA Astrophysics Data System (ADS)

    Salayandia, L.; Pinheiro da Silva, P.; Gates, A. Q.

    2010-12-01

    Scientific workflows capture knowledge in the form of technical recipes to access and manipulate data, helping scientists manage and reuse established expertise to conduct their work. Libraries of scientific workflows are being created in particular fields, e.g., Bioinformatics, where, combined with cyber-infrastructure environments that provide on-demand access to data and tools, they result in powerful workbenches for scientists of those communities. The focus in these particular fields, however, has been more on automating than on documenting scientific processes. As a result, technical barriers have impeded a wider adoption of scientific workflows by scientific communities that do not rely as heavily on cyber-infrastructure and computing environments. Semantic Abstract Workflows (SAWs) are introduced to widen the applicability of workflows as a tool to document scientific recipes or processes. SAWs intend to capture a scientist's perspective about the process of how she or he would collect, filter, curate, and manipulate data to create the artifacts that are relevant to her/his work. In contrast, scientific workflows describe the process from the point of view of how technical methods and tools are used to conduct the work. By focusing on a higher level of abstraction that is closer to a scientist's understanding, SAWs effectively capture the controlled vocabularies that reflect a particular scientific community, as well as the types of datasets and methods used in a particular domain. From there on, SAWs provide the flexibility to adapt to different environments to carry out the recipes or processes. These environments range from manual fieldwork to highly technical cyber-infrastructure environments, such as those already supported by scientific workflows. Two cases, one from Environmental Science and another from Geophysics, are presented as illustrative examples.

  19. PWL 1.0 Personal WaveLab: an object-oriented workbench for seismogram analysis on Windows systems

    NASA Astrophysics Data System (ADS)

    Bono, Andrea; Badiali, Lucio

    2005-02-01

    Personal WaveLab 1.0 is intended as the starting point for an ex novo development of seismic time-series analysis procedures for Windows-based personal computers. Our objective is two-fold. Firstly, being itself a stand-alone application, it allows "basic" analysis of digital or digitised seismic waveforms. Secondly, thanks to its architectural characteristics, it can be the basis for the development of more complex and fully featured applications. An expanded version of PWL, called SisPick!, is currently in use at the Istituto Nazionale di Geofisica e Vulcanologia (Italian Institute of Geophysics and Volcanology) for real-time monitoring with civil-protection purposes. This means that about 90 users tested the application for more than 1 year, making its features more robust and efficient. SisPick! was also employed in the United Nations Nyragongo Project, in Congo, and during the Stromboli emergency in the summer of 2002. The main appeals of the application package are: ease of use, object-oriented design, good computational speed, minimal need of disk space and the complete absence of third-party developed components (including ActiveX). The Windows environment spares the user scripting or complex interaction with the system. The system is in constant development to answer the needs and suggestions of its users. Microsoft Visual Basic 6 source code, installation package, test data sets and documentation are available at no cost.

  20. The Kineticist’s Workbench: Combining Symbolic and Numerical Methods in the Simulation of Chemical Reaction Mechanisms

    DTIC Science & Technology

    1991-06-01

    algorithms (for the analysis of mechanisms), traditional numerical simulation methods, and algorithms that examine the simulation results and reinterpret them in qualitative terms. Moreover, the Workbench can use symbolic procedures to help guide or simplify the task

  1. TCW: Transcriptome Computational Workbench

    PubMed Central

    Soderlund, Carol; Nelson, William; Willer, Mark; Gang, David R.

    2013-01-01

    Background The analysis of transcriptome data involves many steps and various programs, along with organization of large amounts of data and results. Without a methodical approach for storage, analysis and query, the resulting ad hoc analysis can lead to human error, loss of data and results, inefficient use of time, and lack of verifiability, repeatability, and extensibility. Methodology The Transcriptome Computational Workbench (TCW) provides Java graphical interfaces for methodical analysis for both single and comparative transcriptome data without the use of a reference genome (e.g. for non-model organisms). The singleTCW interface steps the user through importing transcript sequences (e.g. Illumina) or assembling long sequences (e.g. Sanger, 454, transcripts), annotating the sequences, and performing differential expression analysis using published statistical programs in R. The data, metadata, and results are stored in a MySQL database. The multiTCW interface builds a comparison database by importing sequence and annotation from one or more single TCW databases, executes the ESTscan program to translate the sequences into proteins, and then incorporates one or more clusterings, where the clustering options are to execute the orthoMCL program, compute transitive closure, or import clusters. Both singleTCW and multiTCW allow extensive query and display of the results, where singleTCW displays the alignment of annotation hits to transcript sequences, and multiTCW displays multiple transcript alignments with MUSCLE or pairwise alignments. The query programs can be executed on the desktop for fastest analysis, or from the web for sharing the results. Conclusion It is now affordable to buy a multi-processor machine, and easy to install Java and MySQL. By simply downloading the TCW, the user can interactively analyze, query and view their data. The TCW allows in-depth data mining of the results, which can lead to a better understanding of the transcriptome. TCW is freely available from www.agcol.arizona.edu/software/tcw. PMID:23874959

  2. TCW: transcriptome computational workbench.

    PubMed

    Soderlund, Carol; Nelson, William; Willer, Mark; Gang, David R

    2013-01-01

    The analysis of transcriptome data involves many steps and various programs, along with organization of large amounts of data and results. Without a methodical approach for storage, analysis and query, the resulting ad hoc analysis can lead to human error, loss of data and results, inefficient use of time, and lack of verifiability, repeatability, and extensibility. The Transcriptome Computational Workbench (TCW) provides Java graphical interfaces for methodical analysis for both single and comparative transcriptome data without the use of a reference genome (e.g. for non-model organisms). The singleTCW interface steps the user through importing transcript sequences (e.g. Illumina) or assembling long sequences (e.g. Sanger, 454, transcripts), annotating the sequences, and performing differential expression analysis using published statistical programs in R. The data, metadata, and results are stored in a MySQL database. The multiTCW interface builds a comparison database by importing sequence and annotation from one or more single TCW databases, executes the ESTscan program to translate the sequences into proteins, and then incorporates one or more clusterings, where the clustering options are to execute the orthoMCL program, compute transitive closure, or import clusters. Both singleTCW and multiTCW allow extensive query and display of the results, where singleTCW displays the alignment of annotation hits to transcript sequences, and multiTCW displays multiple transcript alignments with MUSCLE or pairwise alignments. The query programs can be executed on the desktop for fastest analysis, or from the web for sharing the results. It is now affordable to buy a multi-processor machine, and easy to install Java and MySQL. By simply downloading the TCW, the user can interactively analyze, query and view their data. The TCW allows in-depth data mining of the results, which can lead to a better understanding of the transcriptome. TCW is freely available from www.agcol.arizona.edu/software/tcw.
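
    One of the multiTCW clustering options, transitive closure, amounts to single-linkage grouping of sequences connected by pairwise hits. A minimal union-find sketch of that idea follows; it is illustrative only, not TCW's implementation.

    ```python
    # Transitive-closure clustering of pairwise hits with union-find.
    def cluster(pairs):
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(a, b):
            parent[find(a)] = find(b)

        for a, b in pairs:
            union(a, b)

        clusters = {}
        for x in parent:
            clusters.setdefault(find(x), []).append(x)
        return list(clusters.values())

    hits = [("seqA", "seqB"), ("seqB", "seqC"), ("seqD", "seqE")]
    print(cluster(hits))  # [['seqA', 'seqB', 'seqC'], ['seqD', 'seqE']]
    ```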

  3. Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)

    NASA Astrophysics Data System (ADS)

    Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.

    2005-12-01

    Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computation and 4D visualization in easy-to-use web-based environments. The GEON Network currently allows users to search and register Earth science resources such as data sets (GIS layers, GMT files, geoTIFF images, ASCII files, relational databases, etc.), software applications or ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to conduct better and more comprehensive science and educational studies. For example, the SYNSEIS portlet in GEON enables users to access seismic waveforms from the IRIS Data Management Center in near-real time, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment; copy, visualize and analyze any data sets within the network; and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-Oriented Architecture (SOA), they are also used in other development platforms. One such platform is the Kepler workflow system, which can access web-service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources. Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org.

  4. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  5. A method for gear fatigue life prediction considering the internal flow field of the gear pump

    NASA Astrophysics Data System (ADS)

    Shen, Haidong; Li, Zhiqiang; Qi, Lele; Qiao, Liang

    2018-01-01

    The gear pump is the most widely used positive-displacement hydraulic pump, and it is the main power source of a hydraulic system. Its performance is influenced by many factors, such as the working environment, maintenance, and fluid pressure. Unlike a gear transmission system, the internal flow field of a gear pump has a large impact on gear life, so the internal hydraulics must be considered when predicting gear fatigue life. In this paper, taking a certain aircraft gear pump as the research object and targeting gear contact fatigue, the typical failure mode of gear pumps, a prediction method based on virtual simulation is proposed. The method uses CFD (computational fluid dynamics) software to analyze the pressure distribution of the internal flow field of the gear pump, and constructs a one-way fluid-solid coupling model of the gear to obtain the contact stress of the tooth surface in ANSYS Workbench. Finally, the nominal stress method and Miner's cumulative damage theory are employed to calculate the gear contact fatigue life based on the modified material P-S-N curve. Engineering practice shows that the method is feasible and efficient.
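
    Miner's rule, used in the final step, predicts failure when the cumulative damage D = sum_i n_i / N_i reaches 1, with each N_i read from the S-N curve at the corresponding stress level. A small numeric sketch with invented stress levels and S-N constants, not the paper's data:

    ```python
    # Miner's linear cumulative damage: D = sum(n_i / N_i); failure when D >= 1.
    # S-N curve in Basquin form: N = C / S**m  (C and m invented for illustration).
    C, m = 1e12, 3.0

    def cycles_to_failure(stress_mpa: float) -> float:
        return C / stress_mpa ** m

    # Invented duty cycle: (stress amplitude in MPa, cycles per hour of operation)
    duty = [(400.0, 2000), (300.0, 5000), (200.0, 20000)]

    damage_per_hour = sum(n / cycles_to_failure(s) for s, n in duty)
    print(f"damage per hour: {damage_per_hour:.4f}")
    print(f"predicted life:  {1.0 / damage_per_hour:.1f} hours")
    ```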

  6. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    PubMed

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays an advanced customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  7. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    NASA Astrophysics Data System (ADS)

    Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior

    In this paper, we introduce featsel, a framework for benchmarking feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
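
    The Boolean-lattice view of the search space can be illustrated with a tiny exhaustive walk over all feature subsets. This is not featsel's C++ core; the feature names and the cost table below are invented stand-ins for a real criterion such as mean conditional entropy.

    ```python
    from itertools import combinations

    FEATURES = ["f1", "f2", "f3"]

    # Invented costs for every node of the Boolean lattice of subsets.
    COSTS = {frozenset(): 1.0,
             frozenset({"f1"}): 0.6, frozenset({"f2"}): 0.7,
             frozenset({"f3"}): 0.9,
             frozenset({"f1", "f2"}): 0.3, frozenset({"f1", "f3"}): 0.5,
             frozenset({"f2", "f3"}): 0.65,
             frozenset({"f1", "f2", "f3"}): 0.35}

    def cost(subset: frozenset) -> float:
        return COSTS[subset]

    # Enumerate the lattice level by level (subset size 0..n) and take the
    # minimum-cost node; real algorithms prune this search instead.
    best = min(
        (frozenset(c) for k in range(len(FEATURES) + 1)
         for c in combinations(FEATURES, k)),
        key=cost,
    )
    print(sorted(best), cost(best))  # ['f1', 'f2'] 0.3
    ```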

  8. Elasto-Plastic Behavior of Aluminum Foams Subjected to Compression Loading

    NASA Astrophysics Data System (ADS)

    Silva, H. M.; Carvalho, C. D.; Peixinho, N. R.

    2017-05-01

    The non-linear behavior of uniform-size cellular aluminum foams subjected to compressive loads is investigated, comparing numerical results obtained with the finite element method (FEM) software ANSYS Workbench and ANSYS Mechanical APDL (ANSYS Parametric Design Language). The numerical model is built in AUTODESK INVENTOR, imported into ANSYS and solved by the Newton-Raphson iterative method. Conditions in ANSYS Mechanical and ANSYS Workbench were kept as similar as possible. The numerical results obtained and the differences between the two programs are presented and discussed.

  9. Data mining in bioinformatics using Weka.

    PubMed

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection, which are common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
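
    Weka itself is Java-based; as a rough Python analogue of the workflow this abstract describes (one relational table in, several learners compared on the same problem), the sketch below uses scikit-learn and is not Weka's API.

    ```python
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # One relational table: rows = instances, columns = attributes, plus a class.
    X, y = load_iris(return_X_y=True)

    # Experimental comparison of two learners on the same problem via
    # 10-fold cross-validation, in the spirit of Weka's Experimenter.
    for clf in (DecisionTreeClassifier(random_state=0), GaussianNB()):
        scores = cross_val_score(clf, X, y, cv=10)
        print(type(clf).__name__, f"mean accuracy: {scores.mean():.3f}")
    ```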

  10. Microgravity Science Glovebox (MSG) Space Sciences's Past, Present, and Future on the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Jordan, Lee P.

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double rack facility designed for microgravity investigation handling aboard the International Space Station (ISS). The unique design of the facility allows it to accommodate science and technology investigations in a "workbench" type environment. The MSG facility provides an enclosed working area for investigation manipulation and observation on the ISS, with two levels of containment via physical barrier, negative pressure, and air filtration. The MSG team and facilities provide quick access to space for exploratory and National Lab type investigations to gain an understanding of the role of gravity in the associated physics research areas.

  11. Open discovery: An integrated live Linux platform of Bioinformatics tools

    PubMed Central

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays an advanced customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Availability: Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in PMID:19238235

  12. Launching an EarthCube Interoperability Workbench for Constructing Workflows and Employing Service Interfaces

    NASA Astrophysics Data System (ADS)

    Fulker, D. W.; Pearlman, F.; Pearlman, J.; Arctur, D. K.; Signell, R. P.

    2016-12-01

    A major challenge for geoscientists—and a key motivation for the National Science Foundation's EarthCube initiative—is to integrate data across disciplines, as is necessary for complex Earth-system studies such as climate change. The attendant technical and social complexities have led EarthCube participants to devise a system-of-systems architectural concept. Its centerpiece is a (virtual) interoperability workbench, around which a learning community can coalesce, supported in their evolving quests to join data from diverse sources, to synthesize new forms of data depicting Earth phenomena, and to overcome immense obstacles that arise, for example, from mismatched nomenclatures, projections, mesh geometries and spatial-temporal scales. The full architectural concept will require significant time and resources to implement, but this presentation describes a (minimal) starter kit. With a keep-it-simple mantra this workbench starter kit can fulfill the following four objectives: 1) demonstrate the feasibility of an interoperability workbench by mid-2017; 2) showcase scientifically useful examples of cross-domain interoperability, drawn, e.g., from funded EarthCube projects; 3) highlight selected aspects of EarthCube's architectural concept, such as a system of systems (SoS) linked via service interfaces; 4) demonstrate how workflows can be designed and used in a manner that enables sharing, promotes collaboration and fosters learning. The outcome, despite its simplicity, will embody service interfaces sufficient to construct—from extant components—data-integration and data-synthesis workflows involving multiple geoscience domains. Tentatively, the starter kit will build on the Jupyter Notebook web application, augmented with libraries for interfacing current services (at data centers involved in EarthCube's Council of Data Facilities, e.g.) and services developed specifically for EarthCube and spanning most geoscience domains.

  13. Language workbench user interfaces for data analysis

    PubMed Central

    Benson, Victoria M.

    2015-01-01

    Biological data analysis is frequently performed with command line software. While this practice provides considerable flexibility for computationally savvy individuals, such as investigators trained in bioinformatics, it also creates a barrier to the widespread use of data analysis software by investigators trained as biologists and/or clinicians. Workflow systems such as Galaxy and Taverna have been developed to provide generic user interfaces that can wrap command line analysis software. These solutions are useful for problems that can be solved with workflows and that do not require specialized user interfaces. However, some types of analyses can benefit from custom user interfaces. For instance, developing biomarker models from high-throughput data is a type of analysis that can be expressed more succinctly with specialized user interfaces. Here, we show how Language Workbench (LW) technology can be used to model the biomarker development and validation process. We developed a language that models the concepts of Dataset, Endpoint, Feature Selection Method and Classifier. These high-level language concepts map directly to abstractions that analysts who develop biomarker models are familiar with. We found that user interfaces developed in the Meta-Programming System (MPS) LW provide convenient means to configure a biomarker development project, to train models and to view the validation statistics. We discuss several advantages of developing user interfaces for data analysis with a LW, including increased interface consistency, portability and extension by language composition. The language developed during this experiment is distributed as an MPS plugin (available at http://campagnelab.org/software/bdval-for-mps/). PMID:25755929
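
    As a loose illustration of how those four concepts relate, the sketch below renders them as Python dataclasses. MPS languages are defined within MPS itself, so this is only an analogy; all field names and example values are invented.

    ```python
    from dataclasses import dataclass

    # Invented Python analogues of the language concepts named in the abstract.
    @dataclass
    class Dataset:
        name: str
        path: str            # e.g. a tab-delimited expression matrix

    @dataclass
    class Endpoint:
        column: str          # clinical outcome column to predict

    @dataclass
    class FeatureSelectionMethod:
        name: str            # e.g. "t-test"
        num_features: int

    @dataclass
    class Classifier:
        name: str            # e.g. "SVM"

    @dataclass
    class BiomarkerProject:
        data: Dataset
        endpoint: Endpoint
        selection: FeatureSelectionMethod
        model: Classifier

    project = BiomarkerProject(
        Dataset("cohortA", "cohortA.tsv"), Endpoint("responder"),
        FeatureSelectionMethod("t-test", 50), Classifier("SVM"))
    print(project.model.name)  # the configured classifier for this project
    ```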

  14. Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms

    PubMed Central

    Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju

    2010-01-01

    The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS Workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in the Workbench software. The results of the simulation could be visualized in CFX-Post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022

  15. Hemodynamics model of fluid-solid interaction in internal carotid artery aneurysms.

    PubMed

    Bai-Nan, Xu; Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju

    2011-01-01

    The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS Workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in the Workbench software. The results of the simulation could be visualized in CFX-Post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography.
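
    The CFX/ANSYS exchange described here is a staggered, fixed-point coupling: the fluid solve yields interface forces, the solid solve yields displacements, and the loop repeats until the interface stops moving. A toy one-degree-of-freedom sketch, with both solvers replaced by invented algebraic stand-ins:

    ```python
    # Toy staggered fluid-structure coupling on one interface degree of freedom.
    # Both "solvers" are invented algebraic stand-ins, not CFX or ANSYS.

    def fluid_solver(displacement: float) -> float:
        """Pretend fluid solve: interface force drops as the wall bulges out."""
        return 100.0 - 40.0 * displacement        # N

    def solid_solver(force: float) -> float:
        """Pretend structural solve: linear-elastic wall, d = F / stiffness."""
        return force / 200.0                      # m

    d = 0.0                                       # initial interface displacement
    for it in range(50):
        f = fluid_solver(d)                       # force data: fluid -> solid
        d_new = solid_solver(f)                   # displacement: solid -> fluid
        if abs(d_new - d) < 1e-10:                # interface has stopped moving
            break
        d = 0.5 * d + 0.5 * d_new                 # under-relaxation for stability
    print(f"converged after {it} iterations: d = {d:.6f} m, f = {f:.3f} N")
    ```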

  16. Apache Open Climate Workbench: Building Open Source Climate Science Tools and Community at the Apache Software Foundation

    NASA Astrophysics Data System (ADS)

    Joyce, M.; Ramirez, P.; Boustani, M.; Mattmann, C. A.; Khudikyan, S.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Apache Open Climate Workbench (OCW; https://climate.apache.org/) is a Top-Level Project at the Apache Software Foundation that aims to provide a suite of tools for performing climate science evaluations using model outputs from a multitude of different sources (ESGF, CORDEX, U.S. NCA, NARCCAP) with remote sensing data from NASA, NOAA, and other agencies. Apache OCW is the second NASA project to become a Top-Level Project at the Apache Software Foundation. It grew out of the Jet Propulsion Laboratory's (JPL) Regional Climate Model Evaluation System (RCMES) project, a collaboration between JPL and the University of California, Los Angeles' Joint Institute for Regional Earth System Science and Engineering (JIFRESSE). Apache OCW provides scientists and developers with tools for data manipulation, metrics for dataset comparisons, and a visualization suite. In addition to a powerful low-level API, Apache OCW also supports a web application for quick, browser-controlled evaluations, a command line application for local evaluations, and a virtual machine for isolated experimentation with minimal setup. This talk will look at the difficulties and successes of moving a closed community research project out into the wild world of open source. We'll explore the growing pains Apache OCW went through to become a Top-Level Project at the Apache Software Foundation as well as the benefits gained by opening up development to the broader climate and computer science communities.

  17. Dynamic modeling of environmental risk associated with drilling discharges to marine sediments.

    PubMed

    Durgut, İsmail; Rye, Henrik; Reed, Mark; Smit, Mathijs G D; Ditlevsen, May Kristin

    2015-10-15

    Drilling discharges are complex mixtures of base-fluids, chemicals and particulates, and may, after discharge to the marine environment, result in adverse effects on benthic communities. A numerical model was developed to estimate the fate of drilling discharges in the marine environment, and associated environmental risks. Environmental risk from deposited drilling waste in marine sediments is generally caused by four types of stressors: oxygen depletion, toxicity, burial and change of grain size. In order to properly model these stressors, natural burial, biodegradation and bioturbation processes were also included. Diagenetic equations provide the basis for quantifying environmental risk. These equations are solved numerically by an implicit-central differencing scheme. The sediment model described here is, together with a fate and risk model focusing on the water column, implemented in the DREAM and OSCAR models, both available within the Marine Environmental Modeling Workbench (MEMW) at SINTEF in Trondheim, Norway. Copyright © 2015 Elsevier Ltd. All rights reserved.
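
    The diagenetic equations are solved with an implicit central-differencing scheme. Reduced to a single stressor, oxygen consumption in the upper sediment, the idea can be sketched as a backward-Euler, central-difference solve of dC/dt = D d2C/dx2 - k C; all coefficients below are invented, and this is not the MEMW/DREAM code.

    ```python
    import numpy as np

    # Backward-Euler (implicit) in time, central differences in space, for
    #   dC/dt = D * d2C/dx2 - k * C,   0 <= x <= L  (sediment depth, cm)
    # with fixed O2 at the sediment-water interface and no flux at depth.
    # All coefficients are invented for the illustration.
    D, k = 1e-5, 1e-4                       # diffusivity (cm^2/s), consumption (1/s)
    L, n, dt, steps = 5.0, 50, 3600.0, 24   # 5 cm, 50 cells, hourly steps, 1 day
    C_top = 0.3                             # O2 at the interface (mmol/L)

    dx = L / n
    C = np.zeros(n + 1)
    C[0] = C_top

    r = D * dt / dx**2
    A = np.zeros((n + 1, n + 1))
    A[0, 0] = 1.0                           # Dirichlet row at the interface
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r + k * dt, -r
    A[n, n - 1], A[n, n] = -1.0, 1.0        # no-flux row: C[n] - C[n-1] = 0

    for _ in range(steps):
        b = C.copy()
        b[0], b[n] = C_top, 0.0             # right-hand sides of the boundary rows
        C = np.linalg.solve(A, b)

    print(f"O2 at 1 cm depth after 1 day: {C[10]:.4f} mmol/L")  # cell 10 = 1 cm
    ```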

  18. CyVerse Data Commons: lessons learned in cyberinfrastructure management and data hosting from the Life Sciences

    NASA Astrophysics Data System (ADS)

    Swetnam, T. L.; Walls, R.; Merchant, N.

    2017-12-01

    CyVerse is a US National Science Foundation funded initiative "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use," supporting and enabling cross-disciplinary collaborations across institutions. CyVerse's free, open-source cyberinfrastructure is being adopted in biogeoscience and space sciences research. CyVerse's data-science-agnostic platforms provide shared data storage, high performance computing, and cloud computing that allow analysis of very large data sets (including incomplete or work-in-progress data sets). Part of CyVerse's success has been in addressing the handling of data through its entire lifecycle, from creation to final publication in a digital data repository to reuse in new analyses. CyVerse developers and user communities have learned many lessons that are germane to Earth and Environmental Science. We present an overview of the tools and services available through CyVerse, including: interactive computing with the Discovery Environment (https://de.cyverse.org/), an interactive data science workbench featuring data storage and transfer via the Data Store; cloud computing with Atmosphere (https://atmo.cyverse.org); and access to HPC via the Agave API (https://agaveapi.co/). Each CyVerse service emphasizes access to long-term data storage, including our own Data Commons (http://datacommons.cyverse.org), as well as external repositories. The Data Commons service manages, organizes, preserves, publishes, and allows for discovery and reuse of data. All data published to CyVerse's Curated Data receive a permanent identifier (PID) in the form of a DOI (Digital Object Identifier) or ARK (Archival Resource Key). Data that are more fluid can also be published in the Data Commons through Community Collaborated data. The Data Commons provides landing pages, permanent DOIs or ARKs, and supports data reuse and citation through features such as open data licenses and downloadable citations. The ability to access and compute on data within the CyVerse framework, or with external compute resources when necessary, has proven highly beneficial to our user community, which has grown continuously since the inception of CyVerse nine years ago.

  19. Finite element analyses of a linear-accelerator electron gun

    NASA Astrophysics Data System (ADS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the Computer Aided Three-dimensional Interactive Application (CATIA) for finite element analyses through ANSYS Workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results for the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters at the defined operating temperature. The gun has been operating continuously in the BEPCII linear accelerator since commissioning, without any thermally induced failures.

  20. Finite element analyses of a linear-accelerator electron gun.

    PubMed

    Iqbal, M; Wasy, A; Islam, G U; Zhou, Z

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the Computer Aided Three-dimensional Interactive Application (CATIA) for finite element analyses through ANSYS Workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results for the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters at the defined operating temperature. The gun has been operating continuously in the BEPCII linear accelerator since commissioning, without any thermally induced failures.

  1. A two-dimensional model of water: Solvation of nonpolar solutes

    NASA Astrophysics Data System (ADS)

    Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.

    2002-01-01

    We recently applied a Wertheim integral equation theory (IET) and a thermodynamic perturbation theory (TPT) to the Mercedes-Benz (MB) model of pure water. These analytical theories offer the advantage of being computationally less intensive than Monte Carlo simulations by orders of magnitude. The long-term goal of this work is to develop analytical theories of water that can handle orientation-dependent interactions, and the MB model serves as a simple workbench for this development. Here we apply the IET and TPT to the hydrophobic effect, the transfer of a nonpolar solute into MB water. As before, we find that the theories reproduce the Monte Carlo results quite accurately at higher temperatures, while in cold water they capture the qualitative trends.

  2. Optimisation of Critical Infrastructure Protection: The SiVe Project on Airport Security

    NASA Astrophysics Data System (ADS)

    Breiing, Marcus; Cole, Mara; D'Avanzo, John; Geiger, Gebhard; Goldner, Sascha; Kuhlmann, Andreas; Lorenz, Claudia; Papproth, Alf; Petzel, Erhard; Schwetje, Oliver

    This paper outlines the scientific goals, ongoing work and first results of the SiVe research project on critical infrastructure security. The methodology is generic, while the pilot studies are chosen from airport security. The outline proceeds in three major steps: (1) building a threat scenario, (2) developing simulation models as scenario refinements, and (3) assessing alternatives. Advanced techniques of systems analysis and simulation are employed to model relevant airport structures and processes as well as offences. Computer experiments are carried out to compare and optimise alternative solutions. The optimality analyses draw on approaches to quantitative risk assessment recently developed in the operational sciences. To exploit the advantages of the various techniques, an integrated simulation workbench is being built up in the project.

  3. bold: The Barcode of Life Data System (http://www.barcodinglife.org)

    PubMed Central

    RATNASINGHAM, SUJEEVAN; HEBERT, PAUL D N

    2007-01-01

    The Barcode of Life Data System (bold) is an informatics workbench aiding the acquisition, storage, analysis and publication of DNA barcode records. By assembling molecular, morphological and distributional data, it bridges a traditional bioinformatics chasm. bold is freely available to any researcher with interests in DNA barcoding. By providing specialized services, it aids the assembly of records that meet the standards needed to gain BARCODE designation in the global sequence databases. Because of its web-based delivery and flexible data security model, it is also well positioned to support projects that involve broad research alliances. This paper provides a brief introduction to the key elements of bold, discusses their functional capabilities, and concludes by examining computational resources and future prospects. PMID:18784790

  4. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  5. Toward a workbench for rodent brain image data: systems architecture and design.

    PubMed

    Moene, Ivar A; Subramaniam, Shankar; Darin, Dmitri; Leergaard, Trygve B; Bjaalie, Jan G

    2007-01-01

    We present a novel system for storing and manipulating microscopic images from sections through the brain and higher-level data extracted from such images. The system is designed and built on a three-tier paradigm and provides the research community with a web-based interface for facile use in neuroscience research. The Oracle relational database management system provides the ability to store a variety of objects relevant to the images and provides the framework for complex querying of data stored in the system. Further, the suite of applications intimately tied into the infrastructure in the application layer provides the user with the ability not only to query and visualize the data, but also to perform analysis operations based on the tools embedded in the system. The presentation layer uses extant protocols of the modern web browser, which makes the system easy to use. The present release, named Functional Anatomy of the Cerebro-Cerebellar System (FACCS), available through The Rodent Brain Workbench (http://rbwb.org/), is targeted at the functional anatomy of the cerebro-cerebellar system in rats, and holds axonal tracing data from these projections. The system is extensible to other circuits and projections and to other categories of image data and provides a unique environment for analysis of rodent brain maps in the context of anatomical data. The FACCS application assumes standard animal brain atlas models and can be extended to future models. The system is available both for interactive use from a remote web-browser client as well as for download to a local server machine.

  6. Virome Assembly and Annotation: A Surprise in the Namib Desert

    PubMed Central

    Hesse, Uljana; van Heusden, Peter; Kirby, Bronwyn M.; Olonade, Israel; van Zyl, Leonardo J.; Trindade, Marla

    2017-01-01

    Sequencing, assembly, and annotation of environmental virome samples is challenging. Methodological biases and differences in species abundance result in fragmentary read coverage; sequence reconstruction is further complicated by the mosaic nature of viral genomes. In this paper, we focus on biocomputational aspects of virome analysis, emphasizing latent pitfalls in sequence annotation. Using simulated viromes that mimic environmental data challenges we assessed the performance of five assemblers (CLC-Workbench, IDBA-UD, SPAdes, RayMeta, ABySS). Individual analyses of relevant scaffold length fractions revealed shortcomings of some programs in reconstruction of viral genomes with excessive read coverage (IDBA-UD, RayMeta), and in accurate assembly of scaffolds ≥50 kb (SPAdes, RayMeta, ABySS). The CLC-Workbench assembler performed best in terms of genome recovery (including highly covered genomes) and correct reconstruction of large scaffolds; and was used to assemble a virome from a copper rich site in the Namib Desert. We found that scaffold network analysis and cluster-specific read reassembly improved reconstruction of sequences with excessive read coverage, and that strict data filtering for non-viral sequences prior to downstream analyses was essential. In this study we describe novel viral genomes identified in the Namib Desert copper site virome. Taxonomic affiliations of diverse proteins in the dataset and phylogenetic analyses of circovirus-like proteins indicated links to the marine habitat. Considering additional evidence from this dataset we hypothesize that viruses may have been carried from the Atlantic Ocean into the Namib Desert by fog and wind, highlighting the impact of the extended environment on an investigated niche in metagenome studies. PMID:28167933

  7. The In-Space Soldering Investigation: Research Conducted on the International Space Station in Support of NASA's Exploration Initiative

    NASA Technical Reports Server (NTRS)

    Grugel, R. N.; Fincke, M.; Sergre, P. N.; Ogle, J. A.; Funkhouser, G.; Parris, F.; Murphy, L.; Gillies, D.; Hua, F.

    2004-01-01

    Soldering is a well established joining and repair process that is of particular importance in the electronics industry. Still, internal solder joint defects such as porosity are prevalent and compromise desired properties such as electrical/thermal conductivity and fatigue strength. Soldering equipment resides aboard the International Space Station (ISS) and will likely accompany Exploration Missions during transit to, as well as on, the moon and Mars. Unfortunately, detrimental porosity appears to be enhanced in lower gravity environments. To this end, the In-Space Soldering Investigation (ISSI) is being conducted in the Microgravity Workbench Area (MWA) aboard the ISS as "Saturday Science" with the goal of promoting our understanding of joining techniques, shape equilibrium, wetting phenomena, and microstructural development in a microgravity environment. The work presented here will focus on direct observation of melting dynamics and shape determination in comparison to ground-based samples, with implications made to processing in other low-gravity environments. Unexpected convection effects, masked on Earth, will also be shown as well as the value of the ISS as a research platform in support of Exploration Missions.

  8. The In-Space Soldering Investigation: To Date Analysis of Experiments Conducted on the International Space Station

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.; Gillies, D. C.; Hua, F.; Anilkumar, A.

    2006-01-01

    Soldering is a well established joining and repair process that is of particular importance in the electronics industry. Still, internal solder joint defects such as porosity are prevalent and compromise desired properties such as electrical/thermal conductivity and fatigue strength. Soldering equipment resides aboard the International Space Station (ISS) and will likely accompany Exploration Missions during transit to, as well as on, the moon and Mars. Unfortunately, detrimental porosity appears to be enhanced in lower gravity environments. To this end, the In-Space Soldering Investigation (ISSI) is being conducted in the Microgravity Workbench Area (MWA) aboard the ISS as "Saturday Science" with the goal of promoting our understanding of joining techniques, shape equilibrium, wetting phenomena, and microstructural development in a microgravity environment. The work presented here will focus on direct observation of melting dynamics and shape determination in comparison to ground-based samples, with implications made to processing in other low-gravity environments. Unexpected convection effects, masked on Earth, will also be shown as well as the value of the ISS as a research platform in support of Exploration Missions.

  9. Navy Requirements for Controlling Multiple Off-Board Robots Using the Autonomous Unmanned Vehicle Workbench

    DTIC Science & Technology

    2007-06-01

    [The indexed text consists of table-of-contents fragments only: Thesis Organization; Validation; XSLT; X3D Earth; Using AUVW for Simulation.]

  10. Laminar forced convection from a rotating horizontal cylinder in cross flow

    NASA Astrophysics Data System (ADS)

    Chandran, Prabul; Venugopal, G.; Jaleel, H. Abdul; Rajkumar, M. R.

    2017-04-01

    The influence of the non-dimensional rotational velocity, the flow Reynolds number and the Prandtl number of the fluid on laminar forced convection from a rotating horizontal cylinder subject to a constant heat flux boundary condition is numerically investigated. The numerical simulations were conducted using the commercial computational fluid dynamics package CFX, available in ANSYS Workbench 14. Results are presented for the non-dimensional rotational velocity α ranging from 0 to 4, flow Reynolds numbers from 25 to 40, and fluid Prandtl numbers from 0.7 to 5.4. Rotation reduces the heat transfer compared with a stationary heated cylinder, owing to the thickening of the boundary layer as a consequence of the rotation of the cylinder. The heat transfer rate increases with increasing Prandtl number of the fluid.
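
    For reference, the non-dimensional groups are commonly defined as follows for a cylinder of diameter D rotating at angular velocity ω in a cross-flow of speed U∞; the paper does not spell out its convention, so take these as the usual assumptions:

      \alpha = \frac{\omega D}{2\,U_\infty}, \qquad
      Re = \frac{U_\infty D}{\nu}, \qquad
      Pr = \frac{\nu}{\kappa}

    where ν is the kinematic viscosity and κ the thermal diffusivity; α = 0 recovers the stationary heated cylinder.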

  11. Finite element analyses of a linear-accelerator electron gun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049; Wasy, A.

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the Computer Aided Three-dimensional Interactive Application (CATIA) for finite element analyses through ANSYS Workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results for the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters at the defined operating temperature. The gun has been operating continuously in the BEPCII linear accelerator since commissioning, without any thermally induced failures.

  12. Shape-memory-alloy-based smart knee spacer for total knee arthroplasty: 3D CAD modelling and a computational study.

    PubMed

    Gautam, Arvind; Callejas, Miguel A; Acharyya, Amit; Acharyya, Swati Ghosh

    2018-05-01

    This study introduced a shape memory alloy (SMA)-based smart knee spacer for total knee arthroplasty (TKA). A 3D CAD model of a smart tibial component for TKA was designed in Solidworks software and verified using finite element analysis (FEA) in ANSYS Workbench. The two major properties of the SMA (NiTi), pseudoelasticity (PE) and the shape memory effect (SME), were exploited, modelled, and analysed for the TKA application. The effectiveness of the proposed model was verified in ANSYS Workbench through FEA of the maximum deformation and the equivalent (von Mises) stress distribution. The proposed model was also compared with a polymethylmethacrylate (PMMA)-based spacer for the upper portion of the tibial component for three subjects with body mass indices (BMI) of 23.88, 31.09, and 38.39. The proposed SMA-based smart knee spacer showed 96.66978% less deformation (standard deviation 0.01738) than the corresponding PMMA-based counterpart for the same load and flexion angle. Based on the maximum deformation analysis, the PMMA-based spacer had 30 times more permanent deformation than the proposed SMA-based spacer for the same load and flexion angle. The SME property of the lower portion of the tibial component, used for fixation of the spacer in position, was verified by FEA in ANSYS, in which a strain-life-based fatigue analysis was performed for both the PE- and SME-based spacers. The SMA-based smart knee spacer therefore eliminates the drawbacks of the PMMA-based spacer, including spacer fracture, loosening, dislocation, tilting or translation, and knee subluxation. Copyright © 2018. Published by Elsevier Ltd.
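
    As background, a strain-life fatigue analysis of the kind mentioned above conventionally evaluates the Coffin-Manson-Basquin relation; the paper does not list its material constants, so only the general form is given here:

      \frac{\Delta\varepsilon}{2} = \frac{\sigma'_f}{E}\,(2N_f)^{b}
                                  + \varepsilon'_f\,(2N_f)^{c}

    where Δε/2 is the strain amplitude, N_f the number of cycles to failure, σ'_f and b the fatigue strength coefficient and exponent (the elastic, Basquin term), and ε'_f and c the fatigue ductility coefficient and exponent (the plastic, Coffin-Manson term).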

  13. miRCat2: accurate prediction of plant and animal microRNAs from next-generation sequencing datasets

    PubMed Central

    Paicu, Claudia; Mohorianu, Irina; Stocks, Matthew; Xu, Ping; Coince, Aurore; Billmeier, Martina; Dalmay, Tamas; Moulton, Vincent; Moxon, Simon

    2017-01-01

    Motivation: MicroRNAs are a class of ∼21–22 nt small RNAs which are excised from a stable hairpin-like secondary structure. They have important gene regulatory functions and are involved in many pathways including developmental timing, organogenesis and development in eukaryotes. There are several computational tools for miRNA detection from next-generation sequencing datasets. However, many of these tools suffer from high false positive and false negative rates. Here we present a novel miRNA prediction algorithm, miRCat2. miRCat2 incorporates a new entropy-based approach to detect miRNA loci, which is designed to cope with the high sequencing depth of current next-generation sequencing datasets. It has a user-friendly interface and produces graphical representations of the hairpin structure and plots depicting the alignment of sequences on the secondary structure. Results: We test miRCat2 on a number of animal and plant datasets and present a comparative analysis with miRCat, miRDeep2, miRPlant and miReap. We also use mutants in the miRNA biogenesis pathway to evaluate the predictions of these tools. Results indicate that miRCat2 has an improved accuracy compared with other methods tested. Moreover, miRCat2 predicts several new miRNAs that are differentially expressed in wild-type versus mutants in the miRNA biogenesis pathway. Availability and Implementation: miRCat2 is part of the UEA small RNA Workbench and is freely available from http://srna-workbench.cmp.uea.ac.uk/. Contact: v.moulton@uea.ac.uk or s.moxon@uea.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28407097

  14. A novel low profile wireless flow sensor to monitor hemodynamic changes in cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Chen, Yanfei; Jankowitz, Brian T.; Cho, Sung Kwon; Chun, Youngjae

    2015-03-01

    A proof-of-concept low-profile flow sensor has been designed, fabricated, and subsequently tested to demonstrate its feasibility for monitoring hemodynamic changes in cerebral aneurysms. The prototype sensor contains three layers: a thin polyurethane layer sandwiched between two sputter-deposited thin-film nitinol layers (6 μm thick). A novel superhydrophilic surface treatment was used to create a hemocompatible surface on the thin nitinol electrode layers. A finite element model was built in ANSYS Workbench 15.0 Static Structural to optimize the dimensions of the flow sensor, and computational fluid dynamics calculations were performed in ANSYS Workbench Fluent to assess the flow velocity patterns within the aneurysm sac. We built a test platform with a z-axis translation stage and an S-beam load cell to compare the capacitance changes of sensors with different parameters during deformation. An LCR meter and an oscilloscope were used to measure the capacitance and the resonant frequency shifts, respectively. The compression tests demonstrated a linear relationship between the capacitance and the applied compression force, and showed that decreasing the length and width and increasing the thickness improved the sensor sensitivity. The experimentally measured resonant frequency dropped from 12.7 MHz to 12.48 MHz, a 0.22 MHz shift, with a 200 g (≈2 N) compression force, while the theoretical resonant frequency shifted 0.35 MHz with 50 g (≈0.5 N). These results demonstrate the feasibility of the low-profile flow sensor for monitoring haemodynamics in the cerebral aneurysm region, as well as the efficacy of surface-treated thin-film nitinol as a low-profile sensor material.
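
    The reported shift is consistent with the standard first-order model of a passive capacitive LC resonator, in which compression increases the capacitance C and lowers the resonant frequency; the paper does not give its circuit values, so the relations below are the usual assumptions rather than the authors' exact model:

      f_0 = \frac{1}{2\pi\sqrt{LC}}, \qquad
      \frac{\Delta f}{f_0} \approx -\frac{1}{2}\,\frac{\Delta C}{C}

    On this model, the observed 0.22 MHz drop from 12.7 MHz corresponds to a relative capacitance increase of roughly 2 × (0.22/12.7) ≈ 3.5%.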

  15. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591

  16. Computing chemical organizations in biological networks.

    PubMed

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states, including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
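
    To make "algebraically closed" concrete, the sketch below tests closure for a toy reaction list and computes the smallest closed superset of a seed species set. It illustrates only the definition, not the paper's algorithms; in particular, the self-maintenance test (finding a non-negative self-sustaining flux vector, typically via linear programming) is omitted.

      # A species set is closed if every reaction whose reactants it
      # contains also has all of its products inside the set.
      def is_closed(species, reactions):
          for reactants, products in reactions:
              if reactants <= species and not products <= species:
                  return False
          return True

      def closure(seed, reactions):
          # Smallest closed superset: add reachable products until stable.
          closed, changed = set(seed), True
          while changed:
              changed = False
              for reactants, products in reactions:
                  if reactants <= closed and not products <= closed:
                      closed |= products
                      changed = True
          return closed

      rxns = [({"a"}, {"b"}), ({"b"}, {"c"})]   # toy network: a -> b -> c
      print(is_closed({"a"}, rxns))             # False: "a" produces "b"
      print(sorted(closure({"a"}, rxns)))       # ['a', 'b', 'c']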

  17. FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment

    NASA Astrophysics Data System (ADS)

    Loewe, P.; Klump, J.; Thaler, J.

    2012-12-01

    High-performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided the tools are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks also do not come close to the requirements for access to "top shelf" national cluster facilities; so until recently this kind of geocomputation research was effectively barred by a lack of access to adequate resources. In this paper we report on the experience gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and in principle allow all 3084 cores to be used for GRASS-based geocomputation work; in practice, applications are limited to the resources assigned to their respective queues. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing and the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu). This has included the processing of complex problems requiring significant amounts of processing time, up to a full 20 CPU-days. This GRASS GIS-based service is provided as a research utility in the sense of "Software as a Service" (SaaS) and is a first step towards a GFZ corporate cloud service.

  18. Importance sampling with imperfect cloning for the computation of generalized Lyapunov exponents

    NASA Astrophysics Data System (ADS)

    Anteneodo, Celia; Camargo, Sabrina; Vallejos, Raúl O.

    2017-12-01

    We revisit the numerical calculation of generalized Lyapunov exponents, L(q), in deterministic dynamical systems. The standard method consists of adding noise to the dynamics in order to use importance sampling algorithms. Then L(q) is obtained by taking the limit noise-amplitude → 0 after the calculation. We focus on a particular method that involves periodic cloning and pruning of a set of trajectories. However, instead of considering a noisy dynamics, we implement an imperfect (noisy) cloning. This alternative method is compared with the standard one and, when possible, with analytical results. As a workbench we use the asymmetric tent map, the standard map, and a system of coupled symplectic maps. The general conclusion of this study is that the imperfect-cloning method performs as well as the standard one, with the advantage of preserving the deterministic dynamics.
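
    The sketch below estimates L(q) for the asymmetric tent map with a generic clone-and-prune (importance-resampling) scheme of the kind described above, and checks it against the analytic result L(q) = ln(a^{1-q} + (1-a)^{1-q}), which follows from the map's uniform invariant density and independent branch multipliers. It implements plain cloning, not the authors' imperfect-cloning variant.

      # Estimate L(q) = lim (1/n) ln <|d f^n/dx|^q> by cloning/pruning.
      import numpy as np

      rng = np.random.default_rng(0)
      a, q = 0.3, 2.0
      M, n_steps = 20_000, 200

      x = rng.random(M)
      log_norm = 0.0
      for _ in range(n_steps):
          deriv = np.where(x < a, 1.0 / a, 1.0 / (1.0 - a))
          x = np.where(x < a, x / a, (1.0 - x) / (1.0 - a))
          w = deriv ** q                   # one-step importance weights
          log_norm += np.log(w.mean())     # accumulate ensemble growth rate
          # clone/prune: resample walkers in proportion to their weights
          x = x[rng.choice(M, size=M, p=w / w.sum())]

      L_q = log_norm / n_steps
      exact = np.log(a ** (1 - q) + (1 - a) ** (1 - q))
      print(f"estimated L({q}) = {L_q:.4f}, exact = {exact:.4f}")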

  19. NASA Tech Briefs, September 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Topics covered include: Improving Thermomechanical Properties of SiC/SiC Composites; Aerogel/Particle Composites for Thermoelectric Devices; Patches for Repairing Ceramics and Ceramic-Matrix Composites; Lower-Conductivity Ceramic Materials for Thermal-Barrier Coatings; An Alternative for Emergency Preemption of Traffic Lights; Vehicle Transponder for Preemption of Traffic Lights; Automated Announcements of Approaching Emergency Vehicles; Intersection Monitor for Traffic-Light-Preemption System; Full-Duplex Digital Communication on a Single Laser Beam; Stabilizing Microwave Frequency of a Photonic Oscillator; Microwave Oscillators Based on Nonlinear WGM Resonators; Pointing Reference Scheme for Free-Space Optical Communications Systems; High-Level Performance Modeling of SAR Systems; Spectral Analysis Tool 6.2 for Windows; Multi-Platform Avionics Simulator; Silicon-Based Optical Modulator with Ferroelectric Layer; Multiplexing Transducers Based on Tunnel-Diode Oscillators; Scheduling with Automated Resolution of Conflicts; Symbolic Constraint Maintenance Grid; Discerning Trends in Performance Across Multiple Events; Magnetic Field Solver; Computing for Aiming a Spaceborne Bistatic-Radar Transmitter; 4-Vinyl-1,3-Dioxolane-2-One as an Additive for Li-Ion Cells; Probabilistic Prediction of Lifetimes of Ceramic Parts; STRANAL-PMC Version 2.0; Micromechanics and Piezo Enhancements of HyperSizer; Single-Phase Rare-Earth Oxide/Aluminum Oxide Glasses; Tilt/Tip/Piston Manipulator with Base-Mounted Actuators; Measurement of Model Noise in a Hard-Wall Wind Tunnel; Loci-STREAM Version 0.9; The Synergistic Engineering Environment; Reconfigurable Software for Controlling Formation Flying; More About the Tetrahedral Unstructured Software System; Computing Flows Using Chimera and Unstructured Grids; Avoiding Obstructions in Aiming a High-Gain Antenna; Analyzing Aeroelastic Stability of a Tilt-Rotor Aircraft; Tracking Positions and Attitudes of Mars Rovers; Stochastic Evolutionary Algorithms for Planning Robot Paths; Compressible Flow Toolbox; Rapid Aeroelastic Analysis of Blade Flutter in Turbomachines; General Flow-Solver Code for Turbomachinery Applications; Code for Multiblock CFD and Heat-Transfer Computations; Rotating-Pump Design Code; Covering a Crucible with Metal Containing Channels; Repairing Fractured Bones by Use of Bioabsorbable Composites; Kalman Filter for Calibrating a Telescope Focal Plane; Electronic Absolute Cartesian Autocollimator; Fiber-Optic Gratings for Lidar Measurements of Water Vapor; Simulating Responses of Gravitational-Wave Instrumentation; SOFTC: A Software Correlator for VLBI; Progress in Computational Simulation of Earthquakes; Database of Properties of Meteors; Computing Spacecraft Solar-Cell Damage by Charged Particles; Thermal Model of a Current-Carrying Wire in a Vacuum; Program for Analyzing Flows in a Complex Network; Program Predicts Performance of Optical Parametric Oscillators; Processing TES Level-1B Data; Automated Camera Calibration; Tracking the Martian CO2 Polar Ice Caps in Infrared Images; Processing TES Level-2 Data; SmaggIce Version 1.8; Solving the Swath Segment Selection Problem; The Spatial Standard Observer; Less-Complex Method of Classifying MPSK; Improvement in Recursive Hierarchical Segmentation of Data; Using Heaps in Recursive Hierarchical Segmentation of Data; Tool for Statistical Analysis and Display of Landing Sites; Automated Assignment of Proposals to Reviewers; Array-Pattern-Match Compiler for Opportunistic Data Analysis; Pre-Processor for Compression of Multispectral Image Data; Compressing Image Data While Limiting the Effects of Data Losses; Flight Operations Analysis Tool; Improvement in Visual Target Tracking for a Mobile Robot; Software for Simulating Air Traffic; Automated Vectorization of Decision-Based Algorithms; Grayscale Optical Correlator Workbench; "One-Stop Shopping" for Ocean Remote-Sensing and Model Data; State Analysis Database Tool; Generating CAHV and CAHVOR Images with Shadows in ROAMS; Improving UDP/IP Transmission Without Increasing Congestion; FORTRAN Versions of Reformulated HFGMC Codes; Program for Editing Spacecraft Command Sequences; Flight-Tested Prototype of BEAM Software; Mission Scenario Development Workbench; Marsviewer; Tool for Analysis and Reduction of Scientific Data; ASPEN Version 3.0; Secure Display of Space-Exploration Images; Digital Front End for Wide-Band VLBI Science Receiver; Multifunctional Tanks for Spacecraft; Lightweight, Segmented, Mostly Silicon Telescope Mirror; Assistant for Analyzing Tropical-Rain-Mapping Radar Data; and Anion-Intercalating Cathodes for High-Energy-Density Cells.

  20. 37. VIEW SOUTHEAST OF LOWER LEVEL IN WEST BUILDING (SHOWROOM ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    37. VIEW SOUTHEAST OF LOWER LEVEL IN WEST BUILDING (SHOWROOM ADDITION) WITH KAYAK DORY AND BUILDING BED IN FOREGROUND AND WORKBENCH AT WINDOWS. - Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA

  1. 16. SAME ROOM-OAK FRAMES, CUT FROM PATTERNS, ARE READIED FOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. SAME ROOM-OAK FRAMES, CUT FROM PATTERNS, ARE READIED FOR CLAMPING AND GLUING ON WORKBENCH. COMPLETED BOATS CAN BE SEEN ON RIVER OUTSIDE WINDOW. - Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA

  2. Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster

    NASA Astrophysics Data System (ADS)

    Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song

    2015-02-01

    The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the operation of the adjuster. We establish a structural mechanics model of the rectangular clutch spring based on its working principle and mechanical structure, and upload this model to the ANSYS Workbench FEA system to predict the fatigue life of the spring. The FEA results show that the fatigue life of the rectangular clutch spring is 2.0403×10^5 cycles under braking loads. In addition, fatigue tests of 20 automatic slack adjusters were carried out on a fatigue test bench to verify the conclusions of the structural mechanics model. The experimental results show that the mean fatigue life of the rectangular clutch spring is 1.9101×10^5 cycles, which agrees with the results of the finite element analysis performed in the ANSYS Workbench FEA system.

  3. Analysis of static and dynamic characteristic of spindle system and its structure optimization in camshaft grinding machine

    NASA Astrophysics Data System (ADS)

    Feng, Jianjun; Li, Chengzhe; Wu, Zhi

    2017-08-01

    As an important part of the valve opening and closing controller in an engine, the camshaft has high machining-accuracy requirements in its design. Taking the spindle system of a high-speed camshaft grinder as the research object and the spindle system's performance as the optimization target, this paper first uses Solidworks to establish the three-dimensional finite element model (FEM) of the spindle system, then conducts static and modal analyses of the established FEM in ANSYS Workbench, and finally uses the design optimization function of ANSYS Workbench to optimize the structural parameters of the spindle system. The results show that the design of the spindle system fully meets the production requirements and that the performance of the optimized spindle system is improved. In addition, this paper provides an analysis and optimization method for other grinder spindle systems.

  4. [Design and Analysis of CT High-speed Data Transmission Rotating Connector Ring System Retaining Ring].

    PubMed

    Pan, Li; Cao, Jujiang; Liu, Min; Fu, Weiwei

    2017-11-30

    The high-speed data transmission rotating connector, which carries signals at high speed between the fixed end and the rotating end, is one of the core components of a CT system. This paper covers the structural design and analysis of the retaining ring in the CT high-speed data transmission rotating connector, which is based on the principle of off-axis free-space optical transmission. Given the practical engineering constraints of limited space, optical-fiber fixation and collimator installation locations, we designed the structure of the retaining ring. Using the static analysis function of ANSYS Workbench, we verified the rationality and safety of the strength of the retaining-ring structure; using the modal analysis function of ANSYS Workbench, we evaluated the effect of the retaining ring on the stability of the system's data transmission, providing a theoretical basis for the feasibility of the structure in practical applications.

  5. [System design of small intellectualized ultrasound hyperthermia instrument in the LabVIEW environment].

    PubMed

    Jiang, Feng; Bai, Jingfeng; Chen, Yazhu

    2005-08-01

    Small-scale intelligent medical instruments have attracted great attention in the field of biomedical engineering, and LabVIEW (Laboratory Virtual Instrument Engineering Workbench) provides a convenient environment for such applications due to its inherent advantages. The principle and system structure of the hyperthermia instrument are presented. Type T thermocouples are employed as temperature transducers; their amplifier consists of two stages, providing built-in ice-point compensation and thus improving stability over temperature. Control signals produced by a specially designed circuit drive the programmable counter/timer 8254 chip to generate a PWM (pulse-width modulation) wave, which is used as the ultrasound radiation energy control signal. Subroutine design topics such as the in-tissue real-time feedback temperature control algorithm and the water temperature control in the ultrasound applicator are also described. In the cancer-tissue temperature control subroutine, the authors introduce improvements to the PID (Proportional Integral Differential) algorithm according to the specific demands of the system and achieve strict temperature control of the target tissue region. The system design and the PID algorithm improvement have proved reliable in experiments, meeting the requirements of the hyperthermia system.
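
    As a minimal illustration of the feedback loop described above, the Python sketch below runs a discrete PID controller with a simple anti-windup clamp against a toy first-order tissue-heating model, clamping its output to a PWM duty cycle. All gains and plant constants are illustrative assumptions, not the instrument's tuned values (the original software is LabVIEW, not Python).

      # Discrete PID with output clamped to a PWM duty cycle in [0, 1].
      def pid_step(error, state, kp=0.5, ki=0.05, kd=0.1, dt=1.0, u_max=1.0):
          integral, prev_error = state
          integral += error * dt
          integral = min(max(integral, 0.0), u_max / ki)  # anti-windup clamp
          derivative = (error - prev_error) / dt
          u = kp * error + ki * integral + kd * derivative
          return min(max(u, 0.0), u_max), (integral, error)

      setpoint, temp = 43.0, 37.0     # hyperthermia target vs. body temp (deg C)
      state = (0.0, 0.0)
      for _ in range(120):
          u, state = pid_step(setpoint - temp, state)
          # Toy plant: ultrasound power heats, tissue relaxes toward 37 deg C.
          temp += 0.05 * (37.0 - temp) + 0.4 * u
      print(f"temperature after 120 steps: {temp:.2f} deg C")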

  6. Investigation of wind behaviour around high-rise buildings

    NASA Astrophysics Data System (ADS)

    Mat Isa, Norasikin; Fitriah Nasir, Nurul; Sadikin, Azmahani; Ariff Hairul Bahara, Jamil

    2017-09-01

    An investigation of wind behaviour around high-rise buildings was carried out through a wind-tunnel experiment and computational fluid dynamics. High-rise buildings are buildings or structures with more than 12 floors. Wind is invisible to the naked eye, so its flow around and over buildings is hard to see and analyse without proper methods, such as a wind tunnel and computational fluid dynamics software. The study was conducted on buildings located in Presint 4, Putrajaya, Malaysia, namely the Ministry of Rural and Regional Development, the Ministry of Information Communications and Culture, the Ministry of Urban Wellbeing, Housing and Local Government and the Ministry of Women, Family, and Community, by making scaled models of the buildings. The study parameters are four wind velocities, chosen on the basis of the seasonal monsoons, and the wind direction. The ANSYS Fluent (Workbench) software is used to compute the simulations, and the computational fluid dynamics data are validated against the wind-tunnel experiment. From the results, the study identifies the characteristics of wind around buildings, including the boundary layer, flow separation and the wake region. The phenomena produced as wind passes the buildings are then analysed from the difference in velocity before and after the buildings.

  7. The Finite Element Modelling and Dynamic Characteristics Analysis about One Kind of Armoured Vehicles’ Fuel Tanks

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Ge, Zhishang; Zhai, Weihao; Tan, Shiwang; Zhang, Feng

    2018-01-01

    The static and dynamic characteristics of the fuel tank of an armoured vehicle are studied in this paper. The CATIA software is applied to build the CAD model of the armoured vehicle's fuel tank, and the finite element model is established in ANSYS Workbench. The finite element method is used to analyze the static and dynamic mechanical properties of the fuel tank; the first six mode shapes and their frequencies are computed, and the stress distribution diagram and the high-stress areas are obtained. The results provide a reference for improving the design of the fuel tank, give guidance for its installation on armoured vehicles, and help to improve the performance and service life of this kind of armoured-vehicle fuel tank.

  8. Customized workflow development and data modularization concepts for RNA-Sequencing and metatranscriptome experiments.

    PubMed

    Lott, Steffen C; Wolfien, Markus; Riege, Konstantin; Bagnacani, Andrea; Wolkenhauer, Olaf; Hoffmann, Steve; Hess, Wolfgang R

    2017-11-10

    RNA-Sequencing (RNA-Seq) has become a widely used approach to study quantitative and qualitative aspects of transcriptome data. The variety of RNA-Seq protocols, experimental study designs and the characteristic properties of the organisms under investigation greatly affect downstream and comparative analyses. In this review, we aim to explain the impact of structured pre-selection, classification and integration of best-performing tools within modularized data-analysis workflows and ready-to-use computing infrastructures on experimental data analysis. We highlight examples of workflows and use cases for prokaryotic, eukaryotic and mixed dual RNA-Seq (meta-transcriptomics) experiments. In addition, we summarize the expertise of the laboratories participating in the project consortium "Structured Analysis and Integration of RNA-Seq experiments" (de.STAIR) and its integration with the Galaxy workbench of the RNA Bioinformatics Center (RBC). Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Use of historic metabolic biotransformation data as a means of anticipating metabolic sites using MetaPrint2D and Bioclipse.

    PubMed

    Carlsson, Lars; Spjuth, Ola; Adams, Samuel; Glen, Robert C; Boyer, Scott

    2010-07-01

    Predicting metabolic sites is important in the drug discovery process to aid in rapid compound optimisation. No interactive tool exists and most of the useful tools are quite expensive. Here we present a fast and reliable method to analyse ligands and visualise their potential metabolic sites, based on annotated metabolic data described by circular fingerprints. The method is available via the graphical workbench Bioclipse, which is equipped with advanced features in cheminformatics. Owing to the speed of prediction (less than 50 ms per molecule), scientists can get real-time decision support when editing chemical structures. Bioclipse is a rich client, which means that all calculations are performed on the local computer and do not require a network connection. Bioclipse and MetaPrint2D are free for all users, released under open source licenses, and available from http://www.bioclipse.net.

  10. Datalingvistik, 2000.

    ERIC Educational Resources Information Center

    Kjaersgaard, Poul Soren, Ed.

    2002-01-01

    Papers from the conference in this volume include the following: "Towards Corpus Annotation Standards--The MATE Workbench" (Laila Dybkjaer and Niels Ole Bernsen); "Danish Text-to-Speech Synthesis Based on Stored Acoustic Segments" (Charles Hoequist); "Toward a Method for the Automated Design of Semantic…

  11. 29 CFR 790.7 - “Preliminary” and “postliminary” activities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of the productive work,” summarized this provision as it appeared in the Senate Bill by stating: “We...) walking or riding by an employee between the plant gate and the employee's lathe, workbench or other...

  12. Measurement and Control System Based on Wireless Sensor Network for Granary

    NASA Astrophysics Data System (ADS)

    Song, Jian

    A wireless measurement and control system for granaries was developed to overcome the shortcomings of wired measurement and control systems, such as complex wiring and poor interference immunity. In this system, ZigBee technology is applied using TI's ZigBee protocol-stack development platform, and a wireless sensor network is used to collect and control the temperature and the humidity. The system is composed of an upper PC, a central control node based on the CC2530, sensor nodes, sensor modules and the executive devices. The wireless sensor nodes are programmed in C in the IAR Embedded Workbench for MCS-51 evaluation environment. The upper-PC control software is developed on the Visual C++ 6.0 platform. Experiments show that data transmission in the system is accurate and reliable, with temperature and humidity errors below 2%, meeting the functional requirements for the granary measurement and control system.
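
    For illustration only, the sketch below shows the upper-PC side polling the central node over a serial link using Python and pyserial; the port name, baud rate and ASCII frame format are assumptions, and the actual upper-PC software was written in Visual C++ 6.0.

      # Hypothetical polling loop; assumes frames like "T=23.5,H=61.2".
      import serial  # pyserial

      with serial.Serial("COM3", 115200, timeout=2.0) as port:
          for _ in range(10):
              line = port.readline().decode("ascii", errors="replace").strip()
              if not line:
                  continue  # timeout or empty frame
              fields = dict(item.split("=") for item in line.split(","))
              print(f"temperature {float(fields['T']):.1f} C, "
                    f"humidity {float(fields['H']):.1f} %")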

  13. Software Package Completed for Alloy Design at the Atomic Level

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo H.; Noebe, Ronald D.; Abel, Phillip B.; Good, Brian S.

    2001-01-01

    As a result of a multidisciplinary effort involving solid-state physics, quantum mechanics, and materials and surface science, the first version of a software package dedicated to the atomistic analysis of multicomponent systems was recently completed. Based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of alloy and surface energetics, this package includes modules devoted to the analysis of many essential features that characterize any given alloy or surface system, including (1) surface structure analysis, (2) surface segregation, (3) surface alloying, (4) bulk crystalline material properties and atomic defect structures, and (5) thermal processes that allow us to perform phase diagram calculations. All the modules of this Alloy Design Workbench 1.0 (ADW 1.0) are designed to run in PC and workstation environments, and their operation and performance are substantially linked to the needs of the user and the specific application.

  14. Real-Time Simulation

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Coryphaeus Software, founded in 1989 by former NASA electronics engineer Steve Lakowske, creates real-time 3D software. Designer's Workbench, the company's flagship product, is a modeling and simulation tool for the development of both static and dynamic 3D databases. Other products soon followed. Activation, specifically designed for game developers, allows developers to play and test 3D games before they commit to a target platform; game publishers can shorten development time and prove the "playability" of a title, maximizing their chances of introducing a smash hit. Another product, EasyT, lets users create massive, realistic representations of Earth terrains that can be viewed and traversed in real time. Finally, EasyScene software controls the actions among interactive objects within a virtual world. Coryphaeus products are used on Silicon Graphics workstations and supercomputers to simulate real-world performance in synthetic environments. Customers include aerospace, aviation, architectural and engineering firms, game developers, and the entertainment industry.

  15. Low-Cost Virtual Laboratory Workbench for Electronic Engineering

    ERIC Educational Resources Information Center

    Achumba, Ifeyinwa E.; Azzi, Djamel; Stocker, James

    2010-01-01

    The laboratory component of undergraduate engineering education poses challenges in resource constrained engineering faculties. The cost, time, space and physical presence requirements of the traditional (real) laboratory approach are the contributory factors. These resource constraints may mitigate the acquisition of meaningful laboratory…

  16. The Simulation of Real-time Scalable Coherent Interface

    NASA Technical Reports Server (NTRS)

    Li, Qiang; Grant, Terry; Grover, Radhika S.

    1997-01-01

    Scalable Coherent Interface (SCI, IEEE/ANSI Std 1596-1992) (SCI1, SCI2) is a high-performance interconnect for shared-memory multiprocessor systems. In this project we investigate an SCI real-time protocol (RTSCI1) using directed flow-control symbols. We studied the issues involved in efficient generation of control symbols, and created a simulation model of the protocol on a ring-based SCI system. This report presents the results of that study. The project was implemented using SES/Workbench. The details that follow encompass aspects of both SCI and the flow-control protocols, as well as the effect of realistic client/server processing delays. The report is organized as follows: Section 2 provides a description of the simulation model; Section 3 describes the protocol implementation details; the next three sections elaborate on the workload, results and conclusions. Appended to the report is a description of SES/Workbench, the tool used in our simulation, and internal details of our implementation of the protocol.

  17. Design and simulation of a MEM pressure microgripper based on electrothermal microactuators

    NASA Astrophysics Data System (ADS)

    Tecpoyotl-T., Margarita; Vargas Ch., Pedro; Koshevaya, Svetlana; Cabello-R., Ramón; Ocampo-D., Alejandra; Vera-D., J. Gerardo

    2016-09-01

    The design and simulation of a novel pressure microgripper based on microelectromechanical (MEM) technology and composed of several electrothermal microactuators were carried out in order to increase the displacement and the cut-off force. A pressing/gripping element was implemented on the central shaft of the chevron actuator to provide stability in the manipulation of micro-objects. Each element of the microgripper and its fundamental equations are described. The fundamental parameters governing the operation and behaviour of the device are analyzed through sweeps of temperature (from 30 °C up to 100 °C) and voltage (from 0.25 V up to 5 V), showing the feasibility of operating the microgripper with either electrical or thermal feeding. The design and simulation were developed with the Finite Element Method (FEM) in Ansys-Workbench 16.0, in which the fundamental parameters were calculated. It is shown that structural modifications have a great impact on the displacement and the cut-off force of the microgripper.

  18. Ciência & Saúde Coletiva: scientific production analysis and collaborative research networks.

    PubMed

    Conner, Norma; Provedel, Attilio; Maciel, Ethel Leonor Noia

    2017-03-01

    The purpose of this metric and descriptive study was to identify the most productive authors and their collaborative research networks from articles published in Ciência & Saúde Coletiva between 2005 and 2014. Authors meeting the cutoff criterion of at least 10 articles were considered the most productive authors. VOSviewer and Network Workbench technologies were applied for visual representations of the collaborative research networks involving the most productive authors in the period. Initial analysis recovered 2511 distinct articles, with 8920 total authors and an average of 3.55 authors per article. Author analysis revealed 6288 distinct authors, 24 of whom were identified as the most productive. These 24 authors generated 287 articles with an average of 4.31 authors per article, and represented 8 separate collaborative partnerships, the largest of which had 14 authors, indicating a significant degree of collaboration among these authors. This analysis provides a visual representation of networks of knowledge development in public health and demonstrates the usefulness of VOSviewer and Network Workbench technologies in future research.

  19. Pre-calculated protein structure alignments at the RCSB PDB website.

    PubMed

    Prlic, Andreas; Bliven, Spencer; Rose, Peter W; Bluhm, Wolfgang F; Bizon, Chris; Godzik, Adam; Bourne, Philip E

    2010-12-01

    With the continuous growth of the RCSB Protein Data Bank (PDB), providing an up-to-date systematic structure comparison of all protein structures poses an ever growing challenge. Here, we present a comparison tool for calculating both 1D protein sequence and 3D protein structure alignments. This tool supports various applications at the RCSB PDB website. First, a structure alignment web service calculates pairwise alignments. Second, a stand-alone application runs alignments locally and visualizes the results. Third, pre-calculated 3D structure comparisons for the whole PDB are provided and updated on a weekly basis. These three applications allow users to discover novel relationships between proteins available either at the RCSB PDB or provided by the user. A web user interface is available at http://www.rcsb.org/pdb/workbench/workbench.do. The source code is available under the LGPL license from http://www.biojava.org. A source bundle, prepared for local execution, is available from http://source.rcsb.org. Contact: andreas@sdsc.edu; pbourne@ucsd.edu.

  20. VEG: An intelligent workbench for analysing spectral reflectance data

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.; Kimes, Daniel S.

    1994-01-01

    An Intelligent Workbench (VEG) was developed for the systematic study of remotely sensed optical data from vegetation. A goal of the remote sensing community is to infer the physical and biological properties of vegetation cover (e.g. cover type, hemispherical reflectance, ground cover, leaf area index, biomass, and photosynthetic capacity) using directional spectral data. VEG collects together, in a common format, techniques previously available from many different sources in a variety of formats. The decision as to when a particular technique should be applied is nonalgorithmic and requires expert knowledge. VEG has codified this expert knowledge into a rule-based decision component for determining which technique to use. VEG provides a comprehensive interface that makes applying the techniques simple and aids a researcher in developing and testing new techniques. VEG also provides a classification algorithm that can learn new classes of surface features. The learning system uses the database of historical cover types to learn class descriptions of one or more classes of cover types.

  1. Atlas2 Cloud: a framework for personal genome analysis in the cloud

    PubMed Central

    2012-01-01

    Background: Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. Results: We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. Conclusions: We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms. PMID:23134663

  2. Atlas2 Cloud: a framework for personal genome analysis in the cloud.

    PubMed

    Evani, Uday S; Challis, Danny; Yu, Jin; Jackson, Andrew R; Paithankar, Sameer; Bainbridge, Matthew N; Jakkamsetti, Adinarayana; Pham, Peter; Coarfa, Cristian; Milosavljevic, Aleksandar; Yu, Fuli

    2012-01-01

    Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms.

  3. Guide to the expression of uncertainty in measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, Kattathu Joseph

    The enabling objectives of this presentation are to: provide a working knowledge of the ISO GUM method for the estimation of uncertainties in safeguards measurements; introduce GUM terminology; provide a brief historical background of the GUM methodology; and introduce the GUM Workbench software. Isotope ratio measurements by MS will be discussed in the next session.
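
    For context, the core of the GUM methodology is the law of propagation of uncertainty: the combined standard uncertainty of a result y = f(x1, ..., xn) follows from the sensitivity coefficients and the standard uncertainties of the inputs. The sketch below applies it to a simple ratio measurement; the numbers are illustrative, not from the presentation.

    ```python
    # Law of propagation of uncertainty (ISO GUM), uncorrelated inputs:
    #   u_c(y)^2 = sum_i (df/dx_i)^2 * u(x_i)^2
    # Illustrative example: y = a / b (e.g. an isotope ratio); values made up.
    import math

    a, u_a = 0.7204, 0.0012   # measured value and its standard uncertainty
    b, u_b = 0.2795, 0.0009

    y = a / b
    c_a = 1.0 / b             # sensitivity coefficient df/da
    c_b = -a / b**2           # sensitivity coefficient df/db
    u_y = math.sqrt((c_a * u_a) ** 2 + (c_b * u_b) ** 2)

    k = 2  # coverage factor for ~95% coverage (GUM convention)
    print(f"y = {y:.4f} +/- {k * u_y:.4f} (k={k})")
    ```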

  4. Molecular Dynamics Simulations of Chemical Reactions for Use in Education

    ERIC Educational Resources Information Center

    Qian Xie; Tinker, Robert

    2006-01-01

    One of the simulation engines of an open-source program called the Molecular Workbench, which can simulate thermodynamics of chemical reactions, is described. This type of real-time, interactive simulation and visualization of chemical reactions at the atomic scale could help students understand the connections between chemical reaction equations…

  5. Computer aided reliability, availability, and safety modeling for fault-tolerant computer systems with commentary on the HARP program

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.

    1991-01-01

    Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
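
    To make the kind of modeling such tools perform concrete, here is a minimal sketch (not HARP itself) of a continuous-time Markov reliability model for a duplex fault-tolerant system with imperfect failure coverage, solved numerically with scipy; the failure rate and coverage value are invented for illustration.

    ```python
    # Minimal continuous-time Markov reliability model of a duplex system
    # (two redundant units, imperfect failure coverage) -- an illustrative
    # sketch of the modeling style, not the HARP program's actual models.
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4   # per-unit failure rate (1/hour), invented
    c = 0.99     # coverage: probability a failure is handled successfully

    # States: 0 = both units up, 1 = one unit up, 2 = system failed.
    Q = np.array([
        [-2 * lam, 2 * lam * c, 2 * lam * (1 - c)],
        [0.0,      -lam,        lam              ],
        [0.0,      0.0,         0.0              ],  # absorbing failure state
    ])

    p0 = np.array([1.0, 0.0, 0.0])      # start with both units up
    for t in (10.0, 100.0, 1000.0):     # mission times in hours
        p = p0 @ expm(Q * t)            # state probabilities at time t
        print(f"t={t:7.1f} h  reliability={1.0 - p[2]:.6f}")
    ```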

  6. Comparison of effects of different screw materials in the triangle fixation of femoral neck fractures.

    PubMed

    Gok, Kadir; Inal, Sermet; Gok, Arif; Gulbandilar, Eyyup

    2017-05-01

    In this study, the biomechanical behaviors of three different screw materials (stainless steel, titanium and cobalt-chromium) were analyzed for fixation with triangle configuration under axial loading in femoral neck fracture, and the best material was investigated. A point cloud was obtained by scanning a human femoral model with a three-dimensional (3D) scanner, and this point cloud was converted to a 3D femoral model with Geomagic Studio software. The femoral neck fracture was modeled in SolidWorks software for the triangle configuration only, and computer-aided numerical analyses of the three materials were carried out with the Ansys Workbench finite element analysis (FEA) software. The loading, boundary conditions and material properties were prepared for FEA, and von Mises stress values on the upper and lower proximity of the femur and on the screws were calculated. The numerical analyses identified titanium as the most advantageous screw material, because it creates minimum stress at the upper and lower proximity of the fracture line.
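
    As a quick reminder of the quantity compared in such FEA studies, the von Mises equivalent stress can be computed directly from the principal stresses; the stress values below are made up for illustration and are not results from this study.

    ```python
    # Von Mises equivalent stress from principal stresses:
    #   sigma_vm = sqrt(((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2)
    # The principal stresses below are invented, purely for illustration.
    import math

    def von_mises(s1: float, s2: float, s3: float) -> float:
        """Equivalent (von Mises) stress, in the same units as the inputs."""
        return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)

    # Example: principal stresses in MPa at some point of the model.
    print(f"{von_mises(120.0, 35.0, -10.0):.1f} MPa")
    ```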

  7. Successful Completion of FY18/Q1 ASC L2 Milestone 6355: Electrical Analysis Calibration Workflow Capability Demonstration.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copps, Kevin D.

    The Sandia Analysis Workbench (SAW) project has developed and deployed a production capability for SIERRA computational mechanics analysis workflows. However, the electrical analysis workflow capability requirements have only been demonstrated in early prototype states, with no real capability deployed for analysts’ use. This milestone aims to improve the electrical analysis workflow capability (via SAW and related tools) and deploy it for ongoing use. We propose to focus on a QASPR electrical analysis calibration workflow use case. We will include a number of new capabilities (versus today’s SAW), such as: 1) support for the XYCE code workflow component, 2) data management coupled to the electrical workflow, 3) human-in-the-loop workflow capability, and 4) electrical analysis workflow capability deployed on the restricted (and possibly classified) network at Sandia. While far from the complete set of capabilities required for electrical analysis workflow over the long term, this is a substantial first step toward full production support for the electrical analysts.

  8. System of Systems Analytic Workbench - 2017

    DTIC Science & Technology

    2017-08-31

    [Fragmentary record.] The methods of the System of Systems Analytic Workbench (SoS-AWB) include System Operational Dependency Analysis (SODA) and System Developmental Dependency Analysis (SDDA), with development of standard dependencies using combinations of low-medium-high parameters. Report No. SERC-2017-TR-111.

  9. Two-Wire to Four-Wire Audio Converter

    NASA Technical Reports Server (NTRS)

    Talley, G. L., Jr; Seale, B. L.

    1983-01-01

    Simple circuit provides interface between normally incompatible voice-communication lines. Circuit maintains 40 dB of isolation between input and output halves of four-wire line, permitting a two-wire line to be connected. Balancing potentiometer, Rg, adjusts gain of IC2 to null feedthrough from input to output. Adjustment is done on workbench just after assembly.

  10. Between Scientific Playground and Industrial Workbench

    ERIC Educational Resources Information Center

    Kaffka, Gabi

    2009-01-01

    The focus of this article is on the impact of cultural influences in academic knowledge transfer (KT). This aspect of the KT process was studied at Dutch and German technical universities. The analysis shows that professional values and identities play an important role in academic KT. Administrators in university KT offices were found to be…

  11. Ramping up to the Biology Workbench: A Multi-Stage Approach to Bioinformatics Education

    ERIC Educational Resources Information Center

    Greene, Kathleen; Donovan, Sam

    2005-01-01

    In the process of designing and field-testing bioinformatics curriculum materials, we have adopted a three-stage, progressive model that emphasizes collaborative scientific inquiry. The elements of the model include: (1) context setting, (2) introduction to concepts, processes, and tools, and (3) development of competent use of technologically…

  12. CMG-biotools, a free workbench for basic comparative microbial genomics.

    PubMed

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Today, there are more than a hundred times as many sequenced prokaryotic genomes as were available in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source code for all programs is provided under the GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources, using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training.
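
    For a sense of the "basic DNA and codon statistics" such a workbench computes, the sketch below derives GC content and codon usage counts from a DNA string; the sequence is a made-up fragment, and this is not CMG-biotools code.

    ```python
    # Basic DNA statistics of the kind a comparative-genomics workbench
    # reports: GC content and codon usage. Illustrative only; the sequence
    # below is invented and this is not code from CMG-biotools.
    from collections import Counter

    seq = "ATGGCGTTAGCCGGAAGCTGATGCGTACGTTAA"  # hypothetical fragment

    gc = (seq.count("G") + seq.count("C")) / len(seq)
    print(f"GC content: {gc:.1%}")

    # Codon usage over the reading frame starting at position 0.
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    for codon, n in Counter(codons).most_common(5):
        print(codon, n)
    ```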

  13. Investigation of the piezoelectric thimble tactile device operating modes.

    PubMed

    Bansevicius, Ramutis; Dragasius, Egidijus; Grigas, Vytautas; Jurenas, Vytautas; Mazeika, Darius; Zvironas, Arunas

    2014-01-01

    A multifunctional device to transfer graphical or text information to blind or visually impaired users is presented. A prototype using tactile perception has been designed, in which information displayed on the screen of an electronic device (mobile phone, PC) is transferred by an oscillating needle touching the fingertip. With the aim of defining optimal parameters of fingertip excitation by the needle, a computational analysis of different excitation modes has been carried out. A 3D solid computational finite element model of a skin segment, comprising the four main fingertip skin layers (stratum corneum, epidermis, dermis and hypodermis), was built using ANSYS Workbench FEA software. Harmonic analysis of its stress-strain state was performed under excitation with different frequencies (up to 10000 Hz) and a harmonic force (0.01 N) acting on the outer stratum corneum layer in the normal direction at one, two or three points. The influence of the mode of dynamic loading of the skin was evaluated (in terms of the tactile signal level) on the basis of the normal and shear elastic strain in the dermis, where mechanoreceptors are located. It is shown that the tactile perception of information delivered by three vibrating pins may be influenced by the configuration of excitation points (their number and phase of loading) and the frequency of excitation.

  14. Comparison of Decontamination Efficacy of Cleaning Solutions on a Biological Safety Cabinet Workbench Contaminated by Cyclophosphamide

    PubMed Central

    Adé, Apolline; Chauchat, Laure; Frève, Johann-François Ouellette; Gagné, Sébastien; Caron, Nicolas; Bussières, Jean-François

    2017-01-01

    Background: Several studies have compared cleaning procedures for decontaminating surfaces exposed to antineoplastic drugs. All of the cleaning products tested were successful in reducing most of the antineoplastic drug quantities spilled on surfaces, but none of them completely removed residual traces. Objective: To assess the efficacy of various cleaning solutions for decontaminating a biological safety cabinet workbench exposed to a defined amount of cyclophosphamide. Methods: In this pilot study, specific areas of 2 biological safety cabinets (class II, type B2) were deliberately contaminated with a defined quantity of cyclophosphamide (10 μg or 107 pg). Three cleaning solutions were tested: quaternary ammonium, sodium hypochlorite 0.02%, and sodium hypochlorite 2%. After cleaning, the cyclophosphamide remaining on the areas was quantified by wipe sampling. Each cleaning solution was tested 3 times, with cleaning and wipe sampling being performed 5 times for each test. Results: A total of 57 wipe samples were collected and analyzed. The average recovery efficiency was 121.690% (standard deviation 5.058%). The decontamination efficacy increased with the number of successive cleaning sessions: from 98.710% after session 1 to 99.997% after session 5 for quaternary ammonium; from 97.027% to 99.997% for sodium hypochlorite 0.02%; and from 98.008% to 100% for sodium hypochlorite 2%. Five additional cleaning sessions performed after the main study (with detergent and sodium hypochlorite 2%) were effective to complete the decontamination, leaving no detectable traces of the drug. Conclusions: All of the cleaning solutions reduced contamination of biological safety cabinet workbenches exposed to a defined amount of cyclophosphamide. Quaternary ammonium and sodium hypochlorite (0.02% and 2%) had mean efficacy greater than 97% for removal of the initial quantity of the drug (107 pg) after the first cleaning session. When sodium hypochlorite 2% was used, fewer cleaning sessions were required to complete decontamination. Further studies should be conducted to identify optimal cleaning strategies to fully eliminate traces of hazardous drugs. PMID:29298999

  15. Text-mining-assisted biocuration workflows in Argo

    PubMed Central

    Rak, Rafal; Batista-Navarro, Riza Theresa; Rowley, Andrew; Carter, Jacob; Ananiadou, Sophia

    2014-01-01

    Biocuration activities have been broadly categorized into the selection of relevant documents, the annotation of biological concepts of interest and the identification of interactions between the concepts. Text mining has been shown to have the potential to significantly reduce the effort of biocurators in all three activities, and various semi-automatic methodologies have been integrated into curation pipelines to support them. We investigate the suitability of Argo, a workbench for building text-mining solutions with the use of a rich graphical user interface, for the process of biocuration. Central to Argo are customizable workflows that users compose by arranging available elementary analytics to form task-specific processing units. A built-in manual annotation editor is the single most used biocuration tool of the workbench, as it allows users to create annotations directly in text, as well as modify or delete annotations created by automatic processing components. Apart from syntactic and semantic analytics, the ever-growing library of components includes several data readers and consumers that support well-established as well as emerging data interchange formats such as XMI, RDF and BioC, which facilitate the interoperability of Argo with other platforms or resources. To validate the suitability of Argo for curation activities, we participated in the BioCreative IV challenge, whose purpose was to evaluate Web-based systems addressing user-defined biocuration tasks. Argo proved to have the edge over other systems in terms of flexibility of defining biocuration tasks. As expected, the versatility of the workbench inevitably lengthened the time the curators spent on learning the system before taking on the task, which may have affected the usability of Argo. The participation in the challenge gave us an opportunity to gather valuable feedback and identify areas of improvement, some of which have already been introduced. Database URL: http://argo.nactem.ac.uk PMID:25037308

  16. A Real-Time Cardiac Arrhythmia Classification System with Wearable Sensor Networks

    PubMed Central

    Hu, Sheng; Wei, Hongxing; Chen, Youdong; Tan, Jindong

    2012-01-01

    Long term continuous monitoring of the electrocardiogram (ECG) in a free living environment provides valuable information for prevention of heart attacks and other high-risk diseases. This paper presents the design of a real-time wearable ECG monitoring system with associated cardiac arrhythmia classification algorithms. One of the striking advantages is that the ECG analog front-end and on-node digital processing are designed to remove most of the noise and bias. In addition, the wearable sensor node is able to monitor the patient's ECG and motion signal in an unobtrusive way. To realize real-time medical analysis, the ECG is digitized and transmitted to a smart phone via Bluetooth. On the smart phone, the ECG waveform is visualized and a novel layered hidden Markov model is seamlessly integrated to classify multiple cardiac arrhythmias in real time. Experimental results demonstrate that a clean and reliable ECG waveform can be captured in multiple stressed conditions and that the real-time classification of cardiac arrhythmia is competitive with other workbenches. PMID:23112746
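
    The paper's classifier is a novel layered hidden Markov model; as a generic illustration of HMM-based arrhythmia classification (not the authors' layered model), one can train one Gaussian HMM per rhythm class on feature sequences and label a new sequence by maximum log-likelihood. The sketch assumes the hmmlearn package, and the feature data is random noise standing in for real ECG features.

    ```python
    # Generic HMM-based sequence classification: train one Gaussian HMM per
    # rhythm class, classify by maximum log-likelihood. A sketch of the
    # general technique, not the paper's layered HMM; training data here
    # is random noise standing in for real ECG feature sequences.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    classes = ["normal", "arrhythmia_A"]

    models = {}
    for i, name in enumerate(classes):
        X = rng.normal(loc=float(i), size=(200, 2))  # fake feature vectors
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        m.fit(X)
        models[name] = m

    test = rng.normal(loc=1.0, size=(50, 2))         # unseen sequence
    scores = {name: m.score(test) for name, m in models.items()}
    print("predicted:", max(scores, key=scores.get))
    ```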

  17. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    PubMed

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
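
    As an illustration of the optimization component (not the authors' implementation or ergonomic criteria), a genetic algorithm can search for an assignment of objects to workbench slots that minimizes a frequency-weighted reach cost; the slot costs and usage frequencies below are invented.

    ```python
    # Toy genetic algorithm assigning objects to workbench slots so that
    # frequently used objects end up closest to the worker -- a sketch of
    # the general approach, not the paper's algorithm or criteria.
    import random

    random.seed(1)
    reach_cost = [1.0, 1.5, 2.0, 2.5, 3.0]   # cost of each slot (invented)
    use_freq = [30, 5, 12, 20, 8]            # uses/hour per object (invented)
    N = len(reach_cost)

    def cost(layout):  # layout[slot] = object index placed in that slot
        return sum(reach_cost[s] * use_freq[layout[s]] for s in range(N))

    def crossover(a, b):  # order crossover that keeps a valid permutation
        cut = random.randrange(N)
        head = a[:cut]
        return head + [g for g in b if g not in head]

    pop = [random.sample(range(N), N) for _ in range(30)]
    for _ in range(200):
        pop.sort(key=cost)
        survivors = pop[:10]
        children = [crossover(*random.sample(survivors, 2)) for _ in range(20)]
        for ch in children:                  # mutation: swap two slots
            if random.random() < 0.2:
                i, j = random.sample(range(N), 2)
                ch[i], ch[j] = ch[j], ch[i]
        pop = survivors + children

    best = min(pop, key=cost)
    print("best layout:", best, "cost:", cost(best))
    ```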

  18. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    PubMed

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
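
    Since the database is reachable with a plain MySQL client, the same public connection can be scripted. A minimal sketch follows, assuming the mysql-connector-python package; only a generic catalog query is shown, because no table names are documented in the abstract.

    ```python
    # Connect to the public Nencki Genomics Database endpoint quoted in the
    # abstract and list the available schemas. Assumes mysql-connector-python
    # is installed; per the abstract, the public account needs no password.
    import mysql.connector

    conn = mysql.connector.connect(
        host="database.nencki-genomics.org",
        user="public",
    )
    cur = conn.cursor()
    cur.execute("SHOW DATABASES")   # generic catalog query; specific table
    for (name,) in cur:             # names are not given in the abstract
        print(name)
    conn.close()
    ```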

  19. Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

    PubMed Central

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456

  20. Square Wheels and Other Easy-To-Build Hands-On Science Activities. An Exploratorium Science Snackbook.

    ERIC Educational Resources Information Center

    Rathjen, Don; Doherty, Paul

    This book, part of The Exploratorium science "snackbook" series, explains science with a hands-on approach. Activities include: (1) "3-D Shadow"; (2) "Bits and Bytes"; (3) "Circuit Workbench"; (4) "Diamagnetic Repulsion"; (5) "Film Can Racer"; (6) "Fractal Patterns"; (7) "Hoop Nightmares"; (8) "Hydraulic Arm"; (9) "Hyperbolic Slot"; (10) "Light…

  1. 12. BUILDING 621, INTERIOR, GROUND FLOOR, LOOKING NORTHWEST AT SCREENING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. BUILDING 621, INTERIOR, GROUND FLOOR, LOOKING NORTHWEST AT SCREENING MACHINE THAT REMOVES SHELL FRAGMENTS. METALLIC DUST REMOVED BY MAGNETIC SEPARATOR UNDERNEATH SCREEN. SAWDUST IS RETURNED TO SAWDUST HOPPER BY ELEVATOR. HOODS OVER SCREENING MACHINE AT WORKBENCH REMOVE FINE SAWDUST. - Picatinny Arsenal, 600 Area, Test Areas District, State Route 15 near I-80, Dover, Morris County, NJ

  2. Development of a Workbench to Address the Educational Data Mining Bottleneck

    ERIC Educational Resources Information Center

    Rodrigo, Ma. Mercedes T.; Baker, Ryan S. J. d.; McLaren, Bruce M.; Jayme, Alejandra; Dy, Thomas T.

    2012-01-01

    In recent years, machine-learning software packages have made it easier for educational data mining researchers to create real-time detectors of cognitive skill as well as of metacognitive and motivational behavior that can be used to improve student learning. However, there remain challenges to overcome for these methods to become available to…

  3. Microgravity Science Glovebox (MSG) Space Science's Past, Present, and Future on the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie A.; Spearing, Scott F.; Jordan, Lee P.; McDaniel, S. Greg

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double rack facility designed for microgravity investigation handling aboard the International Space Station (ISS). The unique design of the facility allows it to accommodate science and technology investigations in a "workbench" type environment. The MSG facility provides an enclosed working area for investigation manipulation and observation on the ISS, with two levels of containment via physical barrier, negative pressure, and air filtration. The MSG team and facilities provide quick access to space for exploratory and National Lab type investigations to gain an understanding of the role of gravity in the associated physics research areas. The MSG is a very versatile and capable research facility on the ISS: it has been used for a large body of research in material science, heat transfer, crystal growth, life sciences, smoke detection, combustion, plant growth, human health, and technology demonstration. MSG is an ideal platform for research on gravity-dependent phenomena. Moreover, the MSG provides engineers and scientists a platform for research in an environment similar to the one that spacecraft and crew members will actually experience during space travel and exploration. The MSG facility is ideally suited to provide quick, relatively inexpensive access to space for National Lab type investigations.

  4. USGS Science Data Life Cycle Tools - Lessons Learned in moving to the Cloud

    NASA Astrophysics Data System (ADS)

    Frame, M. T.; Mancuso, T.; Hutchison, V.; Zolly, L.; Wheeler, B.; Urbanowski, S.; Devarakonda, R.; Palanisamy, G.

    2016-12-01

    The U.S. Geological Survey (USGS) Core Science Systems has been working for the past year to design, re-architect, and implement several key tools and systems within the USGS Cloud Hosting Service supported by Amazon Web Services (AWS). As a result of emerging USGS data management policies that align with federal Open Data mandates, and as part of a concerted effort to respond to potential increasing user demand due to these policies, the USGS strategically began migrating its core data management tools and services to the AWS environment in hopes of leveraging cloud capabilities (i.e. auto-scaling, replication, etc.). The specific tools included: the USGS Online Metadata Editor (OME); the USGS Digital Object Identifier (DOI) generation tool; the USGS Science Data Catalog (SDC); the USGS ScienceBase system; and an integrative tool, the USGS Data Release Workbench, which steps bureau personnel through the process of releasing data. All of these tools existed long before the Cloud was available, and they presented significant challenges in migrating, re-architecting, securing, and moving to a Cloud-based environment. Initially, a 'lift and shift' approach, essentially moving the tools as-is, was attempted; various lessons learned about that approach will be discussed, along with recommendations that resulted from the development and eventual operational implementation of these tools. The session will discuss lessons learned related to management of these tools in an AWS environment; re-architecture strategies utilized for the tools; time investments through sprint allocations; initial benefits observed from operating within a Cloud-based environment; and initial costs to support these data management tools.

  5. Plastohydrodynamic drawing and coating of stainless steel wire using a tapered bore die of no metal to metal contact

    NASA Astrophysics Data System (ADS)

    Hasan, S.; Basmage, O.; Stokes, J. T.; Hashmi, M. S. J.

    2018-05-01

    A review of wire coating studies using plasto-hydrodynamic pressure shows that most of the work was carried out by conducting experiments simultaneously with simulation analysis based upon Bernoulli's principle and the Euler and Navier-Stokes (N-S) equations. These characteristics relate to the domain of Computational Fluid Dynamics (CFD), an interdisciplinary topic (fluid mechanics, numerical analysis of fluid flow, and computer science). This research investigates two aspects: (i) simulation work and (ii) experimentation. A mathematical model was developed to investigate the flow pattern of the molten polymer and the pressure distribution within the wire-drawing dies, to assess the polymer coating thickness on the coated wires, and to determine the coating speed at the outlet of the drawing dies, without deploying any pressurizing pump. In addition, a physical model was developed within the ANSYS™ environment through the simulation design of ANSYS™ Workbench. The design was customized to simulate the process of wire coating on fine stainless-steel wires using drawing dies having different bore geometries: stepped parallel bore, tapered bore, and combined parallel and tapered bore. The convergence of the designed CFD model and the numerical and physical solution parameters for simulation were dynamically monitored for the viscous flow of the polypropylene (PP) polymer. Simulation results were validated against experimental results and used to predict the ideal bore shape to produce a thin coating on stainless wires of different diameters. Simulation studies confirmed that a specific speed should be attained by the stainless-steel wires while passing through the drawing dies. It was observed that not all speed values within the specific speed range produced a coating thickness having the desired characteristic features; therefore, some optimization of the experimental set-up through design of experiments (Stat-Ease) was applied to validate the results. Furthermore, rapid solidification of the viscous coating on the wires was targeted so that the coated wires do not stick to the winding spool after the coating process.

  6. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high performance computers and, with the advent of ubiquitous multicore processors, on practically every system, have long been accomplished with basic software tools: typically, command-line compilers, debuggers and performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as OpenMP and MPI) to take full advantage of high performance computers with an increasing core count per shared memory node, have made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improving PTP. We are using a set of scientific applications, each with a variety of challenges, both to drive further improvements to the applications themselves and to understand shortcomings in Eclipse PTP from an application developer perspective, which informs the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into computational science and engineering codes. Finally, we are partnering with the lead PTP developers at IBM to ensure we are as effective as possible within the Eclipse community development. We are also conducting training and outreach to our user community, including conference BOF sessions, monthly user calls, and an annual user meeting, so that we can best inform the improvements we make to Eclipse PTP. With these activities we endeavor to encourage the use of modern software engineering practices, as enabled through the Eclipse IDE, with computational science and engineering applications. These practices include proper use of source code repositories, tracking and rectifying issues, measuring and monitoring code performance changes against both optimizations and ever-changing software stacks and configurations on HPC systems, and ultimately encouraging development and maintenance of testing suites -- things that have become commonplace in many software endeavors but have lagged in the development of science applications. We believe that the increased complexity of both HPC systems and science applications demands better software engineering methods, preferably enabled by modern tools such as Eclipse PTP, to help the computational science community thrive as we evolve the HPC landscape.

  7. Recommended Financial Plan for the Construction of a Permanent Campus for San Joaquin Delta College.

    ERIC Educational Resources Information Center

    Bortolazzo, Julio L.

    The financial plan for the San Joaquin Delta College (California) permanent campus is presented in a table showing the gross square footage, the unit cost (including such fixed equipment as workbenches, laboratory tables, etc.), and the estimated total cost for each department. The unit costs per square foot vary from $18.00 for warehousing to…

  8. 29 CFR 785.34 - Effect of section 4 of the Portal-to-Portal Act.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... failure to pay the minimum wage or overtime compensation for time spent in “walking, riding, or traveling... employee is employed to perform either prior to the time on any particular workday at which such employee... from time clock to work-bench) need not be counted as working time unless it is compensable by contract...

  9. Virtual Instrument Systems in Reality (VISIR) for Remote Wiring and Measurement of Electronic Circuits on Breadboard

    ERIC Educational Resources Information Center

    Tawfik, M.; Sancristobal, E.; Martin, S.; Gil, R.; Diaz, G.; Colmenar, A.; Peire, J.; Castro, M.; Nilsson, K.; Zackrisson, J.; Hakansson, L.; Gustavsson, I.

    2013-01-01

    This paper reports on a state-of-the-art remote laboratory project called Virtual Instrument Systems in Reality (VISIR). VISIR allows wiring and measuring of electronic circuits remotely on a virtual workbench that replicates physical circuit breadboards. The wiring mechanism is developed by means of a relay switching matrix connected to a PCI…

  10. CellLineNavigator: a workbench for cancer cell line analysis

    PubMed Central

    Krupp, Markus; Itzel, Timo; Maass, Thorsten; Hildebrandt, Andreas; Galle, Peter R.; Teufel, Andreas

    2013-01-01

    The CellLineNavigator database, freely available at http://www.medicalgenomics.org/celllinenavigator, is a web-based workbench for large scale comparisons of a large collection of diverse cell lines. It aims to support experimental design in the fields of genomics, systems biology and translational biomedical research. Currently, this compendium holds genome wide expression profiles of 317 different cancer cell lines, categorized into 57 different pathological states and 28 individual tissues. To enlarge the scope of CellLineNavigator, the database was furthermore closely linked to commonly used bioinformatics databases and knowledge repositories. To ensure easy data access and searchability, a simple data interface and an intuitive querying interface were implemented. They allow the user to explore and filter gene expression, focusing on pathological or physiological conditions. For a more complex search, the advanced query interface may be used to query for (i) differentially expressed genes; (ii) pathological or physiological conditions; or (iii) gene names or functional attributes, such as Kyoto Encyclopaedia of Genes and Genomes pathway maps. These queries may also be combined. Finally, CellLineNavigator allows additional advanced analysis of differentially regulated genes by a direct link to the Database for Annotation, Visualization and Integrated Discovery (DAVID) Bioinformatics Resources. PMID:23118487

  11. Fast probabilistic file fingerprinting for big data

    PubMed Central

    2013-01-01

    Background: Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results: We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions: Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
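
    The core idea, fingerprinting by hashing a deterministic pseudo-random sample of the file's bytes rather than reading the whole file, can be sketched in a few lines. This illustrates the sampling principle only; it is not the pfff tool's actual algorithm or output format.

    ```python
    # Probabilistic file fingerprinting: hash a deterministic pseudo-random
    # sample of byte positions instead of reading the whole file. A sketch
    # of the sampling idea only -- not the pfff tool's actual algorithm.
    import hashlib
    import os
    import random

    def sample_fingerprint(path: str, n_samples: int = 1024, seed: int = 42) -> str:
        size = os.path.getsize(path)
        rng = random.Random(seed)       # fixed seed => reproducible sample
        h = hashlib.sha256(size.to_bytes(8, "little"))  # mix in file size
        with open(path, "rb") as f:
            for off in sorted(rng.sample(range(size), min(n_samples, size))):
                f.seek(off)
                h.update(f.read(1))
        return h.hexdigest()

    # Identical files always agree because the sampled positions are seeded;
    # files differing at any sampled position get different digests.
    ```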

  12. New approaches to virtual environment surgery

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Twombly, A.; Lee, A. W.; Cheng, R.; Senger, S.

    1999-01-01

    This research focused on two main problems: 1) low cost, high fidelity stereoscopic imaging of complex tissues and organs; and 2) virtual cutting of tissue. A further objective was to develop these images and virtual tissue cutting methods for use in a telemedicine project that would connect remote sites using the Next Generation Internet. For goal one, we used a CT scan of a human heart, a desktop PC with an OpenGL graphics accelerator card, and LCD stereoscopic glasses. Use of multiresolution meshes ranging from approximately 1,000,000 down to 20,000 polygons sped up interactive rendering rates enormously while retaining the general topography of the dataset. For goal two, we used a CT scan of an infant skull with premature closure of the right coronal suture, a Silicon Graphics Onyx workstation, a Fakespace Immersive WorkBench and CrystalEyes LCD glasses. The high fidelity mesh of the skull was reduced from one million to 50,000 polygons. The cut path was automatically calculated as the shortest distance along the mesh between a small number of hand-selected vertices. The region outlined by the cut path was then separated from the skull and translated/rotated to assume a new position. The results indicate that widespread high fidelity imaging in virtual environments is possible using ordinary PC capabilities if appropriate mesh reduction methods are employed. The software cutting tool is applicable to the heart and other organs for surgery planning, for training surgeons in a virtual environment, and for telemedicine purposes.
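
    The cut-path step, a shortest path along the mesh between hand-selected vertices, is essentially Dijkstra's algorithm on the mesh's edge graph with Euclidean edge lengths. A small sketch using networkx follows, with a made-up toy mesh; this is not the paper's code.

    ```python
    # Shortest cut path along a triangle mesh between selected vertices:
    # Dijkstra on the mesh edge graph with Euclidean edge weights. A toy
    # sketch of the technique in the abstract, not the paper's code.
    import math
    import networkx as nx

    # Hypothetical tiny mesh: vertex coordinates and triangle faces.
    verts = {0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0), 3: (0, 1, 0), 4: (2, 1, 0)}
    faces = [(0, 1, 2), (0, 2, 3), (1, 4, 2)]

    G = nx.Graph()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            G.add_edge(u, v, weight=math.dist(verts[u], verts[v]))

    # Chain shortest paths through the hand-picked vertices, in order.
    picked = [3, 1, 4]
    path = []
    for u, v in zip(picked, picked[1:]):
        leg = nx.shortest_path(G, u, v, weight="weight")
        path.extend(leg if not path else leg[1:])
    print("cut path:", path)
    ```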

  13. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud

    PubMed Central

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Background: Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. Results: We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. Conclusions: This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation. PMID:26501966

  14. Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud.

    PubMed

    Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew

    2015-01-01

    Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation.

  15. Media-fill simulation tests in manual and robotic aseptic preparation of injection solutions in syringes.

    PubMed

    Krämer, Irene; Federici, Matteo; Kaiser, Vanessa; Thiesen, Judith

    2016-04-01

    The purpose of this study was to evaluate the contamination rate of media-fill products either prepared automatically with a robotic system (APOTECAchemo™) or prepared manually at cytotoxic workbenches in the same cleanroom environment and by experienced operators. Media fills were complemented by microbiological environmental controls in the critical zones and used to validate the cleaning and disinfection procedures of the robotic system. The aseptic preparation of patient-individual ready-to-use injection solutions was simulated by using double concentrated tryptic soy broth as growth medium, water for injection and plastic syringes as primary packaging materials. Media fills were either prepared automatically (500 units) in the robot or manually (500 units) in cytotoxic workbenches in the same cleanroom over a period of 18 working days. The test solutions were incubated at room temperature (22 °C) over 4 weeks. Products were visually inspected for turbidity after a 2-week and 4-week period. Following incubation, growth promotion tests were performed with Staphylococcus epidermidis. During the media-fill procedures, passive air monitoring was performed with settle plates and surface monitoring with contact plates on predefined locations, as well as fingerprints. The plates were incubated for 5-7 days at room temperature, followed by 2-3 days at 30-35 °C, and the colony forming units (cfu) counted after both periods. The robot was cleaned and disinfected according to the established standard operating procedure on two working days prior to the media-fill session, while on six other working days only six critical components were sanitized at the end of the media-fill sessions. Every day UV irradiation was operated for 4 h after finishing work. None of the 1000 media-fill products prepared in the two different settings showed turbidity after the incubation period, indicating no contamination with microorganisms. All products remained uniform, clear, and light-amber solutions. In addition, the reliability of the nutrient medium and the process was demonstrated by positive growth promotion tests with S. epidermidis. During automated preparation, the recommended limits of < 1 cfu per settle/contact plate set for cleanroom Grade A zones were not exceeded in the carousel and working area, but were exceeded in the loading area of the robot. During manual preparation, the number of cfus detected on settle/contact plates inside the workbenches lay far below the limits. The number of cfus detected on fingertips exceeded the limit several times during manual preparation, but not during automated preparation. There was no difference in the microbial contamination rate depending on the extent of cleaning and disinfection of the robot. Extensive media-fill tests simulating manual and automated preparation of ready-to-use cytotoxic injection solutions revealed the same level of sterility for both procedures. The results of supplemental environmental controls confirmed that the aseptic procedures are well controlled. As there was no difference in the microbial contamination rates of the media preparations depending on the extent of cleaning and disinfection of the robot, the results were used to adapt the respective standard operating procedures. © The Author(s) 2014.

  16. HRGFish: A database of hypoxia responsive genes in fishes

    NASA Astrophysics Data System (ADS)

    Rashid, Iliyas; Nagpure, Naresh Sahebrao; Srivastava, Prachi; Kumar, Ravindra; Pathak, Ajey Kumar; Singh, Mahender; Kushwaha, Basdeo

    2017-02-01

    Several studies have highlighted changes in gene expression due to the hypoxia response in fishes, but a systematic organization of the information and an analytical platform for such genes have been lacking. In the present study, an attempt was made to develop a database of hypoxia responsive genes in fishes (HRGFish), integrated with analytical tools, using LAMPP technology. Genes reported in the hypoxia response of fishes were compiled through a literature survey, and the database presently covers 818 gene sequences and 35 gene types from 38 fishes. The upstream fragments (3,000 bp) covered in this database enable computing CG dinucleotide frequencies, motif finding of the hypoxia response element, identification of CpG islands and mapping with the reference promoter of zebrafish. The database also includes functional annotation of genes and provides tools for analyzing sequences and designing primers for selected gene fragments. This may be the first database on hypoxia response genes in fishes that provides a workbench to the scientific community involved in studying the evolution and ecological adaptation of fish species in relation to hypoxia.
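
    As an illustration of the promoter statistics mentioned (CG dinucleotide frequency and CpG-island criteria), the classic Gardiner-Garden and Frommer test can be computed in a few lines; the sequence below is invented, and this is not HRGFish code.

    ```python
    # CpG-island style statistics for a promoter fragment: GC content and
    # the observed/expected CpG ratio of Gardiner-Garden & Frommer (island
    # if GC > 50% and obs/exp > 0.6). Invented sequence; not HRGFish code.
    def cpg_stats(seq: str) -> tuple[float, float]:
        seq = seq.upper()
        n, c, g = len(seq), seq.count("C"), seq.count("G")
        cg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
        gc_content = (c + g) / n
        obs_exp = (cg * n) / (c * g) if c and g else 0.0
        return gc_content, obs_exp

    gc, ratio = cpg_stats("CGGCGCATCGCGTTACGCGGAGCGCGTACGCG")  # hypothetical
    print(f"GC={gc:.1%}  CpG obs/exp={ratio:.2f}  island-like={gc > 0.5 and ratio > 0.6}")
    ```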

  17. Designing highly flexible and usable cyberinfrastructures for convergence.

    PubMed

    Herr, Bruce W; Huang, Weixia; Penumarthy, Shashikant; Börner, Katy

    2006-12-01

    This article presents the results of a 7-year-long quest into the development of a "dream tool" for our research in information science and scientometrics and more recently, network science. The results are two cyberinfrastructures (CI): The Cyberinfrastructure for Information Visualization and the Network Workbench that enjoy a growing national and interdisciplinary user community. Both CIs use the cyberinfrastructure shell (CIShell) software specification, which defines interfaces between data sets and algorithms/services and provides a means to bundle them into powerful tools and (Web) services. In fact, CIShell might be our major contribution to progress in convergence. Just as Wikipedia is an "empty shell" that empowers lay persons to share text, a CIShell implementation is an "empty shell" that empowers user communities to plug-and-play, share, compare and combine data sets, algorithms, and compute resources across national and disciplinary boundaries. It is argued here that CIs will not only transform the way science is conducted but also will play a major role in the diffusion of expertise, data sets, algorithms, and technologies across multiple disciplines and business sectors leading to a more integrative science.

  18. Pneumafil casing blower through moving reference frame (MRF) - A CFD simulation

    NASA Astrophysics Data System (ADS)

    Manivel, R.; Vijayanandh, R.; Babin, T.; Sriram, G.

    2018-05-01

    In this analysis work, the ring frame Pneumafil casing blower used in textile mills, with a power rating of 5 kW, has been simulated using a Computational Fluid Dynamics (CFD) code. The CFD analysis of the blower is carried out in Ansys Workbench 16.2 with Fluent, using MRF solver settings. The simulation settings and boundary conditions are based on a literature study and field data acquired. The main objective of this work is to reduce the energy consumption of the blower. The flow analysis indicated that the power consumption is influenced by the orientation of the deflector plate and by the deflector plate strip situated at the outlet casing of the blower. The energy losses in the blower are due to the recirculation zones formed around the deflector plate strip. The deflector plate orientation was changed and optimized to reduce the energy consumption. The proposed optimized model, based on the simulation results, showed relatively lower power consumption than the existing and other cases. The energy losses in the Pneumafil casing blower are thus reduced through CFD analysis.
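
    A typical post-processing step in such an MRF study is to convert the torque reported on the rotating wall into shaft power. The sketch below is my addition, not taken from the paper, and the torque and speed values are hypothetical.

    ```python
    # Minimal sketch (assumed post-processing step, not from the paper):
    # an MRF run reports the torque on the impeller wall; shaft power
    # then follows from P = tau * omega.
    import math

    def shaft_power_kw(torque_nm: float, rpm: float) -> float:
        omega = 2.0 * math.pi * rpm / 60.0   # rotational speed in rad/s
        return torque_nm * omega / 1000.0    # kW

    # e.g. a reported torque of 16 N*m at 2,900 rpm (hypothetical values)
    print(f"{shaft_power_kw(16.0, 2900.0):.1f} kW")
    ```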

  19. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on correct water chemistry. In order to ensure it, the heat-carrier quality must be monitored, which, in its turn, is done by the chemical monitoring system. Thus, the monitoring system's reliability plays an important part in providing reliability of the main equipment. The monitoring system reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Power plant personnel dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system, so that they can remove the fault quickly. A computer consultant system for personnel maintaining the sensors and other chemical monitoring equipment can help to notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of part of the chemical monitoring system.
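
    To make the idea concrete, the sketch below shows two simple checks a consultant system of this kind could run on a sensor stream: a range violation and a "frozen" reading that stops changing. The rules and thresholds are my assumptions, not details from the paper.

    ```python
    # Hedged illustration (not from the paper): two plausible checks on a
    # conductivity/pH sensor stream, range violation and a frozen signal.
    from collections import deque

    def check_sensor(samples, lo, hi, window=20, eps=1e-6):
        recent = deque(samples, maxlen=window)   # keep the last N readings
        faults = []
        if any(not (lo <= s <= hi) for s in recent):
            faults.append("out-of-range")
        if len(recent) == window and max(recent) - min(recent) < eps:
            faults.append("frozen-signal")       # possible sensor/line failure
        return faults

    print(check_sensor([8.9] * 25, lo=8.5, hi=9.5))  # -> ['frozen-signal']
    ```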

  20. 24. BUILDING NO. 452, ORDNANCE FACILITY (BAG CHARGE FILLING PLANT), ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    24. BUILDING NO. 452, ORDNANCE FACILITY (BAG CHARGE FILLING PLANT), INTERIOR VIEW LOOKING WEST AT NORTH END OF CENTRAL CORRIDOR (ROOM 3). STAIRWAY WORKBENCH WITH COMPRESSED-AIR POWERED CARTRIDGE LOADER. ARMORED PASS-THROUGH OF TRANSFER BOX FOR PASSING EXPLOSIVES MATERIALS THROUGH TO NEXT ROOM TO THE NORTH. - Picatinny Arsenal, 400 Area, Gun Bag Loading District, State Route 15 near I-80, Dover, Morris County, NJ

  1. Empirical Network Model of Human Higher Cognitive Brain Functions

    DTIC Science & Technology

    1990-03-31

    AFOSR grant F49620-87-0047. ... the "Workbench", an interactive exploratory data analysis and display program. Other technical developments include development of methods and programs... feedback. Electroencephalogr. clin. Neurophysiol., 74:147-160. Illes, J. (1989) Neurolinguistic features of spontaneous language production

  2. Simulation Analysis of Fluid-Structure Interaction of High Velocity Environment Influence on Aircraft Wing Materials under Different Mach Numbers.

    PubMed

    Zhang, Lijun; Sun, Changyan

    2018-04-18

    The aircraft service process involves a state of composite pressure and temperature loading over long periods of time, which inevitably affects the inherent characteristics of some components in the aircraft. In this paper, the flow field around aircraft wing materials under different Mach numbers is simulated in Fluent in order to extract the pressure and temperature on the wing. To determine the effect of the coupled stress on the wing's material and structural properties, the fluid-structure interaction (FSI) method is used in ANSYS-Workbench to calculate the stress caused by pressure and temperature. Simulation results show that, with increasing Mach number, the pressure and temperature on the wing's surface both increase exponentially, and the thermal stress caused by temperature becomes the main factor in the coupled stress. Compared with three other materials (titanium alloy, aluminum alloy, and Haynes alloy), carbon fiber composite material performs better in service at high speed, and its natural frequency under coupled pre-stress becomes smaller.
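
    A standard compressible-flow relation, added here as background and not taken from the paper, shows why aerodynamic heating dominates at high Mach: the stagnation temperature on the surface grows with the square of the Mach number. The ambient temperature below is an assumed standard-atmosphere value.

    ```python
    # Hedged side calculation (my addition, not the paper's model): the
    # stagnation temperature T0 = T * (1 + (gamma - 1)/2 * M**2) rises
    # steeply with Mach number, which is why thermal stress dominates.
    GAMMA = 1.4          # ratio of specific heats for air
    T_AMBIENT = 216.65   # K, standard atmosphere at cruise altitude (assumed)

    def stagnation_temperature(mach: float, t_static: float = T_AMBIENT) -> float:
        return t_static * (1.0 + 0.5 * (GAMMA - 1.0) * mach**2)

    for m in (1.0, 2.0, 3.0, 4.0):
        print(f"M={m}: T0 = {stagnation_temperature(m):.0f} K")
    ```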

  3. Simulation Analysis of Fluid-Structure Interaction of High Velocity Environment Influence on Aircraft Wing Materials under Different Mach Numbers

    PubMed Central

    Sun, Changyan

    2018-01-01

    The aircraft service process involves a state of composite pressure and temperature loading over long periods of time, which inevitably affects the inherent characteristics of some components in the aircraft. In this paper, the flow field around aircraft wing materials under different Mach numbers is simulated in Fluent in order to extract the pressure and temperature on the wing. To determine the effect of the coupled stress on the wing's material and structural properties, the fluid-structure interaction (FSI) method is used in ANSYS-Workbench to calculate the stress caused by pressure and temperature. Simulation results show that, with increasing Mach number, the pressure and temperature on the wing's surface both increase exponentially, and the thermal stress caused by temperature becomes the main factor in the coupled stress. Compared with three other materials (titanium alloy, aluminum alloy, and Haynes alloy), carbon fiber composite material performs better in service at high speed, and its natural frequency under coupled pre-stress becomes smaller. PMID:29670023

  4. The Lagrangian Ensemble metamodel for simulating plankton ecosystems

    NASA Astrophysics Data System (ADS)

    Woods, J. D.

    2005-10-01

    This paper presents a detailed account of the Lagrangian Ensemble (LE) metamodel for simulating plankton ecosystems. It uses agent-based modelling to describe the life histories of many thousands of individual plankters. The demography of each plankton population is computed from those life histories. So too is bio-optical and biochemical feedback to the environment. The resulting “virtual ecosystem” is a comprehensive simulation of the plankton ecosystem. It is based on phenotypic equations for individual micro-organisms. LE modelling differs significantly from population-based modelling. The latter uses prognostic equations to compute demography and biofeedback directly. LE modelling diagnoses them from the properties of individual micro-organisms, whose behaviour is computed from prognostic equations. That indirect approach permits the ecosystem to adjust gracefully to changes in exogenous forcing. The paper starts with theory: it defines the Lagrangian Ensemble metamodel and explains how LE code performs a number of computations “behind the curtain”. They include budgeting chemicals, and deriving biofeedback and demography from individuals. The next section describes the practice of LE modelling. It starts with designing a model that complies with the LE metamodel. Then it describes the scenario for exogenous properties that provides the computation with initial and boundary conditions. These procedures differ significantly from those used in population-based modelling. The next section shows how LE modelling is used in research, teaching and planning. The practice depends largely on hindcasting to overcome the limits to predictability of weather forecasting. The scientific method explains observable ecosystem phenomena in terms of finer-grained processes that cannot be observed, but which are controlled by the basic laws of physics, chemistry and biology. What-If? Prediction (WIP), used for planning, extends hindcasting by adding events that describe natural or man-made hazards and remedial actions. Verification is based on the Ecological Turing Test, which takes account of uncertainties in the observed and simulated versions of a target ecological phenomenon. The rest of the paper is devoted to a case study designed to show what LE modelling offers the biological oceanographer. The case study is presented in two parts. The first documents the WB model (Woods & Barkmann, 1994) and the scenario used to simulate the ecosystem in a mesocosm moored in deep water off the Azores. The second part illustrates the emergent properties of that virtual ecosystem. The behaviour and development of an individual plankton lineage are revealed by an audit trail of the agent used in the computation. The fields of environmental properties reveal the impact of biofeedback. The fields of demographic properties show how changes in individuals cumulatively affect the birth and death rates of their population. This case study documents the virtual ecosystem used by Woods, Perilli and Barkmann (2005; hereafter WPB) to investigate the stability of simulations created by the Lagrangian Ensemble metamodel. The Azores virtual ecosystem was created and analysed on the Virtual Ecology Workbench (VEW), which is described briefly in the Appendix.
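
    The core LE idea, demography diagnosed from individual life histories rather than integrated directly, can be caricatured in a few lines. The sketch below is a toy of my own construction: the rules, thresholds and parameters are invented and bear no relation to the WB model's phenotypic equations.

    ```python
    # Toy sketch of the Lagrangian Ensemble idea (assumptions mine, not
    # the WB model): agents follow simple phenotypic rules; population
    # demography is *diagnosed* by auditing the agents each day.
    import random

    class Plankter:
        def __init__(self, energy=1.0):
            self.energy = energy
            self.alive = True

        def step(self, light):
            self.energy += 0.1 * light - 0.05   # photosynthesis - respiration
            if self.energy <= 0.0:
                self.alive = False               # death by starvation
            elif self.energy >= 2.0:             # division threshold
                self.energy /= 2.0
                return Plankter(self.energy)     # offspring agent
            return None

    agents = [Plankter(random.uniform(0.5, 1.5)) for _ in range(1000)]
    for day in range(30):
        light = max(0.0, random.gauss(1.0, 0.3))  # exogenous forcing
        born = []
        for a in agents:
            child = a.step(light)
            if child:
                born.append(child)
        agents = [a for a in agents if a.alive] + born
        # demography diagnosed from individuals, as in LE modelling:
        print(day, "population:", len(agents), "births today:", len(born))
    ```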

  5. CMG-Biotools, a Free Workbench for Basic Comparative Microbial Genomics

    PubMed Central

    Vesth, Tammi; Lagesen, Karin; Acar, Öncel; Ussery, David

    2013-01-01

    Background Today, there are more than a hundred times as many sequenced prokaryotic genomes as were present in the year 2000. The economical sequencing of genomic DNA has facilitated a whole new approach to microbial genomics. The real power of genomics is manifested through comparative genomics, which can reveal strain-specific characteristics, diversity within species and many other aspects. However, comparative genomics is a field not easily entered into by scientists with few computational skills. The CMG-biotools package is designed for microbiologists with limited knowledge of computational analysis and can be used to perform a number of analyses and comparisons of genomic data. Results The CMG-biotools system presents a stand-alone interface for comparative microbial genomics. The package is a customized operating system, based on Xubuntu 10.10, available through the open source Ubuntu project. The system can be installed on a virtual computer, allowing the user to run the system alongside any other operating system. Source codes for all programs are provided under GNU license, which makes it possible to transfer the programs to other systems if so desired. We here demonstrate the package by comparing and analyzing the diversity within the class Negativicutes, represented by 31 genomes including 10 genera. The analyses include 16S rRNA phylogeny, basic DNA and codon statistics, proteome comparisons using BLAST and graphical analyses of DNA structures. Conclusion This paper shows the strength and diverse use of the CMG-biotools system. The system can be installed on a wide range of host operating systems and utilizes as much of the host computer as desired. It allows the user to compare multiple genomes from various sources using standardized data formats and intuitive visualizations of results. The examples presented here clearly show that users with limited computational experience can perform complicated analyses without much training. PMID:23577086
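
    The "basic DNA and codon statistics" mentioned above are simple enough to sketch; the example below is illustrative only and is not CMG-biotools code.

    ```python
    # Illustrative only (not CMG-biotools code): the kind of basic DNA
    # and codon statistics computed when comparing genomes.
    from collections import Counter

    def dna_stats(seq: str) -> dict:
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        # Read the sequence in consecutive, non-overlapping triplets.
        codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
        return {"length": len(seq),
                "gc_fraction": gc / (at + gc),
                "top_codons": codons.most_common(3)}

    print(dna_stats("ATGGCGTGCACGTGGCTAGCGGCTTAA"))
    ```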

  6. Computational Materials Program for Alloy Design

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo

    2005-01-01

    The research program sponsored by this grant, "Computational Materials Program for Alloy Design", covers a period of time of enormous change in the emerging field of computational materials science. The computational materials program started with the development of the BFS method for alloys, a quantum approximate method for atomistic analysis of alloys specifically tailored to effectively deal with the current challenges in the area of atomistic modeling and to support modern experimental programs. During the grant period, the program benefited from steady growth which, as detailed below, far exceeded its original set of goals and objectives. Not surprisingly, by the end of this grant, the methodology and the computational materials program became an established force in the materials community, with substantial impact in several areas. Major achievements during the duration of the grant include the completion of a Level 1 Milestone for the HITEMP program at NASA Glenn, consisting of the planning, development and organization of an international conference held at the Ohio Aerospace Institute in August of 2002, finalizing a period of rapid insertion of the methodology in the research community worldwide. The conference, attended by citizens of 17 countries representing various fields of the research community, resulted in a special issue of the leading journal in the area of applied surface science. Another element of the Level 1 Milestone was the presentation of the first version of the Alloy Design Workbench software package, currently known as "adwTools". This software package constitutes the first PC-based piece of software for atomistic simulations of both solid alloys and surfaces in the market. Dissemination of results and insertion in the materials community worldwide was a primary focus during this period. As a result, the P.I. was responsible for presenting 37 contributed talks, 19 invited talks, and publishing 71 articles in peer-reviewed journals, as detailed later in this Report.

  7. Workstations and gloveboxes for space station

    NASA Technical Reports Server (NTRS)

    Junge, Maria

    1990-01-01

    Lockheed Missiles and Space Company is responsible for designing, developing, and building the Life Sciences Glovebox, the Laboratory Sciences Workbench, and the Maintenance Workstation plus 16 other pieces of equipment for the U.S. Laboratory Module of the Space Station Freedom. The Laboratory Sciences Workbench and the Maintenance Workstation were functionally combined into a double structure to save weight and volume which are important commodities on the Space Station Freedom. The total volume of these items is approximately 180 cubic feet. These workstations and the glovebox will be delivered to NASA in 1994 and will be launched in 1995. The very long lifetime of 30 years presents numerous technical challenges in the areas of design and reliability. The equipment must be easy to use by international crew members and also easy to maintain on-orbit. For example, seals must be capable of on-orbit changeout and reverification. The stringent contamination requirements established for Space Station Freedom equipment also complicate the zero gravity glovebox design. The current contamination control system for the Life Sciences Glovebox and the Maintenance Workstation is presented. The requirement for the Life Sciences Glovebox to safely contain toxic, reactive, and radioactive materials presents challenges. Trade studies, CAD simulation techniques and design challenges are discussed to illustrate the current baseline conceptual designs. Areas which need input from the user community are identified.

  8. AWOB: A Collaborative Workbench for Astronomers

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lemson, G.; Bulatovic, N.; Makarenko, V.; Vogler, A.; Voges, W.; Yao, Y.; Kiefl, R.; Koychev, S.

    2015-09-01

    We present the Astronomers Workbench (AWOB), a web-based collaboration and publication platform for a scientific project of any size, developed in collaboration between the Max-Planck institutes of Astrophysics (MPA) and Extra-terrestrial Physics (MPE) and the Max-Planck Digital Library (MPDL). AWOB facilitates the collaboration between geographically distributed astronomers working on a common project throughout its whole scientific life cycle. AWOB does so by making it very easy for scientists to set up and manage a collaborative workspace for individual projects, where data can be uploaded and shared. It supports inviting project collaborators, provides wikis, automated mailing lists, calendars and event notification, and has a built-in chat facility. It allows the definition and tracking of tasks within projects and supports easy creation of e-publications for the dissemination of data, images and other resources that cannot be added to submitted papers. AWOB extends the project concept to larger-scale consortia, within which it is possible to manage working groups and sub-projects. The existing AWOB instance has so far been limited to Max-Planck members and their collaborators, but will be opened to the whole astronomical community. AWOB is an open-source project and its source code is available upon request. We intend to extend AWOB's functionality also to other disciplines, and would greatly appreciate contributions from the community.

  9. Visualization of Uncertainty

    NASA Astrophysics Data System (ADS)

    Jones, P. W.; Strelitz, R. A.

    2012-12-01

    The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable; there is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty without sacrificing the clarity and power of the underlying visualization that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as defined by this density field; the problem thus reduces to a minimization problem. Computing such a spatial decomposition is O(N²), and it can be computed iteratively, making it straightforward to update over time and to accelerate. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are analogous to the cartograms familiar to the information visualization community from depictions of, for example, voting results per state. Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs at the generating points (effectively the centers of the polygons). This methodology readily admits rigorous statistical analysis using standard components found in R and is thus entirely compatible with the visualization packages we use (VisIt and/or ParaView), the language we use (Python) and the UVCDAT environment that provides the programmer and analyst workbench. We will demonstrate the power and effectiveness of this methodology in climate studies. We will further argue that our method of defining (or predicting) values in a region has many advantages over the traditional visualization notion of value at a point.
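
    A rough way to approximate such a density-adaptive tessellation is Lloyd's algorithm run on points sampled in proportion to the uncertainty density, so that cells shrink where uncertainty concentrates. The sketch below is my construction under that assumption, not the authors' algorithm; the density and cell counts are invented.

    ```python
    # Rough sketch (my construction, not the authors' code): approximate
    # an equal-content tessellation by Lloyd's algorithm on points drawn
    # in proportion to a toy uncertainty density on the unit square.
    import numpy as np

    rng = np.random.default_rng(0)

    def density(xy):  # assumed toy uncertainty density, peak at (0.7, 0.3)
        return np.exp(-8.0 * ((xy[:, 0] - 0.7)**2 + (xy[:, 1] - 0.3)**2)) + 0.05

    # Rejection-sample points from the density.
    cand = rng.random((50_000, 2))
    samples = cand[rng.random(len(cand)) < density(cand) / 1.05]

    # Lloyd iterations: assign samples to the nearest generator, then move
    # each generator to the mean of its samples (discrete Lloyd == k-means).
    gen = rng.random((40, 2))
    for _ in range(25):
        d2 = ((samples[:, None, :] - gen[None, :, :])**2).sum(-1)
        label = d2.argmin(axis=1)
        for k in range(len(gen)):
            pts = samples[label == k]
            if len(pts):
                gen[k] = pts.mean(axis=0)

    counts = np.bincount(label, minlength=len(gen))
    print("samples ('uncertainty content') per cell:", counts.min(), "to", counts.max())
    ```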

  10. Performance Improvements to the Naval Postgraduate School Turbopropulsion Labs Transonic Axially Splittered Rotor

    DTIC Science & Technology

    2013-12-01

    Implementation of the current NPS TPL design procedure that uses COTS software (MATLAB, SolidWorks, and ANSYS-CFX) for the geometric rendering and... procedure that uses commercial-off-the-shelf software (MATLAB, SolidWorks, and ANSYS-CFX) for the geometric rendering and analysis was modified and... CFX: the CFD simulation program in ANSYS Workbench. CFX-Pre: CFX boundary conditions and solver settings module. CFX-Solver: CFX solver program. ...

  11. Development of an ATR Workbench for SAR Imagery

    DTIC Science & Technology

    2002-12-01

    containing a representation of the object. Each image representation contains only a subset of information about t, and that information is often... GUI. In the case of HNeT under Windows, the native COM/ActiveX automation interface is used. This provides Python with direct access to the many... that can contain other objects such as a menu bar, buttons, an image display area, text box, etc. The library also provides an event handling mechanism

  12. Extending the Kerberos Protocol for Distributed Data as a Service

    DTIC Science & Technology

    2012-09-20

    exported as a UIMA [11] PEAR file for deployment to IBM Content Analytics (ICA). A UIMA PEAR file is a deployable text analytics "pipeline" (analogous... to a web application packaged in a WAR file). ICA is a text analysis and search application that supports UIMA. The key entities targeted by NLP rules... workbench. [Online]. Available: https://www.ibm.com/developerworks/community/alphaworks/lrw/ [11] Apache UIMA. [Online]. Available: http

  13. Interoperable Open-Source Sensor-Net Frameworks with Sensor-Package Workbench Capabilities: Motivation, Survey of Resources, and Exploratory Results

    DTIC Science & Technology

    2010-06-01

    Military Scenario Definition Language (MSDL) for Nontraditional Warfare Scenarios," Paper 09S-SIW-001, Proceedings of the Spring Simulation... Update to the M&S Community," Paper 09S-SIW-002, Proceedings of the Spring Simulation Interoperability Workshop, Simulation Interoperability... Multiple Simulations: An Application of the Military Scenario Definition Language (MSDL)," Paper 09S-SIW-003, Proc. of the Spring Simulation

  14. Argo: an integrative, interactive, text mining-based workbench supporting curation

    PubMed Central

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844
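
    The architectural pattern described above, interoperable components sharing one annotation data structure and chained into a workflow, with interactive steps embedded in automatic pipelines, can be sketched schematically. The example below is my illustration of that pattern, not Argo or UIMA code; the component names and toy annotations are invented.

    ```python
    # Schematic sketch (mine, not Argo's implementation): components share
    # one annotation store and chain into a pipeline, with a placeholder
    # for an interactive manual-annotation step.
    from dataclasses import dataclass, field

    @dataclass
    class Document:
        text: str
        annotations: list = field(default_factory=list)  # shared structure

    def sentence_splitter(doc: Document) -> Document:
        start = 0
        for i, ch in enumerate(doc.text):
            if ch == ".":
                doc.annotations.append(("Sentence", start, i + 1))
                start = i + 1
        return doc

    def gene_tagger(doc: Document) -> Document:          # toy dictionary tagger
        for name in ("BRCA1", "TP53"):
            pos = doc.text.find(name)
            if pos >= 0:
                doc.annotations.append(("Gene", pos, pos + len(name)))
        return doc

    def manual_editor(doc: Document) -> Document:
        # Placeholder for an interactive component: a curator could accept,
        # reject, or add annotations here before the pipeline continues.
        return doc

    pipeline = [sentence_splitter, gene_tagger, manual_editor]
    doc = Document("BRCA1 interacts with TP53. This is curated text.")
    for component in pipeline:
        doc = component(doc)
    print(doc.annotations)
    ```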

  15. Investigation of biomechanical behavior of lumbar vertebral segments with dynamic stabilization device using finite element approach

    NASA Astrophysics Data System (ADS)

    Deoghare, Ashish B.; Kashyap, Siddharth; Padole, Pramod M.

    2013-03-01

    Degenerative disc disease is a major source of lower back pain and significantly alters the biomechanics of the lumbar spine. The dynamic stabilization device is a remedial technique which uses flexible materials to stabilize the affected lumbar region while preserving the natural anatomy of the spine. The main objective of this research work is to investigate the stiffness variation of the dynamic stabilization device under various loading conditions: compression, axial rotation and flexion. A three-dimensional model of the two-segment lumbar spine is developed using computed tomography (CT) scan images. The lumbar structure developed is analyzed in ANSYS Workbench. Two types of dynamic stabilization are considered: one with the stabilizing device as pedicle instrumentation, and a second with the stabilization device inserted around the inter-vertebral disc. The analysis suggests that proper positioning of the dynamic stabilization device is of paramount significance prior to surgery. Inserting the device in the posterior region shows adverse effects, as it increases the deformation of the inter-vertebral disc. The analysis executed with the stabilizing device positioned around the inter-vertebral disc yields better results for various stiffness values under compression and other loadings.

  16. Analysis of Three-dimension Viscous Flow in the Model Axial Compressor Stage K1002L

    NASA Astrophysics Data System (ADS)

    Tribunskaia, K.; Kozhukhov, Y. V.

    2017-08-01

    The main subject of investigation in this paper is the axial compressor model stage K1002L. Three simulation models were designed: Scheme 1, an inlet stage model consisting of the IGV (Inlet Guide Vane), rotor and diffuser; Scheme 2, a two-stage model: IGV, first-stage rotor, first-stage diffuser, second-stage rotor, EGV (Exit Guide Vane); Scheme 3, a full-round model: IGV, rotor, diffuser. Numerical investigation of the model stage was carried out for four circumferential velocities at the outer diameter (Uout = 125, 160, 180, 210 m/s) within the flow coefficient range ϕ = 0.4-0.6. The computational domain was created with ANSYS CFX Workbench. From the simulation results, aerodynamic characteristic curves of the adiabatic efficiency and the adiabatic head coefficient, calculated for total parameters, were constructed and compared with data from the full-scale test performed at the Central Boiler and Turbine Institution (CBTI); the calculated data were thus verified. Moreover, the following studies were conducted: comparison of the aerodynamic characteristics of Schemes 1 and 2, and comparison of the sector and full-round models. The analysis and conclusions are supplemented by a gas-dynamic method calculation for the axial compressor stages.
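
    For orientation, the sketch below evaluates an adiabatic head coefficient from total conditions. The definitions are standard textbook ones and assumed by me; the paper may normalize differently (for instance by Uout²/2), and the numbers are hypothetical.

    ```python
    # Hedged helper (definitions assumed, not taken from the paper):
    # isentropic (adiabatic) head and a head coefficient for a stage.
    GAMMA, R = 1.4, 287.0   # air

    def adiabatic_head(t1_tot: float, p_ratio: float) -> float:
        """Isentropic enthalpy rise (J/kg) for total pressure ratio p_ratio."""
        cp = GAMMA * R / (GAMMA - 1.0)
        return cp * t1_tot * (p_ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0)

    def head_coefficient(h_ad: float, u_out: float) -> float:
        return h_ad / u_out**2   # one common normalization (assumed)

    h = adiabatic_head(t1_tot=288.0, p_ratio=1.15)   # hypothetical stage
    print(head_coefficient(h, u_out=210.0))          # e.g. Uout = 210 m/s
    ```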

  17. The TimeStudio Project: An open source scientific workflow system for the behavioral and brain sciences.

    PubMed

    Nyström, Pär; Falck-Ytter, Terje; Gredebäck, Gustaf

    2016-06-01

    This article describes a new open source scientific workflow system, the TimeStudio Project, dedicated to the behavioral and brain sciences. The program is written in MATLAB and features a graphical user interface for the dynamic pipelining of computer algorithms developed as TimeStudio plugins. TimeStudio includes both a set of general plugins (for reading data files, modifying data structures, visualizing data structures, etc.) and a set of plugins specifically developed for the analysis of event-related eyetracking data as a proof of concept. It is possible to create custom plugins to integrate new or existing MATLAB code anywhere in a workflow, making TimeStudio a flexible workbench for organizing and performing a wide range of analyses. The system also features an integrated sharing and archiving tool for TimeStudio workflows, which can be used to share workflows both during the data analysis phase and after scientific publication. TimeStudio thus facilitates the reproduction and replication of scientific studies, increases the transparency of analyses, and reduces individual researchers' analysis workload. The project website (http://timestudioproject.com) contains the latest releases of TimeStudio, together with documentation and user forums.

  18. Computational Modeling of Blast Wave Transmission Through Human Ear.

    PubMed

    Leckness, Kegan; Nakmali, Don; Gan, Rong Z

    2018-03-01

    Hearing loss has become the most common disability among veterans. Understanding how blast waves propagate through the human ear is a necessary step in the development of effective hearing protection devices (HPDs). This article presents the first 3D finite element (FE) model of the human ear to simulate blast wave transmission through the ear. The 3D FE model of the human ear, consisting of the ear canal, tympanic membrane, ossicular chain, and middle ear cavity, was imported into ANSYS Workbench for coupled fluid-structure interaction analysis in the time domain. Blast pressure waveforms recorded external to the ear in human cadaver temporal bone tests were applied at the entrance of the ear canal in the model. The pressure waveforms near the tympanic membrane (TM) in the canal (P1) and behind the TM in the middle ear cavity (P2) were calculated. The model-predicted results were then compared with measured P1 and P2 waveforms recorded in human cadaver ears during blast tests. Results show that the model-derived P1 waveforms were in agreement with the experimentally recorded waveforms, as supported by statistical kurtosis analysis. The FE model will be used for the evaluation of HPDs in future studies.
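
    The kurtosis comparison mentioned in the abstract is easy to reproduce in outline. The sketch below is my example on toy waveforms, not the study's pipeline or data.

    ```python
    # Hedged sketch (my example, not the study's pipeline): compare a
    # model-predicted and a measured pressure waveform by peak value and
    # by kurtosis, the statistic the abstract mentions.
    import numpy as np
    from scipy.stats import kurtosis

    t = np.linspace(0.0, 5e-3, 2000)                       # 5 ms window
    measured = 40e3 * np.exp(-t / 6e-4) * np.cos(2e4 * t)  # toy blast-like wave
    predicted = 0.92 * measured + 500.0 * np.random.default_rng(1).standard_normal(t.size)

    for name, p in (("measured", measured), ("predicted", predicted)):
        print(name, "peak:", f"{p.max() / 1e3:.1f} kPa",
              "kurtosis:", f"{kurtosis(p):.2f}")
    ```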

  19. Influenza Research Database: An integrated bioinformatics resource for influenza virus research.

    PubMed

    Zhang, Yun; Aevermann, Brian D; Anderson, Tavis K; Burke, David F; Dauphin, Gwenaelle; Gu, Zhiping; He, Sherry; Kumar, Sanjeev; Larsen, Christopher N; Lee, Alexandra J; Li, Xiaomei; Macken, Catherine; Mahaffey, Colin; Pickett, Brett E; Reardon, Brian; Smith, Thomas; Stewart, Lucy; Suloway, Christian; Sun, Guangyu; Tong, Lei; Vincent, Amy L; Walters, Bryan; Zaremba, Sam; Zhao, Hongtao; Zhou, Liwei; Zmasek, Christian; Klem, Edward B; Scheuermann, Richard H

    2017-01-04

    The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics and therapeutics against influenza virus by providing a comprehensive collection of influenza-related data integrated from various sources, a growing suite of analysis and visualization tools for data mining and hypothesis generation, personal workbench spaces for data storage and sharing, and active user community support. Here, we describe the recent improvements in IRD including the use of cloud and high performance computing resources, analysis and visualization of user-provided sequence data with associated metadata, predictions of novel variant proteins, annotations of phenotype-associated sequence markers and their predicted phenotypic effects, hemagglutinin (HA) clade classifications, an automated tool for HA subtype numbering conversion, linkouts to disease event data and the addition of host factor and antiviral drug components. All data and tools are freely available without restriction from the IRD website at https://www.fludb.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. An expert system shell for inferring vegetation characteristics: The learning system (tasks C and D)

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1992-01-01

    This report describes the implementation of a learning system that uses a data base of historical cover type reflectance data taken at different solar zenith angles and wavelengths to learn class descriptions of classes of cover types. It has been integrated with the VEG system and requires that the VEG system be loaded to operate. VEG is the NASA VEGetation workbench - an expert system for inferring vegetation characteristics from reflectance data. The learning system provides three basic options. Using option one, the system learns class descriptions of one or more classes. Using option two, the system learns class descriptions of one or more classes and then uses the learned classes to classify an unknown sample. Using option three, the user can test the system's classification performance. The learning system can also be run in an automatic mode. In this mode, options two and three are executed on each sample from an input file. The system was developed using KEE. It is menu driven and contains a sophisticated window and mouse driven interface which guides the user through various computations. Input and output file management and data formatting facilities are also provided.

  1. A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.

    PubMed

    Smelter, Andrey; Moseley, Hunter N B

    2018-01-01

    The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework as well as a continuous integration service to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
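
    A usage sketch of the described interfaces follows. The `read_files` generator and the `write` method reflect the package's documented API as best recalled, but the exact names and signatures, and the file name used here, should be treated as assumptions to be checked against the current 'mwtab' documentation.

    ```python
    # Hedged usage sketch for the 'mwtab' package; method names follow its
    # docs as best recalled (read_files, write) and may differ in detail.
    import mwtab

    # read_files is documented to accept paths, URLs, or analysis IDs;
    # the file name below is hypothetical.
    for mwfile in mwtab.read_files("ST000017_AN000035.txt"):
        print(mwfile.study_id, mwfile.analysis_id)    # metadata attributes
        with open("ST000017_AN000035.json", "w") as out:
            mwfile.write(out, file_format="json")     # mwTab -> JSON
    ```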

  2. VERCE, Virtual Earthquake and Seismology Research Community in Europe, a new ESFRI initiative integrating data infrastructure, Grid and HPC infrastructures for data integration, data analysis and data modeling in seismology

    NASA Astrophysics Data System (ADS)

    van Hemert, Jano; Vilotte, Jean-Pierre

    2010-05-01

    Research in earthquake science and seismology addresses fundamental problems in understanding the Earth's internal wave sources and structures, and supports applications addressing societal concerns about natural hazards, energy resources and environmental change. This community is central to the European Plate Observing System (EPOS)—the ESFRI initiative in solid Earth Sciences. Global and regional seismology monitoring systems are continuously operated and are transmitting a growing wealth of data from Europe and from around the world. These tremendous volumes of seismograms, i.e., records of ground motions as a function of time, have a definite multi-use attribute, which puts a great premium on open-access data infrastructures that are integrated globally. In Europe, the earthquake and seismology community is part of the European Integrated Data Archives (EIDA) infrastructure and is structured as "horizontal" data services. On top of this distributed data archive system, the community has recently developed, within the EC project NERIES, advanced SOA-based web services and a unified portal system. Enabling advanced analysis of these data by utilising a data-aware distributed computing environment is instrumental to fully exploit the cornucopia of data and to guarantee optimal operation of the high-cost monitoring facilities. The strategy of VERCE is driven by the needs of data-intensive applications in data mining and modelling and will be illustrated through a set of applications. It aims to provide a comprehensive architecture and framework adapted to the scale and the diversity of these applications, and to integrate the community data infrastructure with Grid and HPC infrastructures. A first novel aspect is a service-oriented architecture that provides well-equipped integrated workbenches, with an efficient communication layer between data and Grid infrastructures, augmented with bridges to the HPC facilities. A second novel aspect is the coupling between Grid data analysis and HPC data modelling applications through workflow and data sharing mechanisms. VERCE will develop important interactions with the European infrastructure initiatives in Grid and HPC computing. The VERCE team: CNRS-France (IPG Paris, LGIT Grenoble), UEDIN (UK), KNMI-ORFEUS (Holland), EMSC, INGV (Italy), LMU (Germany), ULIV (UK), BADW-LRZ (Germany), SCAI (Germany), CINECA (Italy)

  3. Biomechanical Modeling and Measurement of Blast Injury and Hearing Protection Mechanisms

    DTIC Science & Technology

    2015-10-01

    ... software into Workbench V.15 in CFX/ANSYS; 2) building the geometry of the ear model with ossicular chain and cochlear load in CFX; 3) ... the ear canal to middle ear. The model consists of the ear canal, TM, middle ear ossicles and suspensory ligaments, middle ear cavity, and cochlear... the TM, ossicles, and ligaments/muscle tendons with the cochlear load applied on the stapes footplate. Fig. 21. Time-history plots of...

  4. Tardigrade workbench: comparing stress-related proteins, sequence-similar and functional protein clusters as well as RNA elements in tardigrades

    PubMed Central

    2009-01-01

    Background Tardigrades represent an animal phylum with extraordinary resistance to environmental stress. Results To gain insights into their stress-specific adaptation potential, major clusters of related and similar proteins are identified, as well as specific functional clusters delineated by comparing all tardigrades and individual species (Milnesium tardigradum, Hypsibius dujardini, Echiniscus testudo, Tulinus stephaniae, Richtersius coronifer), and functional elements in tardigrade mRNAs are analysed. We find that 39.3% of the total sequences clustered in 58 clusters of more than 20 proteins. Among these are ten tardigrade-specific clusters as well as a number of stress-specific protein clusters. Tardigrade-specific functional adaptations include strong protein, DNA- and redox protection, maintenance and protein recycling. Specific regulatory elements such as lox P DICE elements regulate tardigrade mRNA stability, whereas 14 other RNA elements of higher eukaryotes are not found. Further features of tardigrade-specific adaptation are rapidly identified by sequence and/or pattern search on the web-tool tardigrade analyzer http://waterbear.bioapps.biozentrum.uni-wuerzburg.de. The workbench offers nucleotide pattern analysis for promoter and regulatory element detection (tardigrade-specific; nrdb) as well as rapid COG search for function assignments, including species-specific repositories of all analysed data. Conclusion Different protein clusters and regulatory elements implicated in tardigrade stress adaptations are analysed, including unpublished tardigrade sequences. PMID:19821996

  5. Tardigrade workbench: comparing stress-related proteins, sequence-similar and functional protein clusters as well as RNA elements in tardigrades.

    PubMed

    Förster, Frank; Liang, Chunguang; Shkumatov, Alexander; Beisser, Daniela; Engelmann, Julia C; Schnölzer, Martina; Frohme, Marcus; Müller, Tobias; Schill, Ralph O; Dandekar, Thomas

    2009-10-12

    Tardigrades represent an animal phylum with extraordinary resistance to environmental stress. To gain insights into their stress-specific adaptation potential, major clusters of related and similar proteins are identified, as well as specific functional clusters delineated by comparing all tardigrades and individual species (Milnesium tardigradum, Hypsibius dujardini, Echiniscus testudo, Tulinus stephaniae, Richtersius coronifer), and functional elements in tardigrade mRNAs are analysed. We find that 39.3% of the total sequences clustered in 58 clusters of more than 20 proteins. Among these are ten tardigrade-specific clusters as well as a number of stress-specific protein clusters. Tardigrade-specific functional adaptations include strong protein, DNA- and redox protection, maintenance and protein recycling. Specific regulatory elements such as lox P DICE elements regulate tardigrade mRNA stability, whereas 14 other RNA elements of higher eukaryotes are not found. Further features of tardigrade-specific adaptation are rapidly identified by sequence and/or pattern search on the web-tool tardigrade analyzer http://waterbear.bioapps.biozentrum.uni-wuerzburg.de. The workbench offers nucleotide pattern analysis for promoter and regulatory element detection (tardigrade-specific; nrdb) as well as rapid COG search for function assignments, including species-specific repositories of all analysed data. Different protein clusters and regulatory elements implicated in tardigrade stress adaptations are analysed, including unpublished tardigrade sequences.

  6. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for imaging processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
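
    The two operations the paper teaches, window width/level adjustment and unsharp masking, can be written in a few lines and chained exactly the way a visual dataflow would wire them. The sketch below is illustrative only and is not the VIVA/NeXTstep implementation; the toy image and parameters are invented.

    ```python
    # Illustrative dataflow (not the VIVA/NeXT implementation): unsharp
    # masking followed by window width/level adjustment for display.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def window_level(img, level, width):
        lo, hi = level - width / 2.0, level + width / 2.0
        return np.clip((img - lo) / (hi - lo), 0.0, 1.0)   # map to [0, 1]

    def unsharp_mask(img, sigma=2.0, amount=1.5):
        blurred = gaussian_filter(img, sigma)
        return img + amount * (img - blurred)               # boost edges

    ct = np.random.default_rng(0).normal(40.0, 300.0, (128, 128))  # toy image
    display = window_level(unsharp_mask(ct), level=40.0, width=400.0)
    print(display.min(), display.max())
    ```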

  7. Numerical simulation of airflow around the evaporator in the closed space

    NASA Astrophysics Data System (ADS)

    Puchor, Tomáš; Banovčan, Roman; Lenhard, Richard

    2018-06-01

    The article deals with a numerical simulation of the forced airflow around an evaporator with finned tubes in an electrotechnical box, using the finite volume method in the program ANSYS Workbench. The work contains an analysis of the impact of forced airflow on the evaporator for various placements of the electrical components. The aim of the work is to find the most effective way of dissipating heat from the electrical components in the closed space by forced convection, with the lowest pressure loss.

  8. Scientific Workflow Management in Proteomics

    PubMed Central

    de Bruin, Jeroen S.; Deelder, André M.; Palmblad, Magnus

    2012-01-01

    Data processing in proteomics can be a challenging endeavor, requiring extensive knowledge of many different software packages, all with different algorithms, data format requirements, and user interfaces. In this article we describe the integration of a number of existing programs and tools in Taverna Workbench, a scientific workflow manager currently being developed in the bioinformatics community. We demonstrate how a workflow manager provides a single, visually clear and intuitive interface to complex data analysis tasks in proteomics, from raw mass spectrometry data to protein identifications and beyond. PMID:22411703

  9. Game design in virtual reality systems for stroke rehabilitation.

    PubMed

    Goude, Daniel; Björk, Staffan; Rydmark, Martin

    2007-01-01

    We propose a model for the structured design of games for post-stroke rehabilitation. The model is based on experiences with game development for a haptic and stereo vision immersive workbench intended for daily use in stroke patients' homes. A central component of this rehabilitation system is a library of games that are simultaneously entertaining for the patient and beneficial for rehabilitation [1], and where each game is designed for specific training tasks through the use of the model.

  10. How Many Feathers for the War Bonnet? A Groundwork for Distributing the Planning Function in Objectives Force Units of Employment

    DTIC Science & Technology

    2002-05-23

    22/02; David Tate, “VR in the Field: Hunter Warrior & JCOS/MCM Situational Awareness Using the Virtual Reality Responsive Workbench;” available from... [map-label residue: Fetterman, Union-Pacific Railroad, N. Platte, Missouri, Yellowstone, Bighorn, Powder and Little Missouri Rivers, Black Hills] ... Fetterman on March 1, 1876, and made contact with a Sioux band on the Powder River two weeks later. However, the lead Unit of Action failed to defeat

  11. Laser Doppler anemometry measurements of steady flow through two bi-leaflet prosthetic heart valves

    PubMed Central

    Bazan, Ovandir; Ortiz, Jayme Pinto; Vieira Junior, Francisco Ubaldo; Vieira, Reinaldo Wilson; Antunes, Nilson; Tabacow, Fabio Bittencourt Dutra; Costa, Eduardo Tavares; Petrucci Junior, Orlando

    2013-01-01

    Introduction In vitro hydrodynamic characterization of prosthetic heart valves provides important information regarding their operation, especially if performed by noninvasive techniques of anemometry. Once velocity profiles for each valve are provided, it is possible to compare them in terms of hydrodynamic performance. In this first experimental study using laser Doppler anemometry with mechanical valves, the simulations were performed on a steady flow workbench. Objective To compare unidimensional velocity profiles at the central plane of two bi-leaflet aortic prostheses from St. Jude (AGN 21 - 751 and 21 AJ - 501 models) exposed to a steady flow regime, on four distinct sections, three downstream and one upstream. Methods To provide similar conditions for the flow through each prosthesis on a steady flow workbench (water, flow rate of 17 L/min) and, for the same sections and sweeps, to obtain the velocity profiles of each heart valve by unidimensional measurements. Results It was found that higher velocities correspond to the prosthesis with the smaller inner diameter, and that flow instabilities are larger the closer the section of interest is to the valve. Regions of recirculation, flow stagnation, low pressure, and peak flow velocities were also found. Conclusions Considering the hydrodynamic aspect, and for every section measured, it can be concluded that the prosthesis model AGN 21 - 751 (RegentTM) is superior to the 21 AJ - 501 model (Master Series). Based on the results, future studies can choose to focus on specific regions of these valves. PMID:24598950

  12. Enabling a new Paradigm to Address Big Data and Open Science Challenges

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Mohan; Fisher, Ward

    2017-04-01

    Data are not only the lifeblood of the geosciences but they have become the currency of the modern world in science and society. Rapid advances in computing, communications, and observational technologies — along with concomitant advances in high-resolution modeling, ensemble and coupled-systems predictions of the Earth system — are revolutionizing nearly every aspect of our field. Modern data volumes from high-resolution ensemble prediction/projection/simulation systems and next-generation remote-sensing systems like hyper-spectral satellite sensors and phased-array radars are staggering. For example, CMIP efforts alone will generate many petabytes of climate projection data for use in assessments of climate change. And NOAA's National Climatic Data Center projects that it will archive over 350 petabytes by 2030. For researchers and educators, this deluge and the increasing complexity of data bring challenges along with the opportunities for discovery and scientific breakthroughs. The potential for big data to transform the geosciences is enormous, but realizing the next frontier depends on effectively managing, analyzing, and exploiting these heterogeneous data sources, extracting knowledge and useful information from heterogeneous data sources in ways that were previously impossible, to enable discoveries and gain new insights. At the same time, there is a growing focus on the area of "Reproducibility or Replicability in Science" that has implications for Open Science. The advent of cloud computing has opened new avenues not only for addressing big data and Open Science challenges but also for accelerating scientific discoveries. However, to successfully leverage the enormous potential of cloud technologies, the data providers and the scientific communities will need to develop new paradigms to enable next-generation workflows and transform the conduct of science. Making data readily available is a necessary but not a sufficient condition. Data providers also need to give scientists an ecosystem that includes data, tools, workflows and other services needed to perform analytics, integration, interpretation, and synthesis - all in the same environment or platform. Instead of moving data to processing systems near users, as is the tradition, the cloud permits one to bring processing, computing, analysis and visualization to the data - so-called data-proximate workbench capabilities, also known as server-side processing. In this talk, I will present the ongoing work at Unidata to facilitate a new paradigm for doing science by offering a suite of tools, resources, and platforms that leverage cloud technologies for addressing both big data and Open Science/reproducibility challenges. That work includes the development and deployment of new protocols for data access and server-side operations, Docker container images of key applications, JupyterHub Python notebook tools, and cloud-based analysis and visualization capability via the CloudIDV tool to enable reproducible workflows and effectively use the accessed data.

  13. Theory for the three-dimensional Mercedes-Benz model of water.

    PubMed

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A

    2009-11-21

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation.
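
    A schematic energy function in the spirit of the MB model is sketched below: a Lennard-Jones sphere plus a Gaussian reward for a hydrogen bond at the ideal separation with aligned bonding arms. This is my simplification, the real 3D model couples the full tetrahedral arm orientations, and all parameter values are invented model units, not the paper's.

    ```python
    # Schematic MB-style pair energy (simplified by me; not the authors'
    # parameterization): LJ term plus a Gaussian hydrogen-bond reward.
    import numpy as np

    EPS_LJ, SIG = 0.1, 0.7               # LJ well depth / diameter (assumed)
    EPS_HB, R_HB, W = -1.0, 1.0, 0.085   # HB strength, ideal distance, width

    def lj(r):
        x6 = (SIG / r)**6
        return 4.0 * EPS_LJ * (x6**2 - x6)

    def hbond(r, align_i, align_j):
        """align_* = cosine between a molecule's bonding arm and the
        intermolecular unit vector; 1.0 means perfectly aligned."""
        g = lambda x: np.exp(-x**2 / (2.0 * W**2))
        return EPS_HB * g(r - R_HB) * g(align_i - 1.0) * g(align_j - 1.0)

    def pair_energy(r, ai, aj):
        return lj(r) + hbond(r, ai, aj)

    print(pair_energy(1.0, 1.0, 1.0))  # ideal H-bond geometry -> deep minimum
    ```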

  14. Theory for the three-dimensional Mercedes-Benz model of water

    PubMed Central

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-01-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the “right answer,” we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim’s Ornstein–Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and also the aqueous solvation. PMID:19929057

  15. Theory for the three-dimensional Mercedes-Benz model of water

    NASA Astrophysics Data System (ADS)

    Bizjak, Alan; Urbic, Tomaz; Vlachy, Vojko; Dill, Ken A.

    2009-11-01

    The two-dimensional Mercedes-Benz (MB) model of water has been widely studied, both by Monte Carlo simulations and by integral equation methods. Here, we study the three-dimensional (3D) MB model. We treat water as spheres that interact through Lennard-Jones potentials and through a tetrahedral Gaussian hydrogen bonding function. As the "right answer," we perform isothermal-isobaric Monte Carlo simulations on the 3D MB model for different pressures and temperatures. The purpose of this work is to develop and test Wertheim's Ornstein-Zernike integral equation and thermodynamic perturbation theories. The two analytical approaches are orders of magnitude more efficient than the Monte Carlo simulations. The ultimate goal is to find statistical mechanical theories that can efficiently predict the properties of orientationally complex molecules, such as water. Also, here, the 3D MB model simply serves as a useful workbench for testing such analytical approaches. For hot water, the analytical theories give accurate agreement with the computer simulations. For cold water, the agreement is not as good. Nevertheless, these approaches are qualitatively consistent with energies, volumes, heat capacities, compressibilities, and thermal expansion coefficients versus temperature and pressure. Such analytical approaches offer a promising route to a better understanding of water and of aqueous solvation.

  16. Computational tissue volume reconstruction of a peripheral nerve using high-resolution light-microscopy and reconstruct.

    PubMed

    Gierthmuehlen, Mortimer; Freiman, Thomas M; Haastert-Talini, Kirsten; Mueller, Alexandra; Kaminsky, Jan; Stieglitz, Thomas; Plachta, Dennis T T

    2013-01-01

    The development of neural cuff-electrodes requires several in vivo studies and revisions of the electrode design before the electrode is completely adapted to its target nerve. It is therefore favorable to simulate many of the steps involved in this process to reduce costs and animal testing. As the restoration of motor function is one of the most interesting applications of cuff-electrodes, the position and trajectories of myelinated fibers in the simulated nerve are important. In this paper, we investigate a method for building a precise neuroanatomical model of myelinated fibers in a peripheral nerve based on images obtained using high-resolution light microscopy. This anatomical model represents the first aim of our "Virtual workbench" project: establishing a method for creating realistic neural simulation models based on image datasets. The imaging, processing, segmentation and technical limitations are described, and the steps involved in the transition into a simulation model are presented. The results showed that the position and trajectories of the myelinated axons were traced and virtualized using our technique, and that small nerves could be reliably modeled based on light microscopy images using low-cost open-source software and standard hardware. The anatomical model will be released to the scientific community.

  17. Prototype of the Modular Equipment Transporter (MET)

    NASA Technical Reports Server (NTRS)

    1970-01-01

    A prototype of the Modular Equipment Transporter (MET), nicknamed the 'Rickshaw' after its shape and method of propulsion. This equipment was used by the Apollo 14 astronauts during their geological and lunar surface simulation training in the Pinacate volcanic area of northwestern Sonora, Mexico. The Apollo 14 crew will be the first one to use the MET. It will be a portable workbench with a place for the lunar handtools and their carrier, three cameras, two sample container bags, a Special Environmental Sample Container, spare film magazines, and a Lunar Surface Penetrometer.

  18. 29. SOUTHEAST ACROSS BLACKSMITH SHOP AREA TOWARD TWO CIRCA 1900 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. SOUTHEAST ACROSS BLACKSMITH SHOP AREA TOWARD TWO CIRCA 1900 DRILL PRESSES ALONG THE EAST INTERIOR WALL AT THE NORTHEAST CORNER OF THE FACTORY BUILDING. THE HOODED FORGE IS VISIBLE IN THE LEFT FOREGROUND, SHOWING LADLES USED FOR POURING BABBITT BEARINGS. MOUNTED ON THE WORK BENCH IS THE MAIN CASTING FROM AN ELI WINDMILL, USED AS A JIG TO SUPPORT PARTS DURING THE BABBITT BEARING POURING OPERATION. THE WALL ABOVE THE WORKBENCH SHOWS THE BOARDED-UP OPENING FOR A FORMER WINDOW. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE

  19. Scientific Research Program for Power, Energy, and Thermal Technologies. Task Order 0001: Energy, Power, and Thermal Technologies and Processes Experimental Research. Subtask: Thermal Management of Electromechanical Actuation System for Aircraft Primary Flight Control Surfaces

    DTIC Science & Technology

    2014-05-01

    utilizing buoyancy differences in vapor and liquid phases to pump the heat transfer fluid between the evaporator and condenser. In this particular...Virtual Instrumentation Engineering Workbench LHP Loop Heat Pipe LVDT Linear Voltage Displacement Transducer MACE Micro-technologies for Air...Bland 1992). This type of duty cycle lends itself to thermal energy storage, which when coupled with an effective heat transfer mechanism can

  20. ReGaTE: Registration of Galaxy Tools in Elixir

    PubMed Central

    Mareuil, Fabien; Deveaud, Eric; Kalaš, Matúš; Soranzo, Nicola; van den Beek, Marius; Grüning, Björn; Ison, Jon; Ménager, Hervé

    2017-01-01

    Background: Bioinformaticians routinely use multiple software tools and data sources in their day-to-day work and have been guided in their choices by a number of cataloguing initiatives. The ELIXIR Tools and Data Services Registry (bio.tools) aims to provide a central information point, independent of any specific scientific scope within bioinformatics or technological implementation. Meanwhile, efforts to integrate bioinformatics software in workbench and workflow environments have accelerated to enable the design, automation, and reproducibility of bioinformatics experiments. One such popular environment is the Galaxy framework, with currently more than 80 publicly available Galaxy servers around the world. In the context of a generic registry for bioinformatics software, such as bio.tools, Galaxy instances constitute a major source of valuable content. Yet there has been, to date, no convenient mechanism to register such services en masse. Findings: We present ReGaTE (Registration of Galaxy Tools in Elixir), a software utility that automates the process of registering the services available in a Galaxy instance. This utility uses the BioBlend application program interface to extract service metadata from a Galaxy server, enhance the metadata with the scientific information required by bio.tools, and push it to the registry. Conclusions: ReGaTE provides a fast and convenient way to publish Galaxy services in bio.tools. By doing so, service providers may increase the visibility of their services while enriching the software discovery function that bio.tools provides for its users. The source code of ReGaTE is freely available on Github at https://github.com/C3BI-pasteur-fr/ReGaTE. PMID:28402416
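
    As an illustration of the extraction stage described above, the sketch below uses the BioBlend API to list the tools registered in a Galaxy instance; the server URL and API key are placeholders, and the metadata enrichment and push to bio.tools that ReGaTE performs are not reproduced here.

        # Sketch of the first ReGaTE stage: pull tool metadata from a Galaxy
        # server via the BioBlend API. URL and key are placeholders.
        from bioblend.galaxy import GalaxyInstance

        gi = GalaxyInstance(url="https://usegalaxy.example.org",
                            key="YOUR_API_KEY")
        for tool in gi.tools.get_tools():    # one dict per registered tool
            print(tool["id"], tool.get("name"), tool.get("version"))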

  1. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of the visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs for the future computing environment, features needed to attain this environment, prospects for changes in the human-computer interface and the impact of the visualization revolution on it, human processing capabilities, and the limits of the personal environment and its extension with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of alternative approaches to and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  2. Sensor data fusion of radar, ESM, IFF, and data LINK of the Canadian Patrol Frigate and the data alignment issues

    NASA Astrophysics Data System (ADS)

    Couture, Jean; Boily, Edouard; Simard, Marc-Alain

    1996-05-01

    The research and development group at Loral Canada is now at the second phase of the development of a data fusion demonstration model (DFDM) for naval anti-air warfare, to be used as a workbench tool for exploratory research. This project has emphatically addressed how the concepts related to fusion could be implemented within the Canadian Patrol Frigate (CPF) software environment. The project has been designed to read data passively on the CPF bus without any modification to the CPF software. This has brought to light important time alignment issues, since the CPF sensors and the CPF command and control system were not originally designed to support a track management function which fuses information. The fusion of data from non-organic sensors with the tactical Link-11 data has produced stimulating spatial alignment problems, which have been overcome by the use of a geodetic referencing coordinate system. Some benchmark scenarios have been selected to quantitatively demonstrate the capabilities of this fusion implementation. This paper describes the implementation design of DFDM (version 2) and summarizes the results obtained so far when fusing the scenarios' simulated data.

  3. WormQTLHD—a web database for linking human disease to natural variation data in C. elegans

    PubMed Central

    van der Velde, K. Joeri; de Haan, Mark; Zych, Konrad; Arends, Danny; Snoek, L. Basten; Kammenga, Jan E.; Jansen, Ritsert C.; Swertz, Morris A.; Li, Yang

    2014-01-01

    Interactions between proteins are highly conserved across species. As a result, the molecular basis of multiple diseases affecting humans can be studied in model organisms that offer many alternative experimental opportunities. One such organism—Caenorhabditis elegans—has been used to produce much molecular quantitative genetics and systems biology data over the past decade. We present WormQTLHD (Human Disease), a database that quantitatively and systematically links expression Quantitative Trait Loci (eQTL) findings in C. elegans to gene–disease associations in man. WormQTLHD, available online at http://www.wormqtl-hd.org, is a user-friendly set of tools to reveal functionally coherent, evolutionarily conserved gene networks. These can be used to predict novel gene-to-gene associations and the functions of genes underlying the disease of interest. We created a new database that links C. elegans eQTL data sets to human diseases (34 337 gene–disease associations from OMIM, DGA, GWAS Central and NHGRI GWAS Catalogue) based on overlapping sets of orthologous genes associated with phenotypes in these two species. We utilized QTL results, high-throughput molecular phenotypes, classical phenotypes and genotype data covering different developmental stages and environments from the WormQTL database. All software is available as open source, built on MOLGENIS and xQTL workbench. PMID:24217915

  4. WormQTLHD--a web database for linking human disease to natural variation data in C. elegans.

    PubMed

    van der Velde, K Joeri; de Haan, Mark; Zych, Konrad; Arends, Danny; Snoek, L Basten; Kammenga, Jan E; Jansen, Ritsert C; Swertz, Morris A; Li, Yang

    2014-01-01

    Interactions between proteins are highly conserved across species. As a result, the molecular basis of multiple diseases affecting humans can be studied in model organisms that offer many alternative experimental opportunities. One such organism-Caenorhabditis elegans-has been used to produce much molecular quantitative genetics and systems biology data over the past decade. We present WormQTL(HD) (Human Disease), a database that quantitatively and systematically links expression Quantitative Trait Loci (eQTL) findings in C. elegans to gene-disease associations in man. WormQTL(HD), available online at http://www.wormqtl-hd.org, is a user-friendly set of tools to reveal functionally coherent, evolutionarily conserved gene networks. These can be used to predict novel gene-to-gene associations and the functions of genes underlying the disease of interest. We created a new database that links C. elegans eQTL data sets to human diseases (34 337 gene-disease associations from OMIM, DGA, GWAS Central and NHGRI GWAS Catalogue) based on overlapping sets of orthologous genes associated with phenotypes in these two species. We utilized QTL results, high-throughput molecular phenotypes, classical phenotypes and genotype data covering different developmental stages and environments from the WormQTL database. All software is available as open source, built on MOLGENIS and xQTL workbench.

  5. Purple Computational Environment With Mappings to ACE Requirements for the General Availability User Environment Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barney, B; Shuler, J

    2006-08-21

    Purple is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Lawrence Livermore National Laboratory (LLNL). The Purple Computational Environment documents the capabilities and the environment provided for the FY06 LLNL Level 1 General Availability Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, but also documents needs of the LLNL and Alliance users working in the unclassified environment. Additionally, the Purple Computational Environment maps the provided capabilities to the Trilab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the General Availability user environment capabilities of the ASC community. Appendix A lists these requirements and includes a description of ACE requirements met and those requirements that are not met for each section of this document. The Purple Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the Tri-lab community.

  6. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil,Benny Manuel; Ballance, Robert; Haskell, Karen

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  7. Should Computing Be Taught in Single-Sex Environments? An Analysis of the Computing Learning Environment of Upper Secondary Students

    ERIC Educational Resources Information Center

    Logan, Keri

    2007-01-01

    It has been well established in the literature that girls are turning their backs on computing courses at all levels of the education system. One reason given for this is that the computer learning environment is not conducive to girls, and it is often suggested that they would benefit from learning computing in a single-sex environment. The…

  8. Computing environment logbook

    DOEpatents

    Osbourn, Gordon C; Bouchard, Ann M

    2012-09-18

    A computing environment logbook logs events occurring within a computing environment. The events are displayed as a history of past events within the logbook of the computing environment. The logbook provides search functionality to search through the history of past events to find one or more selected past events, and further, enables an undo of the one or more selected past events.
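
    The abstract describes the logbook purely behaviorally; a minimal sketch of an event history with search and undo, loosely modeled on that description (the class and its methods are hypothetical, not the patented implementation), might look like this:

        # Hypothetical sketch of an event logbook with search and undo,
        # loosely modeled on the behavior the patent abstract describes.
        import os
        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Event:
            description: str
            undo_action: Callable[[], None]   # callable that reverses the event

        @dataclass
        class Logbook:
            history: List[Event] = field(default_factory=list)

            def log(self, event: Event) -> None:
                self.history.append(event)

            def search(self, text: str) -> List[Event]:
                return [e for e in self.history if text in e.description]

            def undo(self, event: Event) -> None:
                event.undo_action()           # reverse the effect...
                self.history.remove(event)    # ...and drop it from the history

        # Usage: log a file creation, then undo it via a search hit.
        open("scratch.txt", "w").close()
        book = Logbook()
        book.log(Event("created scratch.txt",
                       lambda: os.remove("scratch.txt")))
        for e in book.search("scratch"):
            book.undo(e)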

  9. Thickness optimization of auricular silicone scaffold based on finite element analysis.

    PubMed

    Jiang, Tao; Shang, Jianzhong; Tang, Li; Wang, Zhuo

    2016-01-01

    An optimized thickness for a transplantable auricular silicone scaffold was investigated. The original image data were acquired from CT scans, and reverse modeling technology was used to build a digital 3D model of an auricle. The transplant process was simulated in ANSYS Workbench by finite element analysis (FEA), solid scaffolds were manufactured based on the FEA results, and the transplantable artificial auricle was finally obtained with an optimized thickness, as well as sufficient intensity and hardness. This paper provides a reference for clinical transplant surgery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Uncertainty propagation for the coulometric measurement of the plutonium concentration in CRM126 solution provided by JAEA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales-Arteaga, Maria

    This GUM Workbench™ propagation of uncertainty is for the coulometric measurement of the plutonium concentration in a Pu standard material (C126) supplied as individual aliquots that were prepared by mass. The C126 solution had been prepared and aliquoted as a standard material. Samples are aliquoted into glass vials and heated to dryness for distribution as dried nitrate. The individual plutonium aliquots were not separated chemically or otherwise purified prior to measurement by coulometry in the F/H Laboratory. Hydrogen peroxide was used for valence adjustment.
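
    For context, the calculation that GUM Workbench automates is the GUM's first-order law of propagation of uncertainty: for a measurand y = f(x_1, ..., x_N) with uncorrelated input estimates, the combined standard uncertainty is

        \[ u_c^{2}(y) \;=\; \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i), \]

    with covariance terms added when inputs are correlated. In this measurement the plutonium concentration plays the role of y, with the measured charge, aliquot mass, and related quantities as the inputs x_i.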

  11. Fast Response Shape Memory Effect Titanium Nickel (TiNi) Foam Torque Tubes

    NASA Technical Reports Server (NTRS)

    Jardine, Peter

    2014-01-01

    Shape Change Technologies has developed a process to manufacture net-shaped TiNi foam torque tubes that demonstrate the shape memory effect. The torque tubes dramatically reduce response time by a factor of 10. This Phase II project matured the actuator technology by rigorously characterizing the process to optimize the quality of the TiNi and developing a set of metrics to provide ISO 9002 quality assurance. A laboratory virtual instrument engineering workbench (LabVIEW™)-based, real-time control of the torsional actuators was developed. These actuators were developed with The Boeing Company for aerospace applications.

  12. Policy Issues in Computer Education. Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL).

    ERIC Educational Resources Information Center

    Linn, Marcia

    This paper analyzes the capabilities of the computer learning environment identified by the Assessing the Cognitive Consequences of Computer Environments for Learning (ACCCEL) Project, augments the analysis with experimental work, and discusses how schools can implement policies which provide for the maximum potential of computers. The ACCCEL…

  13. Design of a Single-Cell Positioning Controller Using Electroosmotic Flow and Image Processing

    PubMed Central

    Ay, Chyung; Young, Chao-Wang; Chen, Jhong-Yin

    2013-01-01

    The objective of the current research was not only to provide a fast and automatic positioning platform for single cells, but also to improve biomolecular manipulation techniques. In this study, an automatic platform for cell positioning using electroosmotic flow and image processing technology was designed. The platform was developed using a PCI image acquisition interface card for capturing images from a microscope and then transferring them to a computer using human-machine interface software. This software was designed with the Laboratory Virtual Instrument Engineering Workbench, a graphical language, to find cell positions and view the driving trace, and uses the fuzzy logic method to control the voltage or duration of the electric field. After experiments on real human leukemic cells (U-937), the cell-positioning success rate achieved by controlling the voltage factor reaches 100% within 5 s. Greater precision is obtained when controlling the time factor, whereby the success rate reaches 100% within 28 s. Advantages in both high speed and high precision are attained if these two voltage and time control methods are combined. The control speed with the combined method is about 5.18 times greater than that achieved by the time method, and the control precision with the combined method is more than five times greater than that achieved by the voltage method. PMID:23698272

  14. Toxicology ontology perspectives.

    PubMed

    Hardy, Barry; Apic, Gordana; Carthew, Philip; Clark, Dominic; Cook, David; Dix, Ian; Escher, Sylvia; Hastings, Janna; Heard, David J; Jeliazkova, Nina; Judson, Philip; Matis-Mitchell, Sherri; Mitic, Dragana; Myatt, Glenn; Shah, Imran; Spjuth, Ola; Tcheremenskaia, Olga; Toldo, Luca; Watson, David; White, Andrew; Yang, Chihae

    2012-01-01

    The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments based on a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. Perspectives on resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st century mechanistic-based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated in an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies into software applications and linked data resources. Bioclipse developers have shown the benefit of interoperability obtained through ontology by being able to link their workbench application with remote OpenTox web services. Although these developments are promising, an increased international coordination of efforts is greatly needed to develop a more unified, standardized, and open toxicology ontology framework.

  15. Cone-beam micro-CT system based on LabVIEW software.

    PubMed

    Ionita, Ciprian N; Hoffmann, Keneth R; Bednarek, Daniel R; Chityala, Ravishankar; Rudin, Stephen

    2008-09-01

    Construction of a cone-beam computed tomography (CBCT) system for laboratory research usually requires integration of different software and hardware components. As a result, building and operating such a complex system require the expertise of researchers with significantly different backgrounds. Additionally, writing flexible code to control the hardware components of a CBCT system combined with designing a friendly graphical user interface (GUI) can be cumbersome and time-consuming. An intuitive and flexible program structure, as well as the program GUI for CBCT acquisition, is presented in this note. The program was developed in National Instruments' Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) graphical language and is designed to control a custom-built CBCT system but has also been used in a standard angiographic suite. The hardware components are commercially available to researchers and are in general provided with software drivers which are LabVIEW compatible. The program structure was designed as a sequential chain. Each step in the chain takes care of one or two hardware commands at a time; the execution of the sequence can be modified according to the CBCT system design. We have scanned and reconstructed over 200 specimens using this interface and present three examples which cover different areas of interest encountered in laboratory research. The resulting 3D data are rendered using a commercial workstation. The program described in this paper is available for use or improvement by other researchers.

  16. Continuous-waveform constant-current isolated physiological stimulator

    NASA Astrophysics Data System (ADS)

    Holcomb, Mark R.; Devine, Jack M.; Harder, Rene; Sidorov, Veniamin Y.

    2012-04-01

    We have developed an isolated continuous-waveform constant-current physiological stimulator that is powered and controlled by universal serial bus (USB) interface. The stimulator is composed of a custom printed circuit board (PCB), 16-MHz MSP430F2618 microcontroller with two integrated 12-bit digital to analog converters (DAC0, DAC1), high-speed H-Bridge, voltage-controlled current source (VCCS), isolated USB communication and power circuitry, two isolated transistor-transistor logic (TTL) inputs, and a serial 16 × 2 character liquid crystal display. The stimulators are designed to produce current stimuli in the range of ±15 mA indefinitely using a 20V source and to be used in ex vivo cardiac experiments, but they are suitable for use in a wide variety of research or student experiments that require precision control of continuous waveforms or synchronization with external events. The device was designed with customization in mind and has features that allow it to be integrated into current and future experimental setups. Dual TTL inputs allow replacement by two or more traditional stimulators in common experimental configurations. The MSP430 software is written in C++ and compiled with IAR Embedded Workbench 5.20.2. A control program written in C++ runs on a Windows personal computer and has a graphical user interface that allows the user to control all aspects of the device.

  17. ReGaTE: Registration of Galaxy Tools in Elixir.

    PubMed

    Doppelt-Azeroual, Olivia; Mareuil, Fabien; Deveaud, Eric; Kalaš, Matúš; Soranzo, Nicola; van den Beek, Marius; Grüning, Björn; Ison, Jon; Ménager, Hervé

    2017-06-01

    Bioinformaticians routinely use multiple software tools and data sources in their day-to-day work and have been guided in their choices by a number of cataloguing initiatives. The ELIXIR Tools and Data Services Registry (bio.tools) aims to provide a central information point, independent of any specific scientific scope within bioinformatics or technological implementation. Meanwhile, efforts to integrate bioinformatics software in workbench and workflow environments have accelerated to enable the design, automation, and reproducibility of bioinformatics experiments. One such popular environment is the Galaxy framework, with currently more than 80 publicly available Galaxy servers around the world. In the context of a generic registry for bioinformatics software, such as bio.tools, Galaxy instances constitute a major source of valuable content. Yet there has been, to date, no convenient mechanism to register such services en masse. We present ReGaTE (Registration of Galaxy Tools in Elixir), a software utility that automates the process of registering the services available in a Galaxy instance. This utility uses the BioBlend application program interface to extract service metadata from a Galaxy server, enhance the metadata with the scientific information required by bio.tools, and push it to the registry. ReGaTE provides a fast and convenient way to publish Galaxy services in bio.tools. By doing so, service providers may increase the visibility of their services while enriching the software discovery function that bio.tools provides for its users. The source code of ReGaTE is freely available on Github at https://github.com/C3BI-pasteur-fr/ReGaTE . © The Author 2017. Published by Oxford University Press.

  18. Reach and get capability in a computing environment

    DOEpatents

    Bouchard, Ann M [Albuquerque, NM; Osbourn, Gordon C [Albuquerque, NM

    2012-06-05

    A reach and get technique includes invoking a reach command from a reach location within a computing environment. A user can then navigate to an object within the computing environment and invoke a get command on the object. In response to invoking the get command, the computing environment is automatically navigated back to the reach location and the object copied into the reach location.

  19. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench

    PubMed Central

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-01-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155
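
    As a generic illustration of two steps such a pipeline performs, per-million normalization and a fold-change call, the sketch below processes a toy count table; the offset and threshold are illustrative and are not the Workbench's defaults.

        # Generic sketch of two common sRNA pipeline steps: reads-per-million
        # normalization and an offset log2 fold-change call between two samples.
        # The offset and threshold are illustrative, not the Workbench defaults.
        import numpy as np

        counts = np.array([[150, 300],   # rows: sRNAs, columns: samples
                           [ 80,  20],
                           [  5,   6]], dtype=float)

        rpm = counts / counts.sum(axis=0) * 1e6   # per-million scaling
        offset = 20.0                             # damps ratios of low counts
        log2fc = np.log2((rpm[:, 1] + offset) / (rpm[:, 0] + offset))

        for i, fc in enumerate(log2fc):
            call = "DE" if abs(fc) >= 1.0 else "not DE"
            print(f"sRNA {i}: log2FC = {fc:+.2f} ({call})")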

  20. CoLIde

    PubMed Central

    Mohorianu, Irina; Stocks, Matthew Benedict; Wood, John; Dalmay, Tamas; Moulton, Vincent

    2013-01-01

    Small RNAs (sRNAs) are 20–25 nt non-coding RNAs that act as guides for the highly sequence-specific regulatory mechanism known as RNA silencing. Due to the recent increase in sequencing depth, a highly complex and diverse population of sRNAs in both plants and animals has been revealed. However, the exponential increase in sequencing data has also made the identification of individual sRNA transcripts corresponding to biological units (sRNA loci) more challenging when based exclusively on the genomic location of the constituent sRNAs, hindering existing approaches to identify sRNA loci. To infer the location of significant biological units, we propose an approach for sRNA loci detection called CoLIde (Co-expression based sRNA Loci Identification) that combines genomic location with the analysis of other information such as variation in expression levels (expression pattern) and size class distribution. For CoLIde, we define a locus as a union of regions sharing the same pattern and located in close proximity on the genome. Biological relevance, detected through the analysis of size class distribution, is also calculated for each locus. CoLIde can be applied on ordered (e.g., time-dependent) or un-ordered (e.g., organ, mutant) series of samples both with or without biological/technical replicates. The method reliably identifies known types of loci and shows improved performance on sequencing data from both plants (e.g., A. thaliana, S. lycopersicum) and animals (e.g., D. melanogaster) when compared with existing locus detection techniques. CoLIde is available for use within the UEA Small RNA Workbench which can be downloaded from: http://srna-workbench.cmp.uea.ac.uk. PMID:23851377

  1. CoLIde: a bioinformatics tool for CO-expression-based small RNA Loci Identification using high-throughput sequencing data.

    PubMed

    Mohorianu, Irina; Stocks, Matthew Benedict; Wood, John; Dalmay, Tamas; Moulton, Vincent

    2013-07-01

    Small RNAs (sRNAs) are 20-25 nt non-coding RNAs that act as guides for the highly sequence-specific regulatory mechanism known as RNA silencing. Due to the recent increase in sequencing depth, a highly complex and diverse population of sRNAs in both plants and animals has been revealed. However, the exponential increase in sequencing data has also made the identification of individual sRNA transcripts corresponding to biological units (sRNA loci) more challenging when based exclusively on the genomic location of the constituent sRNAs, hindering existing approaches to identify sRNA loci. To infer the location of significant biological units, we propose an approach for sRNA loci detection called CoLIde (Co-expression based sRNA Loci Identification) that combines genomic location with the analysis of other information such as variation in expression levels (expression pattern) and size class distribution. For CoLIde, we define a locus as a union of regions sharing the same pattern and located in close proximity on the genome. Biological relevance, detected through the analysis of size class distribution, is also calculated for each locus. CoLIde can be applied on ordered (e.g., time-dependent) or un-ordered (e.g., organ, mutant) series of samples both with or without biological/technical replicates. The method reliably identifies known types of loci and shows improved performance on sequencing data from both plants (e.g., A. thaliana, S. lycopersicum) and animals (e.g., D. melanogaster) when compared with existing locus detection techniques. CoLIde is available for use within the UEA Small RNA Workbench which can be downloaded from: http://srna-workbench.cmp.uea.ac.uk.
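
    To make the locus definition concrete, here is a simplified sketch of the merging rule the abstract describes: regions that share an expression-pattern label and lie close together on the same chromosome are unioned into one locus. The pattern labels, gap threshold, and data are illustrative, not CoLIde's actual parameters.

        # Simplified sketch of pattern-based locus building: merge regions that
        # share an expression-pattern label and sit within `gap` nt of each other.
        def build_loci(regions, gap=100):
            """regions: iterable of (chrom, start, end, pattern) tuples."""
            loci = []
            for chrom, start, end, pattern in sorted(regions):
                if loci:
                    c, s, e, p = loci[-1]
                    if c == chrom and p == pattern and start - e <= gap:
                        loci[-1] = (c, s, max(e, end), p)   # extend the locus
                        continue
                loci.append((chrom, start, end, pattern))
            return loci

        regions = [("chr1", 100, 120, "up"), ("chr1", 180, 200, "up"),
                   ("chr1", 900, 930, "down"), ("chr1", 950, 980, "up")]
        print(build_loci(regions))
        # -> [('chr1', 100, 200, 'up'), ('chr1', 900, 930, 'down'),
        #     ('chr1', 950, 980, 'up')]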

  2. The Simultaneous Production Model; A Model for the Construction, Testing, Implementation and Revision of Educational Computer Simulation Environments.

    ERIC Educational Resources Information Center

    Zillesen, Pieter G. van Schaick

    This paper introduces a hardware and software independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulations program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…

  3. The role of physicality in rich programming environments

    NASA Astrophysics Data System (ADS)

    Liu, Allison S.; Schunn, Christian D.; Flot, Jesse; Shoop, Robin

    2013-12-01

    Computer science proficiency continues to grow in importance, while the number of students entering computer science-related fields declines. Many rich programming environments have been created to motivate student interest and expertise in computer science. In the current study, we investigated whether a recently created environment, Robot Virtual Worlds (RVWs), can be used to teach computer science principles within a robotics context by examining its use in high-school classrooms. We also investigated whether the lack of physicality in these environments impacts student learning by comparing classrooms that used either virtual or physical robots for the RVW curriculum. Results suggest that the RVW environment leads to significant gains in computer science knowledge, that virtual robots lead to faster learning, and that physical robots may have some influence on algorithmic thinking. We discuss the implications of physicality in these programming environments for learning computer science.

  4. A Theory for Rapid Charging Events on the International Space Station

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.; Craven, Paul D.; Minow, Joseph I.; Wright, Kenneth H., Jr.

    2009-01-01

    The Floating Potential Measurement Unit (FPMU) has detected high negative amplitude rapid charging events (RCEs) on the International Space Station (ISS) at the morning terminator. These events are larger and more rapid than the ISS morning charging events first seen by the Floating Potential Probe (FPP) on ISS in 2001. In this paper, we describe a theory for the RCEs that further elucidates the nature of spacecraft charging in low Earth orbit (LEO) in a non-equilibrium situation. The model accounts for all essential aspects of the newly discovered phenomenon, and is amenable to testing on-orbit. Predictions of the model for the amplitude of the ISS RCEs for the full set of ISS solar arrays and for the coming solar cycle are given, and the results of modeling by the Environments WorkBench (EWB) are compared to the observed events to show that the phenomenon can be explained by solar array driven charging. The situation is unique because the coverglasses have not yet reached equilibrium with the surrounding plasma during the RCEs. Finally, a prescription for further use of the ISS for investigating fundamental plasma physics in LEO is given. Already, plasma and charging monitoring instruments on ISS have taught us much about spacecraft interactions with the dense LEO plasma, and we expect they will continue to yield more valuable science when the Japanese Experiment Module (JEM) is in place.

  5. Integrated structural and optical modeling of the orbiting stellar interferometer

    NASA Astrophysics Data System (ADS)

    Shaklan, Stuart B.; Yu, Jeffrey W.; Briggs, Hugh C.

    1993-11-01

    The Integrated Modeling of Optical Systems (IMOS) Integration Workbench at JPL has been used to model the effects of structural perturbations on the optics in the proposed Orbiting Stellar Interferometer (OSI). OSI consists of 3 pairs of interferometers and delay lines attached to a 7.5 meter truss. They are interferometrically monitored from a separate boom by a laser metrology system. The spatially distributed nature of the science instrument calls for a high level of integration between the optics and support structure. Because OSI is designed to achieve micro-arcsecond astrometry, many of its alignment, stability, and knowledge tolerances are in the submicron regime. The spacecraft will be subject to vibrations caused by reaction wheels and on-board equipment, as well as thermal strain due to solar and terrestrial heating. These perturbations affect optical parameters such as optical path differences and beam co-parallelism which are critical to instrument performance. IMOS provides an environment that allows one to design and perturb the structure, attach optics to structural or non-structural nodes, trace rays, and analyze the impact of mechanical perturbations on optical performance. This tool makes it simple to change the structure and immediately see performance enhancement/degradation. We have employed IMOS to analyze the effect of reaction wheel disturbances on the optical path difference in both the science and metrology interferometers.

  6. CSNS computing environment Based on OpenStack

    NASA Astrophysics Data System (ADS)

    Li, Yakang; Qi, Fazhi; Chen, Gang; Wang, Yanming; Hong, Jianshu

    2017-10-01

    Cloud computing allows for more flexible configuration of IT resources and optimized hardware utilization; it can also provide computing services according to real need. We are applying this computing mode to the China Spallation Neutron Source (CSNS) computing environment. Firstly, the CSNS experiment and its computing scenarios and requirements are introduced in this paper. Secondly, the design and practice of the cloud computing platform based on OpenStack are demonstrated from the aspects of the cloud computing system framework, network, storage, and so on. Thirdly, some improvements we made to OpenStack are discussed further. Finally, the current status of the CSNS cloud computing environment is summarized.
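
    As a sketch of the self-service provisioning such a platform offers, the following uses the openstacksdk client to boot a compute instance; the cloud profile and the image, flavor, and network names are site-specific placeholders, not CSNS configuration.

        # Sketch: boot a virtual machine on an OpenStack cloud with openstacksdk.
        # The cloud profile and resource names below are placeholders.
        import openstack

        conn = openstack.connect(cloud="csns-cloud")    # profile from clouds.yaml

        image = conn.compute.find_image("CentOS-7")
        flavor = conn.compute.find_flavor("m1.large")
        network = conn.network.find_network("private")

        server = conn.compute.create_server(
            name="analysis-node-01", image_id=image.id,
            flavor_id=flavor.id, networks=[{"uuid": network.id}])
        server = conn.compute.wait_for_server(server)   # block until ACTIVE
        print(server.name, server.status)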

  7. Ultrasonic detection of solid phase mass flow ratio of pneumatic conveying fly ash

    NASA Astrophysics Data System (ADS)

    Duan, Guang Bin; Pan, Hong Li; Wang, Yong; Liu, Zong Ming

    2014-04-01

    In this paper, ultrasonic attenuation detection and a weighing balance are adopted to evaluate the solid mass ratio. Fly ash is transported on the up-extraction fluidization pneumatic conveying workbench. In the ultrasonic test, the McClements model and the Bouguer-Lambert-Beer law model were applied to formulate the ultrasonic attenuation properties of the gas-solid flow, from which the solid mass ratio is obtained. In the weighing-balance method, the averaged mass addition per second reveals the solids mass flow ratio. Comparing these two solid-phase mass ratio detection methods shows that the relative error between them is small.
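
    The Bouguer-Lambert-Beer leg of the measurement amounts to inverting an exponential attenuation law. A minimal sketch under made-up values follows; real models of the McClements type add frequency- and particle-size-dependent corrections that are omitted here.

        # Sketch of the Bouguer-Lambert-Beer inversion: from measured ultrasonic
        # amplitudes to a solids loading estimate. All numbers are illustrative;
        # McClements-type models add scattering corrections omitted here.
        import math

        A0 = 1.00   # received amplitude with gas only (reference)
        A = 0.62    # received amplitude with fly ash present
        k = 0.85    # effective extinction coefficient, 1/((kg/m^3)*m), assumed
        L = 0.05    # acoustic path length across the pipe, m

        # A = A0 * exp(-k * c * L)  =>  c = ln(A0/A) / (k * L)
        c = math.log(A0 / A) / (k * L)
        print(f"estimated solids concentration: {c:.1f} kg/m^3")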

  8. Animal research on the Space Station

    NASA Technical Reports Server (NTRS)

    Bonting, S. L.; Arno, R. D.; Corbin, S. D.

    1987-01-01

    The need for in-depth, long- and short-term animal experimentation in space to qualify man for long-duration space missions, and to study the effects of the absence and presence of Earth's gravity and of heavy particle radiation on the development and functioning of vertebrates is described. The major facilities required for these investigations and to be installed on the Space Station are: modular habitats for holding rodents and small primates in full bioisolation; a habitat holding facility; 1.8 and 4.0 m dia centrifuges; a multipurpose workbench; and a cage cleaner/disposal system. The design concepts, functions, and characteristics of these facilities are described.

  9. 5. Credit USAF, ca. 1944. Original housed in the Muroc ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Credit USAF, ca. 1944. Original housed in the Muroc Flight Test Base, Unit History, 1 September 1942 - 30 June 1945. Alfred F. Simpson Historical Research Agency. United States Air Force. Maxwell AFB, Alabama. Interior view of hangar, looking north northwest. Note exposed wooden construction. Two jet engines lie partially concealed by tarpaulins in the background, along with a combustion chamber assembly (horizontal cylinders in a circular array). On the workbench in the foreground lie an engine rotor hub and what appears to be an engine fuel line assembly. - Edwards Air Force Base, North Base, Hangar No. 1, First & B Streets, Boron, Kern County, CA

  10. DBCreate: A SUPCRT92-based program for producing EQ3/6, TOUGHREACT, and GWB thermodynamic databases at user-defined T and P

    NASA Astrophysics Data System (ADS)

    Kong, Xiang-Zhao; Tutolo, Benjamin M.; Saar, Martin O.

    2013-02-01

    SUPCRT92 is a widely used software package for calculating the standard thermodynamic properties of minerals, gases, aqueous species, and reactions. However, it is labor-intensive and error-prone to use it directly to produce databases for geochemical modeling programs such as EQ3/6, the Geochemist's Workbench, and TOUGHREACT. DBCreate is a SUPCRT92-based software program written in FORTRAN90/95 and was developed in order to produce the required databases for these programs in a rapid and convenient way. This paper describes the overall structure of the program and provides detailed usage instructions.

  11. Study on loading and unloading performance of new energy vehicle battery sensor

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Ren, Kai; Liu, Ying

    2017-04-01

    This paper first introduces the 18650 battery and describes the importance of the battery temperature sensor. Using Ansys Workbench finite element simulation software and a combination of displacement constraints and reaction forces, it studies the forces on, and deformations of, a new energy vehicle battery temperature sensor in three cases: loading, translation, and unloading; tests were then performed to verify the simulation's accuracy. Finally, the test results are compared with the typical maximum acceleration of a vehicle while driving, verifying that the sensor will not fall off during driving and will work normally.

  12. Design and Implement of Astronomical Cloud Computing Environment In China-VO

    NASA Astrophysics Data System (ADS)

    Li, Changhua; Cui, Chenzhou; Mi, Linying; He, Boliang; Fan, Dongwei; Li, Shanshan; Yang, Sisi; Xu, Yunfei; Han, Jun; Chen, Junyi; Zhang, Hailong; Yu, Ce; Xiao, Jian; Wang, Chuanjun; Cao, Zihuang; Fan, Yufeng; Liu, Liang; Chen, Xiao; Song, Wenming; Du, Kangyu

    2017-06-01

    The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the astronomy cloud computing environment was designed and implemented by the China-VO team. It consists of five distributed nodes across the mainland of China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through this environment, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, and avoid large-scale dataset transportation.

  13. A compositional approach to building applications in a computational environment

    NASA Astrophysics Data System (ADS)

    Roslovtsev, V. V.; Shumsky, L. D.; Wolfengagen, V. E.

    2014-04-01

    The paper presents an approach to creating an applicative computational environment featuring computational-process and data decomposition, and a compositional approach to application building. The approach in question is based on the notion of a combinator - both in systems with variable binding (such as λ-calculi) and in those allowing programming without variables (combinatory-logic style). We present a technique for decomposing computations based on objects' structural decomposition. The computational environment's architecture is based on a network whose nodes play several roles simultaneously.
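
    Since the combinator is the abstract's central notion, a small example may help: the classic S and K combinators written as curried functions, with the standard combinatory-logic identities S K K = I and B = S (K S) K checked numerically. This is textbook material, not the paper's environment itself.

        # The classic S and K combinators as curried Python functions.
        S = lambda f: lambda g: lambda x: f(x)(g(x))   # substitution/application
        K = lambda x: lambda y: x                      # constant function

        # S K K behaves as the identity combinator I: (S K K) x == x.
        I = S(K)(K)
        assert I(42) == 42

        # Composition B = S (K S) K: (B f g) x == f(g(x)).
        B = S(K(S))(K)
        double = lambda n: 2 * n
        inc = lambda n: n + 1
        assert B(double)(inc)(10) == 22                # 2 * (10 + 1)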

  14. Designing Computer Learning Environments for Engineering and Computer Science: The Scaffolded Knowledge Integration Framework.

    ERIC Educational Resources Information Center

    Linn, Marcia C.

    1995-01-01

    Describes a framework called scaffolded knowledge integration and illustrates how it guided the design of two successful course enhancements in the field of computer science and engineering: the LISP Knowledge Integration Environment and the spatial reasoning environment. (101 references) (Author/MKR)

  15. The Role of Physicality in Rich Programming Environments

    ERIC Educational Resources Information Center

    Liu, Allison S.; Schunn, Christian D.; Flot, Jesse; Shoop, Robin

    2013-01-01

    Computer science proficiency continues to grow in importance, while the number of students entering computer science-related fields declines. Many rich programming environments have been created to motivate student interest and expertise in computer science. In the current study, we investigated whether a recently created environment, Robot…

  16. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud.

    PubMed

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-05-08

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available 'off-the-shelf' computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16-480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM.

  17. Co-Regulation of Learning in Computer-Supported Collaborative Learning Environments: A Discussion

    ERIC Educational Resources Information Center

    Chan, Carol K. K.

    2012-01-01

    This discussion paper for this special issue examines co-regulation of learning in computer-supported collaborative learning (CSCL) environments extending research on self-regulated learning in computer-based environments. The discussion employs a socio-cognitive perspective focusing on social and collective views of learning to examine how…

  18. Design & implementation of distributed spatial computing node based on WPS

    NASA Astrophysics Data System (ADS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-03-01

    Currently, research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in the grid environment, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper systematically studies the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and verified in this environment.
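
    By way of illustration of the WPS interaction model such a node exposes, the sketch below uses the OWSLib client to query a WPS endpoint's capabilities and describe one process; the endpoint URL and process identifier are placeholders.

        # Sketch of a WPS client exchange with a Spatial Computing Node using
        # OWSLib. The endpoint URL and process identifier are placeholders.
        from owslib.wps import WebProcessingService

        wps = WebProcessingService("https://example.org/wps", skip_caps=True)
        wps.getcapabilities()                      # GetCapabilities request

        for proc in wps.processes:                 # processes the node advertises
            print(proc.identifier, "-", proc.title)

        detail = wps.describeprocess("gs:Buffer")  # DescribeProcess for one
        for inp in detail.dataInputs:
            print("input:", inp.identifier)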

  19. Ubiquitous computing in the military environment

    NASA Astrophysics Data System (ADS)

    Scholtz, Jean

    2001-08-01

    Increasingly people work and live on the move. To support this mobile lifestyle, especially as our work becomes more intensely information-based, companies are producing various portable and embedded information devices. The late Mark Weiser coined the term 'ubiquitous computing' to describe an environment where computers have disappeared and are integrated into physical objects. Much industry research today is concerned with ubiquitous computing in the work and home environments. A ubiquitous computing environment would facilitate mobility by allowing information users to easily access and use information anytime, anywhere. As war fighters are inherently mobile, the question is what effect a ubiquitous computing environment would have on current military operations and doctrine. And, if ubiquitous computing is viewed as beneficial for the military, what research would be necessary to achieve a military ubiquitous computing environment? What is a vision for the use of mobile information access in a battle space? Are there different requirements for civilian and military users of this technology? What are those differences? Are there opportunities for research that will support both worlds? What type of research has been supported by the military and what areas need to be investigated? Although we don't yet have all the answers to these questions, this paper discusses the issues and presents the work we are doing to address these issues.

  20. ComputerTown: A Do-It-Yourself Community Computer Project. [Computer Town, USA and Other Microcomputer Based Alternatives to Traditional Learning Environments].

    ERIC Educational Resources Information Center

    Zamora, Ramon M.

    Alternative learning environments offering computer-related instruction are developing around the world. Storefront learning centers, museum-based computer facilities, and special theme parks are some of the new concepts. ComputerTown, USA! is a public access computer literacy project begun in 1979 to serve both adults and children in Menlo Park…

  1. A Research Program in Computer Technology. 1982 Annual Technical Report

    DTIC Science & Technology

    1983-03-01

    for the Defense Advanced Research Projects Agency. The research applies computer science and technology to areas of high DoD/military impact. The ISI...implement the plan; New Computing Environment - investigation and adaptation of developing computer technologies to serve the research and military...Computing Environment - investigation and adaptation of developing computer technologies to serve the research and military user communities; and Computer

  2. Telescience workstation

    NASA Technical Reports Server (NTRS)

    Brown, Robert L.; Doyle, Dee; Haines, Richard F.; Slocum, Michael

    1989-01-01

    As part of the Telescience Testbed Pilot Program, the Universities Space Research Association/Research Institute for Advanced Computer Science (USRA/RIACS) proposed to support remote communication by providing a network of human/machine interfaces, computer resources, and experimental equipment which allows: remote science, collaboration, technical exchange, and multimedia communication. The telescience workstation is intended to provide a local computing environment for telescience. The purposes of the program are as follows: (1) to provide a suitable environment to integrate existing and new software for a telescience workstation; (2) to provide a suitable environment to develop new software in support of telescience activities; (3) to provide an interoperable environment so that a wide variety of workstations may be used in the telescience program; (4) to provide a supportive infrastructure and a common software base; and (5) to advance, apply, and evaluate the telescience technology base. A prototype telescience computing environment designed to bring practicing scientists in domains other than computer science into a modern style of doing their computing was created and deployed. This environment, the Telescience Windowing Environment, Phase 1 (TeleWEn-1), met some, but not all, of the goals stated above. The TeleWEn-1 provided a window-based workstation environment and a set of tools for text editing, document preparation, electronic mail, multimedia mail, raster manipulation, and system management.

  3. JABAWS 2.2 distributed web services for Bioinformatics: protein disorder, conservation and RNA secondary structure.

    PubMed

    Troshin, Peter V; Procter, James B; Sherstnev, Alexander; Barton, Daniel L; Madeira, Fábio; Barton, Geoffrey J

    2018-06-01

    JABAWS 2.2 is a computational framework that simplifies the deployment of web services for Bioinformatics. In addition to the five multiple sequence alignment (MSA) algorithms in JABAWS 1.0, JABAWS 2.2 includes three additional MSA programs (Clustal Omega, MSAprobs, GLprobs), four protein disorder prediction methods (DisEMBL, IUPred, Ronn, GlobPlot), 18 measures of protein conservation as implemented in AACon, and RNA secondary structure prediction by the RNAalifold program. JABAWS 2.2 can be deployed on a variety of in-house or hosted systems. JABAWS 2.2 web services may be accessed from the Jalview multiple sequence analysis workbench (Version 2.8 and later), as well as directly via the JABAWS command line interface (CLI) client. JABAWS 2.2 can be deployed on a local virtual server as a Virtual Appliance (VA) or simply as a Web Application Archive (WAR) for private use. Improvements in JABAWS 2.2 also include simplified installation and a range of utility tools for usage statistics collection and for web service querying and monitoring. The JABAWS CLI client has been updated to support all the new services and to allow integration of JABAWS 2.2 services into conventional scripts. A public JABAWS 2 server has been in production since December 2011 and has served over 800,000 analyses for users worldwide. JABAWS 2.2 is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws. g.j.barton@dundee.ac.uk.

  4. NASA Tech Briefs, February 2007

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Topics covered include: Calibration Test Set for a Phase-Comparison Digital Tracker; Wireless Acoustic Measurement System; Spiral Orbit Tribometer; Arrays of Miniature Microphones for Aeroacoustic Testing; Predicting Rocket or Jet Noise in Real Time; Computational Workbench for Multibody Dynamics; High-Power, High-Efficiency Ka-Band Space Traveling-Wave Tube; Gratings and Random Reflectors for Near-Infrared PIN Diodes; Optically Transparent Split-Ring Antennas for 1 to 10 GHz; Ice-Penetrating Robot for Scientific Exploration; Power-Amplifier Module for 145 to 165 GHz; Aerial Videography From Locally Launched Rockets; SiC Multi-Chip Power Modules as Power-System Building Blocks; Automated Design of Restraint Layer of an Inflatable Vessel; TMS for Instantiating a Knowledge Base With Incomplete Data; Simulating Flights of Future Launch Vehicles and Spacecraft; Control Code for Bearingless Switched-Reluctance Motor; Machine Aided Indexing and the NASA Thesaurus; Arbitrating Control of Control and Display Units; Web-Based Software for Managing Research; Driver Code for Adaptive Optics; Ceramic Paste for Patching High-Temperature Insulation; Fabrication of Polyimide-Matrix/Carbon and Boron-Fiber Tape; Protective Skins for Aerogel Monoliths; Code Assesses Risks Posed by Meteoroids and Orbital Debris; Asymmetric Bulkheads for Cylindrical Pressure Vessels; Self-Regulating Water-Separator System for Fuel Cells; Self-Advancing Step-Tap Drills; Array of Bolometers for Submillimeter-Wavelength Operation; Delta-Doped CCDs as Detector Arrays in Mass Spectrometers; Arrays of Bundles of Carbon Nanotubes as Field Emitters; Staggering Inflation To Stabilize Attitude of a Solar Sail; and Bare Conductive Tether for Decelerating a Spacecraft.

  5. Biogeochemical Modeling of Ureolytically-Driven Calcium Carbonate Precipitation for Contaminant Immobilization

    NASA Astrophysics Data System (ADS)

    Smith, R. W.; Fujita, Y.; Taylor, J. L.

    2008-12-01

    Radionuclide and metal contaminants such as strontium-90 are present beneath U.S. Department of Energy (DOE) lands in both the groundwater (e.g., 100-N area at Hanford, WA) and vadose zone (e.g., Idaho Nuclear Technology and Engineering Center at the Idaho National Laboratory [INL]). Manipulation of in situ biogeochemical conditions to induce immobilization of these contaminants is a promising remediation approach that could yield significant risk and cost benefits to DOE. However, the effective design and interpretation of such field remediation activities requires the availability of numerical tools to model the biogeochemical processes underlying the remediation strategy. We are evaluating the use of microbial urea hydrolysis coupled to calcite precipitation as a means for the cost-effective in situ stabilization of trace inorganic contaminants in groundwater and vadose zone systems. The approach relies upon the activity of indigenous ureolytic bacteria to hydrolyze introduced urea, causing an increase in pH and alkalinity and thereby accelerating calcium carbonate precipitation. The precipitation reaction results in the co-precipitation of trace metals and is sustained by the release of cations (both calcium and trace metals) from the aquifer matrix via exchange reactions involving the ammonium ions produced by urea hydrolysis. We have developed and parameterized a mixed kinetic-equilibrium reaction model using the Geochemist's Workbench computer code. Simulation results based on laboratory- and field-scale studies demonstrate the importance of transient events in systems with geochemical fluxes, as well as the coupling of biogeochemical processes.
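
    The net chemistry the abstract describes corresponds to the standard ureolysis and co-precipitation reactions (a summary added here for reference; the stoichiometry below is the textbook formulation, not reproduced from the paper): urea hydrolysis raises pH and carbonate alkalinity, and the resulting carbonate scavenges calcium together with divalent trace metals such as strontium into the solid phase:

        \mathrm{CO(NH_2)_2} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{NH_4^+} + \mathrm{CO_3^{2-}}
        \mathrm{Ca^{2+}} + \mathrm{CO_3^{2-}} \longrightarrow \mathrm{CaCO_3(s)}
        (1-x)\,\mathrm{Ca^{2+}} + x\,\mathrm{Sr^{2+}} + \mathrm{CO_3^{2-}} \longrightarrow \mathrm{Ca_{1-x}Sr_{x}CO_3(s)}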

  6. A synthetic computational environment: To control the spread of respiratory infections in a virtual university

    NASA Astrophysics Data System (ADS)

    Ge, Yuanzheng; Chen, Bin; Liu, Liang; Qiu, Xiaogang; Song, Hongbin; Wang, Yong

    2018-02-01

    An individual-based computational environment provides an effective solution for studying complex social events by reconstructing scenarios. Challenges remain in reconstructing the virtual scenarios and reproducing the complex evolution. In this paper, we propose a framework to reconstruct a synthetic computational environment, reproduce an epidemic outbreak, and evaluate management interventions in a virtual university. The reconstructed computational environment includes four fundamental components: the synthetic population, behavior algorithms, multiple social networks, and the geographic campus environment. In the virtual university, influenza H1N1 transmission experiments are conducted, and gradually enhanced interventions are evaluated and compared quantitatively. The experiment results indicate that the reconstructed virtual environment provides a solution for reproducing complex emergencies and evaluating policies before they are executed in the real world.
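
    As a concrete illustration of the individual-based approach (a minimal sketch in Python with invented parameters and a toy network, not the authors' framework), an epidemic can be stepped day by day over a campus contact network; interventions such as closing a classroom correspond to deleting edges before the run:

        import random

        def simulate_sir(contacts, beta=0.05, gamma=0.25, seed_cases=1, days=120, rng=None):
            # contacts: dict person_id -> list of neighbor ids (the contact network)
            # beta: assumed per-contact, per-day transmission probability
            # gamma: assumed per-day recovery probability
            rng = rng or random.Random(42)
            state = {p: "S" for p in contacts}
            for p in rng.sample(sorted(contacts), seed_cases):
                state[p] = "I"
            daily_infected = []
            for _ in range(days):
                nxt = dict(state)
                for p in contacts:
                    if state[p] != "I":
                        continue
                    for q in contacts[p]:
                        if state[q] == "S" and rng.random() < beta:
                            nxt[q] = "I"
                    if rng.random() < gamma:
                        nxt[p] = "R"
                state = nxt
                daily_infected.append(sum(s == "I" for s in state.values()))
            return daily_infected

        # Toy campus: two dorm cliques joined through one shared classroom contact.
        net = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
        print(simulate_sir(net, days=30))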

  7. Changes in Student Attitudes and Student Computer Use in a Computer-Enriched Environment.

    ERIC Educational Resources Information Center

    Mitra, Ananda; Steffensmeier, Timothy

    2000-01-01

    Examines the pedagogic usefulness of the computer by focusing on changes in student attitudes and use of computers in a computer-enriched environment using data from a longitudinal study at Wake Forest University. Results indicate that a networked institution where students have easy access can foster positive attitudes. (Author/LRW)

  8. Low cost, high performance processing of single particle cryo-electron microscopy data in the cloud

    PubMed Central

    Cianfrocco, Michael A; Leschziner, Andres E

    2015-01-01

    The advent of a new generation of electron microscopes and direct electron detectors has realized the potential of single particle cryo-electron microscopy (cryo-EM) as a technique to generate high-resolution structures. Calculating these structures requires high performance computing clusters, a resource that may be limiting to many likely cryo-EM users. To address this limitation and facilitate the spread of cryo-EM, we developed a publicly available ‘off-the-shelf’ computing environment on Amazon's elastic cloud computing infrastructure. This environment provides users with single particle cryo-EM software packages and the ability to create computing clusters with 16–480+ CPUs. We tested our computing environment using a publicly available 80S yeast ribosome dataset and estimate that laboratories could determine high-resolution cryo-EM structures for $50 to $1500 per structure within a timeframe comparable to local clusters. Our analysis shows that Amazon's cloud computing environment may offer a viable computing environment for cryo-EM. DOI: http://dx.doi.org/10.7554/eLife.06664.001 PMID:25955969
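
    The per-structure cost quoted above is essentially instance-hours multiplied by an hourly price; a back-of-envelope sketch follows (the function is generic, and the example prices and runtimes are illustrative assumptions, not values from the paper):

        def cloud_cost(n_instances, hours, price_per_instance_hour):
            # Total spend for a transient cluster billed by the instance-hour.
            return n_instances * hours * price_per_instance_hour

        # e.g., 4 compute instances for 25 hours at $0.50/hour lands at the
        # low end of the $50-$1500 per-structure range reported above:
        print(cloud_cost(4, 25, 0.50))  # -> 50.0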

  9. Testing of a work bench for handling of explosives in the laboratory

    NASA Astrophysics Data System (ADS)

    Hank, R.; Johansson, K.; Lagman, L.

    1981-01-01

    A prototype work station was developed at which jobs can be carried out with up to 10 g of explosives and up to 50 g of deflagrating products. Tests were made to investigate the consequences of a spontaneous accident during work. The conclusions are: the workbench offers good protection against splinters provided the inside walls are coated with a shock absorber; the polycarbonate glass should be a minimum of eight mm thick; the risk of burns, except on arms and hands, is very low; the bench withstands the explosion of the given weight of explosives (10 g); and the risk of lung lesions is very low, for the operator as well as for anyone nearby.

  10. A Guide to the PLAZA 3.0 Plant Comparative Genomic Database.

    PubMed

    Vandepoele, Klaas

    2017-01-01

    PLAZA 3.0 is an online resource for comparative genomics and offers a versatile platform to study gene functions and gene families or to analyze genome organization and evolution in the green plant lineage. Starting from genome sequence information for over 35 plant species, precomputed comparative genomic data sets cover homologous gene families, multiple sequence alignments, phylogenetic trees, and genomic colinearity information within and between species. Complementary functional data sets, a Workbench, and interactive visualization tools are available through a user-friendly web interface, making PLAZA an excellent starting point to translate sequence or omics data sets into biological knowledge. PLAZA is available at http://bioinformatics.psb.ugent.be/plaza/.

  11. Modeling Fluid-Structure Interaction in ANSYS Workbench

    DTIC Science & Technology

    2016-08-31

    DISTRIBUTION STATEMENT A. Approved for public release; distribution unlimited. Sierra Lobo, Inc., Modeling Fluid-Structure Interaction in ANSYS Workbench, 31 August 2016.

  12. Future of Department of Defense Cloud Computing Amid Cultural Confusion

    DTIC Science & Technology

    2013-03-01

    enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments...endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud...through data center consolidation and individual Service-provided cloud computing.

  13. Computer Support of Operator Training: Constructing and Testing a Prototype of a CAL (Computer Aided Learning) Supported Simulation Environment.

    ERIC Educational Resources Information Center

    Zillesen, P. G. van Schaick; And Others

    Instructional feedback given to the learners during computer simulation sessions may be greatly improved by integrating educational computer simulation programs with hypermedia-based computer-assisted learning (CAL) materials. A prototype of a learning environment of this type called BRINE PURIFICATION was developed for use in corporate training…

  14. Singularity: Scientific containers for mobility of compute.

    PubMed

    Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W

    2017-01-01

    Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science.
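
    In practice, the mobility described above reduces to building a single image file and executing commands inside it. A minimal sketch of driving the Singularity command line from Python (assumes a local Singularity installation; the image name and command are examples):

        import subprocess

        # Build an image from a public Docker base image.
        subprocess.run(
            ["singularity", "build", "analysis.sif", "docker://python:3.10-slim"],
            check=True,
        )

        # The same .sif file can be copied to an HPC center and executed there.
        subprocess.run(
            ["singularity", "exec", "analysis.sif",
             "python3", "-c", "print('reproducible environment')"],
            check=True,
        )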

  16. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.

  17. Distributed and collaborative synthetic environments

    NASA Technical Reports Server (NTRS)

    Bajaj, Chandrajit L.; Bernardini, Fausto

    1995-01-01

    Fast graphics workstations and increased computing power, together with improved interface technologies, have created new and diverse possibilities for developing and interacting with synthetic environments. A synthetic environment system is generally characterized by input/output devices that constitute the interface between the human senses and the synthetic environment generated by the computer; and a computation system running a real-time simulation of the environment. A basic need of a synthetic environment system is that of giving the user a plausible reproduction of the visual aspect of the objects with which he is interacting. The goal of our Shastra research project is to provide a substrate of geometric data structures and algorithms which allow the distributed construction and modification of the environment, efficient querying of objects attributes, collaborative interaction with the environment, fast computation of collision detection and visibility information for efficient dynamic simulation and real-time scene display. In particular, we address the following issues: (1) A geometric framework for modeling and visualizing synthetic environments and interacting with them. We highlight the functions required for the geometric engine of a synthetic environment system. (2) A distribution and collaboration substrate that supports construction, modification, and interaction with synthetic environments on networked desktop machines.

  18. Usability Studies in Virtual and Traditional Computer Aided Design Environments for Fault Identification

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Fault Identification Dr. Syed Adeel Ahmed, Xavier University...virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In...the differences in interaction when compared with traditional human computer interfaces. This paper provides analysis via usability study methods

  19. A Drawing and Multi-Representational Computer Environment for Beginners' Learning of Programming Using C: Design and Pilot Formative Evaluation

    ERIC Educational Resources Information Center

    Kordaki, Maria

    2010-01-01

    This paper presents both the design and the pilot formative evaluation study of a computer-based problem-solving environment (named LECGO: Learning Environment for programming in C using Geometrical Objects) for beginners learning computer programming in C. In its design, constructivist and social learning theories were taken into…

  20. Distributed Computing Environment for Mine Warfare Command

    DTIC Science & Technology

    1993-06-01

    based system to a decentralized network of personal computers over the past several years. This thesis analyzes the progress of the evolution as of May of 1992. The building blocks of a...

  1. Telecommunications Options Connect OCLC and Libraries to the Future: The Co-Evolution of OCLC Connectivity Options and the Library Computing Environment.

    ERIC Educational Resources Information Center

    Breeding, Marshall

    1998-01-01

    The Online Computer Library Center's (OCLC) access options have kept pace with the evolving trends in telecommunications and the library computing environment. As libraries deploy microcomputers and develop networks, OCLC offers access methods consistent with these environments. OCLC works toward reorienting its network paradigm through TCP/IP…

  2. Workflows and Provenance: Toward Information Science Solutions for the Natural Sciences.

    PubMed

    Gryk, Michael R; Ludäscher, Bertram

    2017-01-01

    The era of big data and ubiquitous computation has brought with it concerns about ensuring reproducibility in this new research environment. It is easy to assume computational methods self-document by their very nature of being exact, deterministic processes. However, similar to laboratory experiments, ensuring reproducibility in the computational realm requires the documentation of both the protocols used (workflows) as well as a detailed description of the computational environment: algorithms, implementations, software environments as well as the data ingested and execution logs of the computation. These two aspects of computational reproducibility (workflows and execution details) are discussed in the context of biomolecular Nuclear Magnetic Resonance spectroscopy (bioNMR) as well as the PRIMAD model for computational reproducibility.

  3. Fault recovery for real-time, multi-tasking computer system

    NASA Technical Reports Server (NTRS)

    Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)

    2011-01-01

    System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
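
    The recovery pattern described, checkpointing a backup set of state variables and restoring it when the fault detector fires, can be sketched as follows (an illustrative sketch only, not the patented implementation):

        import copy

        class RecoverableTask:
            # A task whose state variables are checkpointed each frame so that
            # a detected fault can roll the task back to the last good state.

            def __init__(self, state):
                self.state = state
                self.backup = copy.deepcopy(state)  # backup set of state variables

            def step(self, update, fault_detected):
                update(self.state)                            # normal frame computation
                if fault_detected(self.state):
                    self.state = copy.deepcopy(self.backup)   # fault: restore backup
                else:
                    self.backup = copy.deepcopy(self.state)   # no fault: commit frame

        # Example: an altitude estimate that must never go negative.
        task = RecoverableTask({"altitude": 1000.0})
        task.step(lambda s: s.update(altitude=s["altitude"] - 5.0),
                  lambda s: s["altitude"] < 0)
        print(task.state)  # {'altitude': 995.0}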

  4. The Effects of a Robot Game Environment on Computer Programming Education for Elementary School Students

    ERIC Educational Resources Information Center

    Shim, Jaekwoun; Kwon, Daiyoung; Lee, Wongyu

    2017-01-01

    In the past, computer programming was perceived as a task only carried out by computer scientists; in the 21st century, however, computer programming is viewed as a critical and necessary skill that everyone should learn. In order to improve teaching of problem-solving abilities in a computing environment, extensive research is being done on…

  5. Software Maintenance of the Subway Environment Simulation Computer Program

    DOT National Transportation Integrated Search

    1980-12-01

    This document summarizes the software maintenance activities performed to support the Subway Environment Simulation (SES) Computer Program. The SES computer program is a design-oriented analytic tool developed during a recent five-year research proje...

  6. Conducting and Supporting a Goal-Based Scenario Learning Environment.

    ERIC Educational Resources Information Center

    Montgomery, Joel; And Others

    1994-01-01

    Discussion of goal-based scenario (GBS) learning environments focuses on a training module designed to prepare consultants with new skills in managing clients, designing user-friendly graphical computer interfaces, and working in a client/server computing environment. Transforming the environment from teaching focused to learning focused is…

  7. Technopower and Technoppression: Some Abuses of Power and Control in Computer-Assisted Writing Environments (Computers and Controversy).

    ERIC Educational Resources Information Center

    Janangelo, Joseph

    1991-01-01

    Examines the exploitation of individuals that occurs within writing classrooms by those who organize computer systems. Discusses these abuses in three categories: teachers observing students, teachers observing teachers, and students observing students. Shows how computer-enhanced writing environments, if not designed carefully, can inhibit as…

  8. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    NASA Astrophysics Data System (ADS)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, which occurs frequently and ordinarily in homes and offices. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through a context model based on ontology. The service platform executes the service scenario according to contexts. With the suggested service model, a user in urban computing environments can quickly and easily create new u-services using smart devices.
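
    The editor-plus-platform split described above amounts to matching observed context against stored execution conditions. A toy sketch follows (all rule names and context keys are invented for illustration; a real implementation would reason over the ontology-based context model):

        def match_services(context, rules):
            # context: dict of observed facts (e.g., from phone sensors)
            # rules: list of (conditions, service) pairs authored in the editor
            return [service for conditions, service in rules
                    if all(context.get(k) == v for k, v in conditions.items())]

        rules = [({"location": "office", "time": "morning"}, "start-coffee-machine"),
                 ({"location": "home", "motion": "none"}, "lights-off")]
        print(match_services({"location": "office", "time": "morning"}, rules))
        # -> ['start-coffee-machine']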

  9. Execution environment for intelligent real-time control systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, Janos

    1987-01-01

    Modern telerobot control technology requires the integration of symbolic and non-symbolic programming techniques, different models of parallel computations, and various programming paradigms. The Multigraph Architecture, which has been developed for the implementation of intelligent real-time control systems is described. The layered architecture includes specific computational models, integrated execution environment and various high-level tools. A special feature of the architecture is the tight coupling between the symbolic and non-symbolic computations. It supports not only a data interface, but also the integration of the control structures in a parallel computing environment.

  10. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying; Zheng, Xibin

    The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the connotation and technical characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee safe and efficient traffic operation, and more intelligent and personalized traffic information services can be provided to traffic information users.

  11. Applying a Wearable Voice-Activated Computer to Instructional Applications in Clean Room Environments

    NASA Technical Reports Server (NTRS)

    Graves, Corey A.; Lupisella, Mark L.

    2004-01-01

    The use of wearable computing technology in restrictive environments related to space applications offers promise in a number of domains. The clean room environment is one such domain in which hands-free, heads-up, wearable computing is particularly attractive for education and training because of the nature of clean room work. We have developed and tested a Wearable Voice-Activated Computing (WEVAC) system based on clean room applications. Results of this initial proof-of-concept work indicate that there is strong potential for WEVAC to enhance clean room activities.

  12. Microgravity Science Glovebox (MSG), Space Science's Past, Present and Future Aboard the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie; Spearing, Scott; Jordan, Lee

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double rack facility aboard the International Space Station (ISS), which accommodates science and technology investigations in a "workbench"-type environment. The MSG has been operating on the ISS since July 2002 and is currently located in the US Laboratory Module. In fact, the MSG has been used for over 10,000 hours of scientific payload operations and plans to continue for the life of the ISS. The facility has an enclosed working volume that is held at a negative pressure with respect to the crew living area. This allows the facility to provide two levels of containment for small parts, particulates, fluids, and gases. This containment approach protects the crew from possible hazardous operations that take place inside the MSG work volume and allows researchers a controlled, pristine environment for their needs. Research investigations operating inside the MSG are provided a large 255-liter enclosed work space, 1000 watts of dc power via a versatile supply interface (120, 28, +12, and 5 Vdc), 1000 watts of cooling capability, video and data recording and real-time downlink, ground commanding capabilities, access to ISS Vacuum Exhaust and Vacuum Resource Systems, and a gaseous nitrogen supply. These capabilities make the MSG one of the most utilized facilities on the ISS. MSG investigations have involved research in cryogenic fluid management, fluid physics, spacecraft fire safety, materials science, combustion, and plant growth technologies. Modifications to the MSG facility are currently under way to expand its capabilities and provide for investigations involving life science and biological research. In addition, the MSG video system is being replaced with a state-of-the-art digital video system with high-definition/high-speed capabilities and near real-time downlink capabilities. This paper will provide an overview of the MSG facility, a synopsis of the research that has already been accomplished in the MSG, and an overview of the facility enhancements that will shortly be available for use by future investigators.

  13. Exploring the Potential Routine Use of Electronic Healthcare Record Data to Strengthen Early Signal Assessment in UK Medicines Regulation: Proof-of-Concept Study.

    PubMed

    Donegan, Katherine; Owen, Rebecca; Bird, Helena; Burch, Brian; Smith, Alex; Tregunno, Phil

    2018-05-03

    Electronic healthcare record (EHR) databases are used within pharmacoepidemiology studies to confirm or refute safety signals arising from spontaneous adverse event reports. However, there has been limited routine use of such data earlier in the signal management process, to help rapidly contextualise signals and strengthen preliminary assessment or to inform decisions regarding action, including the need for further studies. This study explores the value of EHR used in this way within a regulatory environment via an automated analysis platform. Safety signals raised at the UK Medicines and Healthcare products Regulatory Agency (MHRA) between July 2014 and June 2015 were individually reviewed by a multi-disciplinary team. They assessed the feasibility of identifying the exposure and event of interest using primary care data from the Clinical Practice Research Datalink (CPRD) within the Commonwealth Vigilance Workbench (CVW) Longitudinal Module platform, which was designed to facilitate routine descriptive analysis of signals using EHR. Three signals, where exposure and event could be well identified, were retrospectively analysed using the platform. Of 69 unique new signals, 20 were for drugs prescribed predominantly in secondary care or available without prescription, which would not be identified in primary care. A further 17 were brand, formulation, or dose-specific issues, were related to mortality, were relevant only to a subgroup of patients, or were drug interactions, and hence could not be reviewed using the platform given its limitations. Analyses of exposure and incidence of the adverse event could be produced using CPRD within the CVW Longitudinal Module for 32 (46%) signals. The case studies demonstrated that the data provided supporting evidence for confirming initial assessment of the signal and deciding upon the need for further action. CPRD can routinely provide useful early insights into clinical context when assessing a large proportion of safety signals within a regulatory environment, provided that a flexible approach is adopted within the analysis platform.

  14. Computers and the Environment: Minimizing the Carbon Footprint

    ERIC Educational Resources Information Center

    Kaestner, Rich

    2009-01-01

    Computers can be good and bad for the environment; one can maximize the good and minimize the bad. When dealing with environmental issues, it's difficult to ignore the computing infrastructure. With an operations carbon footprint equal to the airline industry's, computer energy use is only part of the problem; everyone is also dealing with the use…

  15. The Development and Evaluation of a Computer-Simulated Science Inquiry Environment Using Gamified Elements

    ERIC Educational Resources Information Center

    Tsai, Fu-Hsing

    2018-01-01

    This study developed a computer-simulated science inquiry environment, called the Science Detective Squad, to engage students in investigating an electricity problem that may happen in daily life. The environment combined the simulation of scientific instruments and a virtual environment, including gamified elements, such as points and a story for…

  16. Students' Perceptions of Computer-Based Learning Environments, Their Attitude towards Business Statistics, and Their Academic Achievement: Implications from a UK University

    ERIC Educational Resources Information Center

    Nguyen, ThuyUyen H.; Charity, Ian; Robson, Andrew

    2016-01-01

    This study investigates students' perceptions of computer-based learning environments, their attitude towards business statistics, and their academic achievement in higher education. Guided by learning environments concepts and attitudinal theory, a theoretical model was proposed with two instruments, one for measuring the learning environment and…

  17. Development of a Web Based Simulating System for Earthquake Modeling on the Grid

    NASA Astrophysics Data System (ADS)

    Seber, D.; Youn, C.; Kaiser, T.

    2007-12-01

    Existing cyberinfrastructure-based information, data, and computational networks now allow the development of state-of-the-art, user-friendly simulation environments that democratize access to high-end computational environments and provide new research opportunities for many research and educational communities. Within the Geosciences cyberinfrastructure network, GEON, we have developed the SYNSEIS (SYNthetic SEISmogram) toolkit to enable efficient computations of 2D and 3D seismic waveforms for a variety of research purposes, especially for helping to analyze EarthScope's USArray seismic data in a speedy and efficient environment. The underlying simulation software in SYNSEIS is a finite difference code, E3D, developed by LLNL (S. Larsen). The code is embedded within the SYNSEIS portlet environment and is used by our toolkit to simulate seismic waveforms of earthquakes at regional distances (<1000 km). Architecturally, SYNSEIS uses both Web Service and Grid computing resources in a portal-based work environment and has a built-in access mechanism to connect to national supercomputer centers as well as to a dedicated, small-scale compute cluster for its runs. Even though Grid computing is well established in many computing communities, its use among domain scientists is still not trivial because of the multiple levels of complexity encountered. We grid-enabled E3D using our own XML input dialect, which includes geological models that are accessible through standard Web services within the GEON network. The XML inputs for this application contain structural geometries, source parameters, seismic velocity, density, attenuation values, the number of time steps to compute, and the number of stations. By enabling portal-based access to such a computational environment, coupled with its dynamic user interface, we enable a large user community to take advantage of such high-end calculations in their research and educational activities. Our system can be used to promote an efficient and effective modeling environment to help scientists as well as educators in their daily activities and speed up the scientific discovery process.
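
    Since the abstract enumerates what the XML input carries, a hypothetical sketch of assembling such an input in Python follows (the element and attribute names are invented for illustration and are not the actual SYNSEIS dialect):

        import xml.etree.ElementTree as ET

        run = ET.Element("e3d_run")
        ET.SubElement(run, "source", {"lat": "35.0", "lon": "-120.0",
                                      "depth_km": "8.0", "mw": "5.6"})
        ET.SubElement(run, "velocity_model", {"vp_km_s": "6.0", "vs_km_s": "3.5",
                                              "density_kg_m3": "2700",
                                              "qp": "600", "qs": "300"})
        ET.SubElement(run, "grid", {"time_steps": "20000", "dt_s": "0.005"})
        stations = ET.SubElement(run, "stations")
        ET.SubElement(stations, "station",
                      {"code": "STA1", "lat": "35.4", "lon": "-119.6"})
        print(ET.tostring(run, encoding="unicode"))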

  18. Toward real-time quantum imaging with a single pixel camera

    DOE PAGES

    Lawrie, B. J.; Pooser, R. C.

    2013-03-19

    In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four-wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum-correlated modes from each of the twin beams to a high quantum efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single-pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  19. Analysis of Ten Reverse Engineering Tools

    NASA Astrophysics Data System (ADS)

    Koskinen, Jussi; Lehmonen, Tero

    Reverse engineering tools can be used in satisfying the information needs of software maintainers. Especially in case of maintaining large-scale legacy systems tool support is essential. Reverse engineering tools provide various kinds of capabilities to provide the needed information to the tool user. In this paper we analyze the provided capabilities in terms of four aspects: provided data structures, visualization mechanisms, information request specification mechanisms, and navigation features. We provide a compact analysis of ten representative reverse engineering tools for supporting C, C++ or Java: Eclipse Java Development Tools, Wind River Workbench (for C and C++), Understand (for C++), Imagix 4D, Creole, Javadoc, Javasrc, Source Navigator, Doxygen, and HyperSoft. The results of the study supplement the earlier findings in this important area.

  20. High-performance computing for airborne applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  1. Ambient belonging: how stereotypical cues impact gender participation in computer science.

    PubMed

    Cheryan, Sapna; Plaut, Victoria C; Davies, Paul G; Steele, Claude M

    2009-12-01

    People can make decisions to join a group based solely on exposure to that group's physical environment. Four studies demonstrate that the gender difference in interest in computer science is influenced by exposure to environments associated with computer scientists. In Study 1, simply changing the objects in a computer science classroom from those considered stereotypical of computer science (e.g., Star Trek poster, video games) to objects not considered stereotypical of computer science (e.g., nature poster, phone books) was sufficient to boost female undergraduates' interest in computer science to the level of their male peers. Further investigation revealed that the stereotypical objects broadcast a masculine stereotype that discouraged women's sense of ambient belonging and subsequent interest in the environment (Studies 2, 3, and 4) but had no similar effect on men (Studies 3, 4). This masculine stereotype prevented women's interest from developing even in environments entirely populated by other women (Study 2). Objects can thus come to broadcast stereotypes of a group, which in turn can deter people who do not identify with these stereotypes from joining that group.

  2. Proceedings of the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology

    NASA Technical Reports Server (NTRS)

    Hyde, Patricia R.; Loftin, R. Bowen

    1993-01-01

    The volume 2 proceedings from the 1993 Conference on Intelligent Computer-Aided Training and Virtual Environment Technology are presented. Topics discussed include intelligent computer assisted training (ICAT) systems architectures, ICAT educational and medical applications, virtual environment (VE) training and assessment, human factors engineering and VE, ICAT theory and natural language processing, ICAT military applications, VE engineering applications, ICAT knowledge acquisition processes and applications, and ICAT aerospace applications.

  3. Adaptation of Magnetic Bubble Memory in a Standard Microcomputer Environment.

    DTIC Science & Technology

    1981-12-01

    ...both the civilian and military computing environments due to the...degree of MASTER OF SCIENCE IN COMPUTER SCIENCE from the NAVAL POSTGRADUATE SCHOOL, December 1981...vital and unique role in both the civilian and military computing environments due to the combination of characteristics exhibited by magnetic domain

  4. Computational Tools and Facilities for the Next-Generation Analysis and Design Environment

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler); Malone, John B. (Compiler)

    1997-01-01

    This document contains presentations from the joint UVA/NASA Workshop on Computational Tools and Facilities for the Next-Generation Analysis and Design Environment held at the Virginia Consortium of Engineering and Science Universities in Hampton, Virginia on September 17-18, 1996. The presentations focused on the computational tools and facilities for analysis and design of engineering systems, including real-time simulations, immersive systems, collaborative engineering environments, Web-based tools, and interactive media for technical training. Workshop attendees represented NASA, commercial software developers, the aerospace industry, government labs, and academia. The workshop objectives were to assess the level of maturity of a number of computational tools and facilities and their potential for application to the next-generation integrated design environment.

  5. A Selective Bibliography of Building Environment and Service Systems with Particular Reference to Computer Applications. Computer Report CR20.

    ERIC Educational Resources Information Center

    Forwood, Bruce S.

    This bibliography has been produced as part of a research program attempting to develop a new approach to building environment and service systems design using computer-aided design techniques. As such it not only classifies available literature on the service systems themselves, but also contains sections on the application of computers and…

  6. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    ERIC Educational Resources Information Center

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  7. VBOT: Motivating computational and complex systems fluencies with constructionist virtual/physical robotics

    NASA Astrophysics Data System (ADS)

    Berland, Matthew W.

    As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. The primary research questions are: (1) What are the relative affordances of virtual and physical constructionist robotics systems towards computational and complex systems fluencies? (2) What can middle school students learn using computational/complex systems learning environments in a collaborative setting? (3) In what ways are these environments and activities effective in teaching students computational and complex systems fluencies?

  8. Human Machine Interfaces for Teleoperators and Virtual Environments Conference

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In a teleoperator system the human operator senses, moves within, and operates upon a remote or hazardous environment by means of a slave mechanism (a mechanism often referred to as a teleoperator). In a virtual environment system the interactive human machine interface is retained but the slave mechanism and its environment are replaced by a computer simulation. Video is replaced by computer graphics. The auditory and force sensations imparted to the human operator are similarly computer generated. In contrast to a teleoperator system, where the purpose is to extend the operator's sensorimotor system in a manner that facilitates exploration and manipulation of the physical environment, in a virtual environment system, the purpose is to train, inform, alter, or study the human operator to modify the state of the computer and the information environment. A major application in which the human operator is the target is that of flight simulation. Although flight simulators have been around for more than a decade, they had little impact outside aviation presumably because the application was so specialized and so expensive.

  9. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped, full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  10. A Software Laboratory Environment for Computer-Based Problem Solving.

    ERIC Educational Resources Information Center

    Kurtz, Barry L.; O'Neal, Micheal B.

    This paper describes a National Science Foundation-sponsored project at Louisiana Technological University to develop computer-based laboratories for "hands-on" introductions to major topics of computer science. The underlying strategy is to develop structured laboratory environments that present abstract concepts through the use of…

  11. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    NASA Astrophysics Data System (ADS)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run a variety of operating systems as needed by each cloud user. Virtualization can improve the reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  12. Some foundational aspects of quantum computers and quantum robots.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.; Physics

    1998-01-01

    This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.

  13. Can Dynamic Visualizations with Variable Control Enhance the Acquisition of Intuitive Knowledge?

    NASA Astrophysics Data System (ADS)

    Wichmann, Astrid; Timpe, Sebastian

    2015-10-01

    An important feature of inquiry learning is taking part in science practices, including exploring variables and testing hypotheses. Computer-based dynamic visualizations have the potential to open up various exploration possibilities depending on the level of learner control. It is assumed that variable control, e.g., by changing parameters of a variable, leads to deeper processing (Chang and Linn 2013; de Jong and Njoo 1992; Nerdel 2003; Trey and Khan 2008). Variable control may be helpful, in particular, for acquiring intuitive knowledge (Swaak and de Jong 2001). However, it bears the risk of mental exhaustion and thus may have detrimental effects on knowledge acquisition (Sweller 1998). Students (N = 118) from four chemistry classes followed inquiry cycles using the software Molecular Workbench (Xie and Tinker 2006). Variable control was varied across the conditions (1) No-Manipulation group and (2) Manipulation group. By adding a third condition, (3) Manipulation-Plus group, we tested whether adding an active hypothesis phase prepares students before they change parameters of a variable. As expected, students in the Manipulation group and Manipulation-Plus group performed better concerning intuitive knowledge (d = 1.14) than students in the No-Manipulation group. On a descriptive level, results indicated higher cognitive effort in the Manipulation group and the Manipulation-Plus group than in the No-Manipulation group. Unexpectedly, students in the Manipulation-Plus group did not benefit from the active hypothesis phase (intuitive knowledge: d = .36). Findings show that students benefit from variable control. Furthermore, findings point toward the direction that variable control evokes desirable difficulties (Bjork and Linn 2006).
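
    The effect sizes reported above (d = 1.14 and d = .36) are Cohen's d values; assuming the conventional pooled-standard-deviation definition for two groups:

        d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
        \qquad s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

    By the usual rules of thumb (0.2 small, 0.5 medium, 0.8 large), d = 1.14 is a large effect and d = .36 a small-to-medium one.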

  14. π Scope: python based scientific workbench with visualization tool for MDSplus data

    NASA Astrophysics Data System (ADS)

    Shiraiwa, S.

    2014-10-01

    πScope is a Python-based scientific data analysis and visualization tool built on wxPython and Matplotlib. Although it is designed to be a generic tool, the primary motivations for developing the new software are 1) to provide an updated tool to browse MDSplus data, with functionalities beyond dwscope and jScope, and 2) to provide a universal foundation for constructing interface tools to perform computer simulation and modeling for Alcator C-Mod. It provides many features for visualizing MDSplus data during tokamak experiments, including overplotting different signals and discharges, various plot types (line, contour, image, etc.), in-panel data analysis using Python scripts, and publication-quality graphics generation. Additionally, the logic to produce multi-panel plots is designed to be backward compatible with dwscope, enabling smooth migration for dwscope users. πScope uses multi-threading to reduce data transfer latency, and its object-oriented design makes it easy to modify and expand, while its open source nature allows portability. A built-in tree data browser allows a user to approach the data structure both from a GUI and a script, enabling relatively complex data analysis workflows to be built quickly. As an example, an IDL-based interface to perform GENRAY/CQL3D simulations was ported to πScope, allowing LHCD simulations to be run between shots using C-Mod experimental profiles. This workflow is being used to generate a large database to develop an LHCD actuator model for the plasma control system. Supported by USDoE Award DE-FC02-99ER54512.

  15. Biomechanics of cervical tooth region and noncarious cervical lesions of different morphology; three-dimensional finite element analysis.

    PubMed

    Jakupović, Selma; Anić, Ivica; Ajanović, Muhamed; Korać, Samra; Konjhodžić, Alma; Džanković, Aida; Vuković, Amra

    2016-01-01

    The present study aims to investigate the influence of the presence and shape of cervical lesions on the biomechanical behavior of a mandibular first premolar, subjected to two types of occlusal loading, using the three-dimensional (3D) finite element method (FEM). 3D models of the mandibular premolar were created from a micro-computed tomography X-ray image: a model of the sound mandibular premolar, a model with a wedge-shaped cervical lesion (V lesion), and a model with a saucer-shaped cervical lesion (U lesion). By FEM, straining of the tooth tissues under functional and nonfunctional occlusal loading of 200 N was analyzed. For the analysis, the following software was used: CTAn 1.10 and ANSYS Workbench (version 14.0). The results are presented as von Mises stress. Values of calculated stress in all tooth structures were higher under nonfunctional occlusal loading, while functional loading resulted in a homogeneous stress distribution. Nonfunctional load in the cervical area of the sound tooth model, as well as in the sub-superficial layer of the enamel, resulted in significant stress (over 50 MPa). The highest stress concentration on the models with lesions was noticed at the apex of the V-shaped lesion, while stress in the saucer-shaped U lesion was significantly lower and distributed over a wider area. The type of occlusal loading has the greatest influence on cervical stress intensity. The geometric shape of an existing lesion is very important in the distribution of internal stress. Compared to U-shaped lesions, V-shaped lesions show significantly higher stress concentrations under load. Continued exposure to such stress would lead to lesion progression.
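
    For reference, the von Mises (equivalent) stress reported throughout is a scalar summary of the full 3D stress state, computed from the principal stresses as:

        \sigma_{\mathrm{vM}} = \sqrt{\tfrac{1}{2}\left[(\sigma_1 - \sigma_2)^2 + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2\right]}

    The 50 MPa figure quoted for the cervical enamel is therefore a single equivalent stress, not a component of the stress tensor.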

  16. Large Diameter Femoral Heads Impose Significant Alterations on the Strains Developed on Femoral Component and Bone: A Finite Element Analysis

    PubMed Central

    Theodorou, E.G; Provatidis, C.G; Babis, G.C; Georgiou, C.S; Megas, P.D

    2011-01-01

    Total Hip Arthroplasty aims at fully recreating a functional hip joint. Over the past years, modular implant systems have become common practice and are widely used because of the surgical options they provide. In addition, big femoral heads have been introduced, providing more flexibility for the surgeon. The current study aims at investigating the effects that femoral heads of bigger diameter may impose on the mechanical behavior of the bone-implant assembly. Using data acquired from computed tomography scans and a coordinate measuring machine, a cadaveric femur and a Profemur-E modular stem were fully digitized, leading to a three-dimensional finite element model in ANSYS Workbench. Strains and stresses were then calculated, focusing on areas of clinical interest based on Gruen zones: the calcar and the corresponding area below the greater trochanter in the proximal femur, the stem tip region, and a profile line along the linea aspera. The finite element analysis revealed that the use of large-diameter heads produces significant changes in strain development within the bone volume, especially on the lateral side. The application of Frost’s law of bone remodeling validated the hypothesis that normal bone growth occurs for all diameters. However, in the calcar area lower strain values were recorded compared with the reference model featuring a 28 mm femoral head, while along the linea aspera and in the stem tip area higher values were recorded. Finally, stresses calculated on the modular neck revealed increased values, but without reaching the yield strength of the titanium alloy used. PMID:21792381

  17. Large diameter femoral heads impose significant alterations on the strains developed on femoral component and bone: a finite element analysis.

    PubMed

    Theodorou, E G; Provatidis, C G; Babis, G C; Georgiou, C S; Megas, P D

    2011-01-01

    Total Hip Arthroplasty aims at fully recreating a functional hip joint. Over the past years, modular implant systems have become common practice and are widely used because of the surgical options they provide. In addition, big femoral heads have been introduced, providing more flexibility for the surgeon. The current study aims at investigating the effects that femoral heads of bigger diameter may impose on the mechanical behavior of the bone-implant assembly. Using data acquired from computed tomography scans and a coordinate measuring machine, a cadaveric femur and a Profemur-E modular stem were fully digitized, leading to a three-dimensional finite element model in ANSYS Workbench. Strains and stresses were then calculated, focusing on areas of clinical interest based on Gruen zones: the calcar and the corresponding area below the greater trochanter in the proximal femur, the stem tip region, and a profile line along the linea aspera. The finite element analysis revealed that the use of large-diameter heads produces significant changes in strain development within the bone volume, especially on the lateral side. The application of Frost's law of bone remodeling validated the hypothesis that normal bone growth occurs for all diameters. However, in the calcar area lower strain values were recorded compared with the reference model featuring a 28 mm femoral head, while along the linea aspera and in the stem tip area higher values were recorded. Finally, stresses calculated on the modular neck revealed increased values, but without reaching the yield strength of the titanium alloy used.

  18. Graphical user interface for a dual-module EMCCD x-ray detector array

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2011-03-01

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrument Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k × 1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.
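
    A sketch of the module-stitching step described above, in Python rather than the LabVIEW implementation: two equally sized EMCCD frames placed side by side to form the 2k × 1k mosaic. Real stitching would also need gain and overlap corrections, which are omitted here.

      # Combine two 1k x 1k module frames into one 2k-wide x 1k-tall image.
      import numpy as np

      def stitch(left: np.ndarray, right: np.ndarray) -> np.ndarray:
          assert left.shape == right.shape
          return np.hstack((left, right))

      frame = stitch(np.zeros((1024, 1024), np.uint16),
                     np.full((1024, 1024), 42, np.uint16))
      print(frame.shape)  # (1024, 2048): 2k pixels wide, 1k tall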

  19. Digital Immersive Virtual Environments and Instructional Computing

    ERIC Educational Resources Information Center

    Blascovich, Jim; Beall, Andrew C.

    2010-01-01

    This article reviews theory and research relevant to the development of digital immersive virtual environment-based instructional computing systems. The review is organized within the context of a multidimensional model of social influence and interaction within virtual environments that models the interaction of four theoretical factors: theory…

  20. Perspectives on Emerging/Novel Computing Paradigms and Future Aerospace Workforce Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    2003-01-01

    The accelerating pace of computing technology development shows no signs of abating. Computing power of 100 Tflop/s is likely to be reached by 2004, and 1 Pflop/s (10^15 Flop/s) by 2007. The fundamental physical limits of computation, including information storage limits, communication limits, and computation rate limits, will likely be reached by the middle of the present millennium. To overcome these limits, novel technologies and new computing paradigms will be developed. An attempt is made in this overview to put the diverse activities related to new computing paradigms in perspective and to set the stage for the succeeding presentations. The presentation is divided into five parts. In the first part, a brief historical account is given of the development of computer and networking technologies. The second part provides brief overviews of three emerging computing paradigms: grid, ubiquitous, and autonomic computing. The third part lists future computing alternatives and the characteristics of the future computing environment. The fourth part describes future aerospace workforce research, learning, and design environments. The fifth part lists the objectives of the workshop and some of the sources of information on future computing paradigms.

  1. Encapsulating model complexity and landscape-scale analyses of state-and-transition simulation models: an application of ecoinformatics and juniper encroachment in sagebrush steppe ecosystems

    USGS Publications Warehouse

    O'Donnell, Michael

    2015-01-01

    State-and-transition simulation modeling relies on knowledge of vegetation composition and structure (states) that describe community conditions, mechanistic feedbacks such as fire that can affect vegetation establishment, and ecological processes that drive community conditions as well as the transitions between these states. However, as the need for modeling larger and more complex landscapes increases, a more advanced awareness of computing resources becomes essential. The objectives of this study include identifying challenges of executing state-and-transition simulation models, identifying common bottlenecks of computing resources, developing a workflow and software that enable parallel processing of Monte Carlo simulations, and identifying the advantages and disadvantages of different computing resources. To address these objectives, this study used the ApexRMS® SyncroSim software and embarrassingly parallel tasks of Monte Carlo simulations on a single multicore computer and on distributed computing systems. The results demonstrated that state-and-transition simulation models scale best in distributed computing environments, such as high-throughput and high-performance computing, because these environments disseminate the workloads across many compute nodes, thereby supporting analysis of larger landscapes, higher-spatial-resolution vegetation products, and more complex models. Using a case study and five different computing environments, the top result (high-throughput computing versus serial computation) indicated an approximately 96.6% decrease in computing time. With a single multicore compute node (the bottom result), computing time decreased by 81.8% relative to serial computation. These results provide insight into the tradeoffs of using different computing resources when research necessitates advanced integration of ecoinformatics incorporating large and complicated data inputs and models.
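
    The Monte Carlo replicates described above are embarrassingly parallel, so each realization can run independently on a separate core or node. A minimal sketch of that pattern on a single multicore machine follows; run_replicate is a hypothetical stand-in for one SyncroSim model realization, not the study's actual workflow code.

      # Farm independent Monte Carlo replicates out to all local cores.
      from multiprocessing import Pool
      import random

      def run_replicate(seed: int) -> float:
          """Placeholder for one state-and-transition model realization."""
          rng = random.Random(seed)
          return sum(rng.random() for _ in range(100_000))

      if __name__ == '__main__':
          with Pool() as pool:                              # one worker per core
              results = pool.map(run_replicate, range(64))  # 64 replicates
          print(len(results), 'replicates completed')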

  2. Color graphics, interactive processing, and the supercomputer

    NASA Technical Reports Server (NTRS)

    Smith-Taylor, Rudeen

    1987-01-01

    The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment are examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer and the utilization of the graphics software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.

  3. Open environments to support systems engineering tool integration: A study using the Portable Common Tool Environment (PCTE)

    NASA Technical Reports Server (NTRS)

    Eckhardt, Dave E., Jr.; Jipping, Michael J.; Wild, Chris J.; Zeil, Steven J.; Roberts, Cathy C.

    1993-01-01

    A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. The project and the features of PCTE used are described, experience with the use of the Emeraude environment over the project time frame is summarized, and several related areas for future research are identified.

  4. Emerging and Future Computing Paradigms and Their Impact on the Research, Training, and Design Environments of the Aerospace Workforce

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    2003-01-01

    The document contains the proceedings of the training workshop on Emerging and Future Computing Paradigms and their impact on the Research, Training and Design Environments of the Aerospace Workforce. The workshop was held at NASA Langley Research Center, Hampton, Virginia, March 18 and 19, 2003. The workshop was jointly sponsored by Old Dominion University and NASA. Workshop attendees came from NASA, other government agencies, industry and universities. The objectives of the workshop were to a) provide broad overviews of the diverse activities related to new computing paradigms, including grid computing, pervasive computing, high-productivity computing, and the IBM-led autonomic computing; and b) identify future directions for research that have high potential for future aerospace workforce environments. The format of the workshop included twenty-one half-hour overview-type presentations and three exhibits by vendors.

  5. The Contribution of Visualization to Learning Computer Architecture

    ERIC Educational Resources Information Center

    Yehezkel, Cecile; Ben-Ari, Mordechai; Dreyfus, Tommy

    2007-01-01

    This paper describes a visualization environment and associated learning activities designed to improve learning of computer architecture. The environment, EasyCPU, displays a model of the components of a computer and the dynamic processes involved in program execution. We present the results of a research program that analysed the contribution of…

  6. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  7. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  8. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  9. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  10. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 16 2014-07-01 2014-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  11. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 16 2012-07-01 2012-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  12. 40 CFR Appendix C to Part 66 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 66 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ASSESSMENT AND COLLECTION OF NONCOMPLIANCE PENALTIES BY EPA Pt. 66, App. C Appendix C to Part 66—Computer...

  13. 40 CFR Appendix C to Part 67 - Computer Program

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 15 2010-07-01 2010-07-01 false Computer Program C Appendix C to Part 67 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) EPA APPROVAL OF STATE NONCOMPLIANCE PENALTY PROGRAM Pt. 67, App. C Appendix C to Part 67—Computer Program Note...

  14. SNS programming environment user's guide

    NASA Technical Reports Server (NTRS)

    Tennille, Geoffrey M.; Howser, Lona M.; Humes, D. Creig; Cronin, Catherine K.; Bowen, John T.; Drozdowski, Joseph M.; Utley, Judith A.; Flynn, Theresa M.; Austin, Brenda A.

    1992-01-01

    The computing environment of the Supercomputing Network Subsystem (SNS) of the Central Scientific Computing Complex at NASA Langley is briefly described. The major SNS computers are a CRAY-2, a CRAY Y-MP, a CONVEX C-210, and a CONVEX C-220. The software common to all of these computers is described, including the UNIX operating system, computer graphics, networking utilities, mass storage, and mathematical libraries. Also described are file management, validation, SNS configuration, documentation, and customer services.

  15. Sensor sentinel computing device

    DOEpatents

    Damico, Joseph P.

    2016-08-02

    Technologies pertaining to authenticating data output by sensors in an industrial environment are described herein. A sensor sentinel computing device receives time-series data from a sensor by way of a wireline connection. The sensor sentinel computing device generates a validation signal that is a function of the time-series data. The sensor sentinel computing device then transmits the validation signal to a programmable logic controller in the industrial environment.
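
    The patent abstract does not specify the validation function; the sketch below assumes one plausible realization, an HMAC tag computed over each timestamped sample, so a PLC holding the same key can verify the data were not altered in transit. The key and the packing format are illustrative assumptions.

      # Tag each sensor sample with an HMAC the PLC can verify.
      import hashlib
      import hmac
      import struct

      SECRET = b'shared-sentinel-key'  # hypothetical provisioned secret

      def validation_signal(timestamp: float, value: float) -> bytes:
          payload = struct.pack('!dd', timestamp, value)
          return hmac.new(SECRET, payload, hashlib.sha256).digest()

      tag = validation_signal(1700000000.0, 3.14)
      print(tag.hex()[:16], '... transmitted alongside the sample')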

  16. Learners' Perceptions and Illusions of Adaptivity in Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Vandewaetere, Mieke; Vandercruysse, Sylke; Clarebout, Geraldine

    2012-01-01

    Research on computer-based adaptive learning environments has shown exemplary growth. Although the mechanisms of effective adaptive instruction are unraveled systematically, little is known about the relative effect of learners' perceptions of adaptivity in adaptive learning environments. As previous research has demonstrated that the learners'…

  17. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, work stations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, work stations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  18. Some key considerations in evolving a computer system and software engineering support environment for the space station program

    NASA Technical Reports Server (NTRS)

    Mckay, C. W.; Bown, R. L.

    1985-01-01

    The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.

  19. Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness

    DTIC Science & Technology

    2017-08-08

    Usability Studies In Virtual And Traditional Computer Aided Design Environments For Spatial Awareness. Dr. Syed Adeel Ahmed, Xavier University of ... virtual environment with wand interfaces compared directly with a workstation non-stereoscopic traditional CAD interface with keyboard and mouse. In ... navigate through a virtual environment. The wand interface provides a significantly improved means of interaction. This study quantitatively measures the ...

  20. Additional Security Considerations for Grid Management

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.

  1. Visualization of unsteady computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Haimes, Robert

    1994-11-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. This environment requires supercomputer-class resources: massively parallel processors (MPPs), as well as clusters of workstations acting as a single MPP (by concurrently working on the same task), provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent development, made possible by the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address the visualization of 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00, make up the bulk of this report.

  2. Visualization of unsteady computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1994-01-01

    A brief summary of the computer environment used for calculating three-dimensional unsteady Computational Fluid Dynamics (CFD) results is presented. This environment requires supercomputer-class resources: massively parallel processors (MPPs), as well as clusters of workstations acting as a single MPP (by concurrently working on the same task), provide the required computational bandwidth for CFD calculations of transient problems. Clusters of reduced instruction set computer (RISC) workstations are a recent development, made possible by the low cost and high performance that workstation vendors provide. With the proper software, such a cluster can act as a multiple instruction/multiple data (MIMD) machine. A new set of software tools is being designed specifically to address the visualization of 3D unsteady CFD results in these environments. Three user's manuals for the parallel version of Visual3, pV3, revision 1.00, make up the bulk of this report.

  3. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

    This report describes the design of the architecture and a performance study of a parallel computing environment for Monte Carlo simulation for particle therapy planning, using a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speedup of approximately 28 times over a single-threaded architecture, combined with improved stability. A study of methods for optimizing system operation also indicated lower costs.

  4. Toward an automated parallel computing environment for geosciences

    NASA Astrophysics Data System (ADS)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast-growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, taking full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate the integration of high-performance computing with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.

  5. Effects on Training Using Illumination in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Maida, James C.; Novak, M. S. Jennifer; Mueller, Kristian

    1999-01-01

    Camera-based tasks are commonly performed during orbital operations, and orbital lighting conditions, such as high-contrast shadowing and glare, are a factor in performance. Computer-based training using virtual environments is a common tool used to make and keep crew members proficient. If computer-based training included some of these harsh lighting conditions, would the crew increase their proficiency? The project goal was to determine whether computer-based training increases proficiency if one trains for a camera-based task using computer-generated virtual environments with enhanced lighting conditions, such as shadows and glare, rather than the color-shaded computer images normally used in simulators. Previous experiments were conducted using a two-degree-of-freedom docking system. Test subjects had to align a boresight camera using a hand controller with one axis of translation and one axis of rotation. Two sets of subjects were trained on two computer simulations using computer-generated virtual environments, one with simulated lighting and one without. Results revealed that when subjects were constrained by time and accuracy, those who trained with simulated lighting conditions performed significantly better than those who did not. To reinforce these results for speed and accuracy, the task complexity was increased.

  6. Computer-aided design development transition for IPAD environment

    NASA Technical Reports Server (NTRS)

    Owens, H. G.; Mock, W. D.; Mitchell, J. C.

    1980-01-01

    The relationship of federally sponsored computer-aided design/computer-aided manufacturing (CAD/CAM) programs to the aircraft life cycle design process, an overview of NAAD's CAD development program, an evaluation of the CAD design process, a discussion of the current computing environment within which NAAD is developing its CAD system, some of the advantages/disadvantages of the NAAD-IPAD approach, and CAD developments during the transition into the IPAD system are discussed.

  7. Computers and Individualized Instruction: Moving to Alternative Learning Environments.

    ERIC Educational Resources Information Center

    Robbat, Richard J.

    The overall focus of this booklet is on planning for change that allows for integration of computers into articulated learning environments that will enhance the learning goal of students. The first chapter presents four major themes to increase the likelihood of combining computers and individualized instruction in schools: (1) a revitalized form…

  8. Examining Student Outcomes in University Computer Laboratory Environments: Issues for Educational Management

    ERIC Educational Resources Information Center

    Newby, Michael; Marcoulides, Laura D.

    2008-01-01

    Purpose: The purpose of this paper is to model the relationship between student performance, student attitudes, and computer laboratory environments. Design/methodology/approach: Data were collected from 234 college students enrolled in courses that involved the use of a computer to solve problems and provided the laboratory experience by means of…

  9. 40 CFR 721.91 - Computation of estimated surface water concentrations: Instructions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Computation of estimated surface water concentrations: Instructions. 721.91 Section 721.91 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT SIGNIFICANT NEW USES OF CHEMICAL SUBSTANCES Certain Significant New Uses § 721.91 Computation of...

  10. 40 CFR 721.91 - Computation of estimated surface water concentrations: Instructions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Computation of estimated surface water concentrations: Instructions. 721.91 Section 721.91 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT SIGNIFICANT NEW USES OF CHEMICAL SUBSTANCES Certain Significant New Uses § 721.91 Computation of...

  11. Heterogeneity in Health Care Computing Environments

    PubMed Central

    Sengupta, Soumitra

    1989-01-01

    This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems heterogeneity creates in developing integrated medical information systems. The need for comprehensive institutional goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.

  12. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can therefore move into the cloud to enhance their computational performance and draw on large-scale cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system instead exploits the pass-through mode of the PCI Express (PCI-E) channel. Because each VM can access the underlying GPUs directly, applications show almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs from each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  13. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can therefore move into the cloud to enhance their computational performance and draw on large-scale cloud computing resources while reducing the expense of computation. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system instead exploits the pass-through mode of the PCI Express (PCI-E) channel. Because each VM can access the underlying GPUs directly, applications show almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs from each VM on demand, VMs on the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.
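
    A sketch of the on-demand GPU multiplexing idea, driven from the control domain with the Xen xl toolstack's PCI hot-plug commands. The domain names and the GPU's PCI address are hypothetical, and this is only one plausible way to realize the scheme the abstract describes, not the authors' implementation.

      # Move a passed-through GPU from one VM to another via xl hot plug/unplug.
      import subprocess

      GPU_BDF = '0000:01:00.0'  # hypothetical PCI address of the GPU

      def move_gpu(src_domain: str, dst_domain: str) -> None:
          """Detach the GPU from one running VM and attach it to another."""
          subprocess.run(['xl', 'pci-detach', src_domain, GPU_BDF], check=True)
          subprocess.run(['xl', 'pci-attach', dst_domain, GPU_BDF], check=True)

      # e.g. hand the GPU from a finished alignment job to a waiting one:
      # move_gpu('biovm1', 'biovm2')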

  14. New computing systems, future computing environment, and their implications on structural analysis and design

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  15. PISCES: An environment for parallel scientific computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    The parallel implementation of scientific computing environment (PISCES) is a project to provide high-level programming environments for parallel MIMD computers. Pisces 1, the first of these environments, is a FORTRAN 77 based environment which runs under the UNIX operating system. The Pisces 1 user programs in Pisces FORTRAN, an extension of FORTRAN 77 for parallel processing. The major emphasis in the Pisces 1 design is in providing a carefully specified virtual machine that defines the run-time environment within which Pisces FORTRAN programs are executed. Each implementation then provides the same virtual machine, regardless of differences in the underlying architecture. The design is intended to be portable to a variety of architectures. Currently Pisces 1 is implemented on a network of Apollo workstations and on a DEC VAX uniprocessor via simulation of the task level parallelism. An implementation for the Flexible Computing Corp. FLEX/32 is under construction. An introduction to the Pisces 1 virtual computer and the FORTRAN 77 extensions is presented. An example of an algorithm for the iterative solution of a system of equations is given. The most notable features of the design are the provision for several granularities of parallelism in programs and the provision of a window mechanism for distributed access to large arrays of data.

  16. Using a Cloud-Based Computing Environment to Support Teacher Training on Common Core Implementation

    ERIC Educational Resources Information Center

    Robertson, Cory

    2013-01-01

    A cloud-based computing environment, Google Apps for Education (GAFE), has provided the Anaheim City School District (ACSD) a comprehensive and collaborative avenue for creating, sharing, and editing documents, calendars, and social networking communities. With this environment, teachers and district staff at ACSD are able to utilize the deep…

  17. Determinants of Computer Self-Efficacy--An Examination of Learning Motivations and Learning Environments

    ERIC Educational Resources Information Center

    Hsu, Wen-Kai K.; Huang, Show-Hui S.

    2006-01-01

    The purpose of this article is to discuss determinants of computer self-efficacy from the perspective of participant internal learning motivations and external learning environments. The former consisted of three motivations--interest, trend, and employment--while the latter comprised two environments--home and school. Through an intermediate…

  18. From Virtual Environments to Physical Environments: Exploring Interactivity in Ubiquitous-Learning Systems

    ERIC Educational Resources Information Center

    Peng, Hsinyi; Chou, Chien; Chang, Chun-Yu

    2008-01-01

    Computing devices and applications are now used beyond the desktop, in diverse environments, and this trend toward ubiquitous computing is evolving. In this study, we re-visit the interactivity concept and its applications for interactive function design in a ubiquitous-learning system (ULS). Further, we compare interactivity dimensions and…

  19. A Framework for the Evaluation of CASE Tool Learnability in Educational Environments

    ERIC Educational Resources Information Center

    Senapathi, Mali

    2005-01-01

    The aim of the research is to derive a framework for the evaluation of Computer Aided Software Engineering (CASE) tool learnability in educational environments. Drawing from the literature of Human Computer Interaction and educational research, a framework for evaluating CASE tool learnability in educational environments is derived. The two main…

  20. Mathematical Language Development and Talk Types in Computer Supported Collaborative Learning Environments

    ERIC Educational Resources Information Center

    Symons, Duncan; Pierce, Robyn

    2015-01-01

    In this study we examine the use of cumulative and exploratory talk types in a year 5 computer supported collaborative learning environment. The focus for students in this environment was to participate in mathematical problem solving, with the intention of developing the proficiencies of problem solving and reasoning. Findings suggest that…

  1. Examining Metacognitive Processes in Exploratory Computer-Based Learning Environments Using Activity Log Analysis

    ERIC Educational Resources Information Center

    Chang, Yoo Kyung

    2010-01-01

    Metacognition is widely studied for its influence on the effectiveness of learning. With Exploratory Computer-Based Learning Environments (ECBLE), metacognition is found to be especially important because these environments require adaptive metacognitive control by the learners due to their open-ended structure that allows for multiple learning…

  2. Are Cloud Environments Ready for Scientific Applications?

    NASA Astrophysics Data System (ADS)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to multiple cloud environments including NASA's Nebula environment, Amazon's EC2, Magellan at NERSC, and SGI's Cyclone system. We critically examined the performance of the applications on these systems. We also collected information on the usability of these cloud environments. In this talk we will present the results of our study focusing on the efficacy of using clouds for NASA's scientific applications.

  3. Military medical modeling and simulation in the 21st century.

    PubMed

    Moses, G; Magee, J H; Bauer, J J; Leitch, R

    2001-01-01

    As we enter the 21st century, military medicine struggles with critical issues. One of the most important issues is how to train medical personnel in peace for the realities of war. In April 1998, the General Accounting Office (GAO) reported, "Military medical personnel have almost no chance during peacetime to practice battlefield trauma care skills. As a result, physicians both within and outside the Department of Defense (DOD) believe that military medical personnel are not prepared to provide trauma care to the severely injured soldiers in wartime." With some of today's training methods disappearing, the challenge of providing both initial and sustainment training for almost 100,000 military medical personnel is becoming insurmountable. The "training gap" is huge and impediments to training are mounting. For example, restrictions on animal use are increasing and the cost of conducting live mass casualty exercises is prohibitive. Many medical simulation visionaries believe that four categories of medical simulation are emerging to address these challenges. These categories include PC-based multimedia, digital mannequins, virtual workbenches, and total immersion virtual reality (TIVR). The use of simulation training can provide a risk-free, realistic learning environment for the spectrum of medical skills training, from buddy-aid to trauma surgery procedures. This will, in turn, enhance limited hands-on training opportunities and revolutionize the way we train in peace to deliver medicine in war. High-fidelity modeling will permit manufacturers to prototype new devices before manufacture. Also, engineers will be able to test a device for themselves in a variety of simulated anatomical representations, permitting them to "practice medicine".

  4. Computational physics in RISC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhoades, C.E. Jr.

    The new high-performance Reduced Instruction Set Computers (RISC) promise near-Cray-level performance at near-personal-computer prices. This paper explores the performance, conversion, and compatibility issues associated with developing, testing, and using our traditional, large-scale simulation models in the RISC environments exemplified by the IBM RS6000 and MIPS R3000 machines. The questions of operating systems (CTSS versus UNIX), compilers (Fortran, C, pointers), and data are addressed in detail. Overall, it is concluded that the RISC environments are practical for a very wide range of computational physics activities. Indeed, all but the very largest two- and three-dimensional codes will work quite well, particularly in a single-user environment. Easily projected hardware-performance increases will revolutionize the field of computational physics. The way we do research will change profoundly in the next few years. There is, however, nothing more difficult to plan, nor more dangerous to manage, than the creation of this new world.

  5. Computational physics in RISC environments. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhoades, C.E. Jr.

    The new high-performance Reduced Instruction Set Computers (RISC) promise near-Cray-level performance at near-personal-computer prices. This paper explores the performance, conversion, and compatibility issues associated with developing, testing, and using our traditional, large-scale simulation models in the RISC environments exemplified by the IBM RS6000 and MIPS R3000 machines. The questions of operating systems (CTSS versus UNIX), compilers (Fortran, C, pointers), and data are addressed in detail. Overall, it is concluded that the RISC environments are practical for a very wide range of computational physics activities. Indeed, all but the very largest two- and three-dimensional codes will work quite well, particularly in a single-user environment. Easily projected hardware-performance increases will revolutionize the field of computational physics. The way we do research will change profoundly in the next few years. There is, however, nothing more difficult to plan, nor more dangerous to manage, than the creation of this new world.

  6. Evidence Accumulation and Change Rate Inference in Dynamic Environments.

    PubMed

    Radillo, Adrian E; Veliz-Cuba, Alan; Josić, Krešimir; Kilpatrick, Zachary P

    2017-06-01

    In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible change point counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation-based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments and map this computation to reduced models that perform inference using plausible neural mechanisms.
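
    The model described above infers both the environmental state and its rate of change. The sketch below is a simplified stand-in: it discretizes the unknown hazard (change) rate on a grid and recursively updates a joint posterior over state and hazard. This is coarser than the paper's exact posterior over change-point counts and its moment-closure reduction, but it illustrates the computation; all parameters are illustrative.

      # Joint inference of a two-state environment and its change rate.
      import numpy as np

      rng = np.random.default_rng(0)
      hazards = np.linspace(0.01, 0.5, 25)          # candidate change rates
      post = np.full((2, hazards.size), 1.0)        # joint posterior p(state, h)
      post /= post.sum()
      true_state, true_h = 1, 0.1

      for _ in range(500):
          if rng.random() < true_h:                 # environment may switch
              true_state = 1 - true_state
          obs = rng.normal(loc=2 * true_state - 1)  # noisy observation of +/-1
          like = np.array([np.exp(-0.5 * (obs + 1) ** 2),   # state 0: mean -1
                           np.exp(-0.5 * (obs - 1) ** 2)])  # state 1: mean +1
          pred = (1 - hazards) * post + hazards * post[::-1]  # possible change
          post = like[:, None] * pred
          post /= post.sum()

      print('P(state=1) =', post[1].sum(),
            ' E[h] =', (post.sum(axis=0) * hazards).sum())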

  7. A Programming Language Environment for the Unassisted Learner.

    ERIC Educational Resources Information Center

    Thomas, P. G.; Ince, D. C.

    1982-01-01

    Describes the computing environment and command language for a new programing language called OUSBASIC which is designed to enable naive users to interact usefully, with little assistance, with a computer system. (Author/CHC)

  8. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and storage of data, and so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory-bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel, distributed across cyberinfrastructure environments having different architectures. We have used the Pegasus Workflow Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing) involves establishing a distributed environment, where issues of, e.g., remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services.
    In most of our work, we provisioned compute resources using a custom application called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end user, and can be considered a first-generation tool suitable for scientists running their applications on disparate environments.
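
    The Kepler periodicity atlas is a CPU-bound, per-light-curve computation, which is why it suits cloud processing. The sketch below reproduces that per-curve pattern on a toy light curve using astropy's Lomb-Scargle implementation; the actual periodogram service code is written in C, as noted above, and astropy here is a stand-in.

      # Find the dominant period in one irregularly sampled light curve.
      import numpy as np
      from astropy.timeseries import LombScargle

      def periodogram(t: np.ndarray, y: np.ndarray):
          """Frequencies and powers for one light curve."""
          return LombScargle(t, y).autopower()

      # Toy light curve: a 10-day sinusoid, irregularly sampled over 90 days.
      t = np.sort(np.random.default_rng(1).uniform(0, 90, 500))
      y = (np.sin(2 * np.pi * t / 10.0)
           + 0.1 * np.random.default_rng(2).normal(size=500))
      freq, power = periodogram(t, y)
      print('best period [d]:', 1 / freq[np.argmax(power)])  # ~10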

  9. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  10. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications such as data storage, computing processes, document sharing, and even management information system services as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds alongside their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds and in intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  11. An overview of computer viruses in a research environment

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1991-01-01

    The threat of attack by computer viruses is in reality a very small part of a much more general threat, specifically threats aimed at subverting computer security. Here, computer viruses are examined as malicious logic in a research and development environment. A relation is drawn between the viruses and various models of security and integrity. Current research techniques aimed at controlling the threats posed to computer systems by viruses in particular, and malicious logic in general, are examined. Finally, a brief examination of the vulnerabilities of research and development systems that malicious logic and computer viruses may exploit is undertaken.

  12. Dynamic Scaffolding of Socially Regulated Learning in a Computer-Based Learning Environment

    ERIC Educational Resources Information Center

    Molenaar, Inge; Roda, Claudia; van Boxtel, Carla; Sleegers, Peter

    2012-01-01

    The aim of this study is to test the effects of dynamically scaffolding social regulation of middle school students working in a computer-based learning environment. Dyads in the scaffolding condition (N=56) are supported with computer-generated scaffolds and students in the control condition (N=54) do not receive scaffolds. The scaffolds are…

  13. Learner-Environment Fit: University Students in a Computer Room.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The purpose of this study was to apply the theory of person-environment fit in assessing student well-being in a university computer room. Subjects were 12 students enrolled in a computer literacy course. Their learning behavior and well-being were evaluated on the basis of three symptoms of video display terminal stress usually found in the…

  14. Computational Toxicology as Implemented by the U.S. EPA: Providing High Throughput Decision Support Tools for Screening and Assessing Chemical Exposure, Hazard and Risk

    EPA Science Inventory

    Computational toxicology is the application of mathematical and computer models to help assess chemical hazards and risks to human health and the environment. Supported by advances in informatics, high-throughput screening (HTS) technologies, and systems biology, the U.S. Environ...

  15. A Two-Tier Test-Based Approach to Improving Students' Computer-Programming Skills in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Yang, Tzu-Chi; Hwang, Gwo-Jen; Yang, Stephen J. H.; Hwang, Gwo-Haur

    2015-01-01

    Computer programming is an important skill for engineering and computer science students. However, teaching and learning programming concepts and skills has been recognized as a great challenge to both teachers and students. Therefore, the development of effective learning strategies and environments for programming courses has become an important…

  16. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…
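
    The abstract is truncated, but the general technique of estimating a learner's ability by Bayesian updating can be illustrated with a simple grid approximation under a one-parameter (Rasch) response model. The model choice and the response data below are assumptions made for illustration, not the paper's actual method.

        #include <stdio.h>
        #include <math.h>

        #define GRID 101  /* discretize ability theta on [-4, 4] */

        /* Rasch model: probability of a correct answer given ability theta
         * and item difficulty b. */
        static double p_correct(double theta, double b)
        {
            return 1.0 / (1.0 + exp(-(theta - b)));
        }

        int main(void)
        {
            double theta[GRID], post[GRID], sum = 0.0;
            /* Hypothetical log-file data: item difficulties, 0/1 correctness. */
            double difficulty[] = { -1.0, 0.0, 0.5, 1.5 };
            int    correct[]    = {  1,   1,   0,   0  };
            int    n_items = 4;

            for (int k = 0; k < GRID; k++) {
                theta[k] = -4.0 + 8.0 * k / (GRID - 1);
                post[k]  = 1.0;                      /* flat prior */
                for (int i = 0; i < n_items; i++) {
                    double p = p_correct(theta[k], difficulty[i]);
                    post[k] *= correct[i] ? p : (1.0 - p);
                }
                sum += post[k];
            }

            /* Normalize and report the posterior mean as the ability estimate. */
            double mean = 0.0;
            for (int k = 0; k < GRID; k++)
                mean += theta[k] * (post[k] / sum);
            printf("estimated ability: %.3f\n", mean);
            return 0;
        }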

  17. Technology Diffusion and Innovations in Music Education in a Notebook Computer Environment.

    ERIC Educational Resources Information Center

    Hagen, Sara L.

    Valley City State University (North Dakota) was the second university in the nation to adopt a notebook computer environment, supplying every faculty, staff member, administrator, and student with a laptop computer and 24-hour access to the World Wide Web. This paper outlines the innovations made in the music department to accommodate the infusion…

  18. Design of intelligent vehicle control system based on single chip microcomputer

    NASA Astrophysics Data System (ADS)

    Zhang, Congwei

    2018-06-01

    The smart car microprocessor uses the KL25ZV128VLK4 in the Freescale series of single-chip microcomputers. The image sampling sensor uses the CMOS digital camera OV7725. The acquired track data are processed by the corresponding algorithm to extract track sideline information. At the same time, pulse-width modulation (PWM) is used to control the motor and servo movements, and, based on the digital incremental PID algorithm, motor speed control and servo steering control are realized. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, and then complete the design of the intelligent car control system.
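
    The digital incremental PID law named above computes a change in the control output from the last three error samples, which suits PWM duty-cycle updates on a microcontroller. Below is a minimal generic C sketch of that textbook form; the gains, clamping range, and toy motor model are hypothetical placeholders, not values from this design.

        #include <stdio.h>

        typedef struct {
            double kp, ki, kd;  /* controller gains (hypothetical values)  */
            double e1, e2;      /* previous two error samples              */
            double output;      /* accumulated control output (duty cycle) */
        } IncPid;

        /* One incremental-PID step:
         *   du = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)
         * Only the increment is computed, so no separate integral
         * accumulator can wind up unbounded. */
        static double inc_pid_step(IncPid *pid, double setpoint, double measured)
        {
            double e  = setpoint - measured;
            double du = pid->kp * (e - pid->e1)
                      + pid->ki * e
                      + pid->kd * (e - 2.0 * pid->e1 + pid->e2);

            pid->e2 = pid->e1;
            pid->e1 = e;
            pid->output += du;

            /* Clamp to a valid PWM duty-cycle range. */
            if (pid->output > 100.0) pid->output = 100.0;
            if (pid->output < 0.0)   pid->output = 0.0;
            return pid->output;
        }

        int main(void)
        {
            IncPid speed = { 0.8, 0.2, 0.05, 0.0, 0.0, 0.0 };
            double rpm = 0.0;
            for (int k = 0; k < 5; k++) {
                double duty = inc_pid_step(&speed, 1000.0, rpm);
                rpm += 8.0 * duty;  /* toy first-order motor response */
                printf("step %d: duty %.1f%%, rpm %.0f\n", k, duty, rpm);
            }
            return 0;
        }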

  19. Cytoscape: the network visualization tool for GenomeSpace workflows.

    PubMed

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013.

  20. Characteristics study of the gears by the CAD/CAE

    NASA Astrophysics Data System (ADS)

    Wang, P. Y.; Chang, S. L.; Lee, B. Y.; Nguyen, D. H.; Cao, C. W.

    2017-09-01

    Gears are the most important transmission components in machines. The rapid development of machines in industry requires a shorter analysis process. Traditionally, gears are analyzed by first setting up the complete mathematical model, considering the profile of the cutter and the coordinate-system relationship between the machine and the cutter. It is a really complex and time-consuming process. Recently, CAD/CAE software has become well developed and useful in mechanical design. In this paper, the Autodesk Inventor® software is introduced to model the spherical gears first, and then the models can be transferred into ANSYS Workbench for finite element analysis. The process proposed in this paper helps engineers speed up the analysis of gears in the design stage.

  1. Databases and Associated Tools for Glycomics and Glycoproteomics.

    PubMed

    Lisacek, Frederique; Mariethoz, Julien; Alocci, Davide; Rudd, Pauline M; Abrahams, Jodie L; Campbell, Matthew P; Packer, Nicolle H; Ståhle, Jonas; Widmalm, Göran; Mullen, Elaine; Adamczyk, Barbara; Rojas-Macias, Miguel A; Jin, Chunsheng; Karlsson, Niclas G

    2017-01-01

    The access to biodatabases for glycomics and glycoproteomics has proven to be essential for current glycobiological research. This chapter presents available databases that are devoted to different aspects of glycobioinformatics. This includes oligosaccharide sequence databases, experimental databases, 3D structure databases (of both glycans and glyco-related proteins) and the association of glycans with tissue, disease, and proteins. Specific search protocols are also provided using tools associated with experimental databases for converting primary glycoanalytical data to glycan structural information. In particular, researchers using glycoanalysis methods by U/HPLC (GlycoBase), MS (GlycoWorkbench, UniCarb-DB, GlycoDigest), and NMR (CASPER) will benefit from this chapter. In addition, we also include information on how to utilize glycan structural information to query databases that associate glycans with proteins (UniCarbKB) and with interactions with pathogens (SugarBind).

  2. An expert system shell for inferring vegetation characteristics: Prototype help system (Task 1)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The NASA Vegetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. A prototype of the VEG subgoal HELP.SYSTEM has been completed and the Help System has been added to the VEG system. It is loaded when the user first clicks on the HELP.SYSTEM option in the Tool Box Menu. The Help System provides a user tool that supplies needed information. It also provides interactive tools the scientist may use to develop new help messages and to modify existing help messages that are attached to VEG screens. The system automatically manages the system and file operations needed to preserve new or modified help messages. The Help System was tested both as a help-system development tool and as a help-system user tool.

  3. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
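
    For a component with a constant (exponential) failure rate lambda and repair rate mu, the figures named above follow directly: MTTF = 1/lambda and steady-state unavailability U = lambda/(lambda + mu); an OR gate over independent basic events has top-event unavailability 1 - prod(1 - U_i). The C sketch below evaluates these formulas for hypothetical rates; it illustrates the arithmetic only and is not Isograph Reliability Workbench output.

        #include <stdio.h>

        int main(void)
        {
            /* Hypothetical per-block rates (per hour); not measured values. */
            double lambda[3] = { 1.0e-5, 4.0e-6, 2.5e-6 };  /* failure rates */
            double mu = 0.5;                                /* repair rate   */

            double u_top = 1.0;  /* running product of (1 - U_i) for the OR gate */
            for (int i = 0; i < 3; i++) {
                double mttf = 1.0 / lambda[i];            /* MTTF = 1/lambda */
                double u = lambda[i] / (lambda[i] + mu);  /* unavailability  */
                printf("block %d: MTTF %.3e h, unavailability %.3e\n", i, mttf, u);
                u_top *= 1.0 - u;
            }
            printf("top-event unavailability (OR gate): %.3e\n", 1.0 - u_top);
            return 0;
        }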

  4. Cytoscape: the network visualization tool for GenomeSpace workflows

    PubMed Central

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P.

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013. PMID:25165537

  5. An off-the-shelf, authentic, and versatile undergraduate molecular biology practical course.

    PubMed

    Whitworth, David E

    2015-01-01

    We provide a prepackaged molecular biology course, which has a broad context and is scalable to large numbers of students. It is provided complete with technical setup guidance, a reliable assessment regime, and can be readily implemented without any development necessary. Framed as a forensic examination of blue/white cloning plasmids, the course is a versatile workbench, adaptable to different degree subjects, and can be easily modified to undertake novel research as part of its teaching activities. Course activities include DNA extraction, RFLP, PCR, DNA sequencing, gel electrophoresis, and transformation, alongside a range of basic microbiology techniques. Students particularly appreciated the relevance of the practical to professional practice and the authenticity of the experimental work. © 2015 The International Union of Biochemistry and Molecular Biology.

  6. An expert system shell for inferring vegetation characteristics: Atmospheric techniques (Task G)

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1993-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The VEG Subgoals have been reorganized into categories. A new subgoal category 'Atmospheric Techniques' containing two new subgoals has been implemented. The subgoal Atmospheric Passes allows the scientist to take reflectance data measured at ground level and predict what the reflectance values would be if the data were measured at a different atmospheric height. The subgoal Atmospheric Corrections allows atmospheric corrections to be made to data collected from an aircraft or by a satellite to determine what the equivalent reflectance values would be if the data were measured at ground level. The report describes the implementation and testing of the basic framework and interface for the Atmospheric Techniques Subgoals.

  7. An expert system shell for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann; Harrison, Patrick R.

    1992-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The report describes the extensions that have been made to the first generation version of VEG. An interface to a file of unknown cover type data has been constructed. An interface that allows the results of VEG to be written to a file has been implemented. A learning system that learns class descriptions from a data base of historical cover type data and then uses the learned class descriptions to classify an unknown sample has been built. This system has an interface that integrates it into the rest of VEG. The VEG subgoal PROPORTION.GROUND.COVER has been completed and a number of additional techniques that infer the proportion of ground cover of a sample have been implemented.

  8. Asymmetric Base-Bleed Effect on Aerospike Plume-Induced Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Droege, Alan; D'Agostino, Mark; Lee, Young-Ching; Williams, Robert

    2004-01-01

    A computational heat transfer design methodology was developed to study the dual-engine linear aerospike plume-induced base-heating environment during one power-pack out, in ascent flight. It includes a three-dimensional, finite volume, viscous, chemically reacting, and pressure-based computational fluid dynamics formulation, a special base-bleed boundary condition, and a three-dimensional, finite volume, and spectral-line-based weighted-sum-of-gray-gases absorption computational radiation heat transfer formulation. A separate radiation model was used for diagnostic purposes. The computational methodology was systematically benchmarked. In this study, near-base radiative heat fluxes were computed, and they compared well with those measured during static linear aerospike engine tests. The base-heating environment of 18 trajectory points selected from three power-pack out scenarios was computed. The computed asymmetric base-heating physics were analyzed. The power-pack out condition has the most impact on convective base heating when it happens early in flight. The source of its impact comes from the asymmetric and reduced base bleed.

  9. A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    ERIC Educational Resources Information Center

    Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio

    2009-01-01

    Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…

  10. Making Visible the Behaviors that Influence Learning Environment: A Qualitative Exploration of Computer Science Classrooms

    ERIC Educational Resources Information Center

    Barker, Lecia J.; Garvin-Doxas, Kathy

    2004-01-01

    The authors conducted ethnographic research to provide deep understanding of the learning environment of a selection of computer science classrooms at a large, research university in the United States. Categories emerging from data analysis included (1) impersonal environment and guarded behavior; and (2) the creation and maintenance of informal…

  11. Touch in Computer-Mediated Environments: An Analysis of Online Shoppers' Touch-Interface User Experiences

    ERIC Educational Resources Information Center

    Chung, Sorim

    2016-01-01

    Over the past few years, one of the most fundamental changes in current computer-mediated environments has been input devices, moving from mouse devices to touch interfaces. However, most studies of online retailing have not considered device environments as retail cues that could influence users' shopping behavior. In this research, I examine the…

  12. pV3-Gold Visualization Environment for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa L.

    1997-01-01

    A new visualization environment, pV3-Gold, can be used during and after a computer simulation to extract and visualize the physical features in the results. This environment, which is an extension of the pV3 visualization environment developed at the Massachusetts Institute of Technology with guidance and support by researchers at the NASA Lewis Research Center, features many tools that allow users to display data in various ways.

  13. LifeWatch - a Large-scale eScience Infrastructure to Assist in Understanding and Managing our Planet's Biodiversity

    NASA Astrophysics Data System (ADS)

    Hernández Ernst, Vera; Poigné, Axel; Los, Walter

    2010-05-01

    Understanding and managing the complexity of the biodiversity system in relation to global changes concerning land use and climate change, with their social and economic implications, is crucial to mitigate species loss and biodiversity changes in general. The sustainable development and exploitation of existing biodiversity resources require flexible and powerful infrastructures offering, on the one hand, access to large-scale databases of observations and measures, to advanced analytical and modelling software, and to high performance computing environments and, on the other hand, the interlinkage of European scientific communities among each other and with national policies. The European Strategy Forum on Research Infrastructures (ESFRI) selected the "LifeWatch e-science and technology infrastructure for biodiversity research" as a promising development to construct facilities to contribute to meeting those challenges. LifeWatch collaborates with other selected initiatives (e.g. ICOS, ANAEE, NOHA, and LTER-Europa) to achieve the integration of the infrastructures at landscape and regional scales. This should result in a cooperating cluster of such infrastructures supporting an integrated approach for data capture and transmission, data management and harmonisation. Besides, facilities for exploration, forecasting, and presentation using heterogeneous and distributed data and tools should allow interdisciplinary scientific research at any spatial and temporal scale. LifeWatch is an example of a new generation of interoperable research infrastructures based on standards and a service-oriented architecture that allow for linkage with external resources and associated infrastructures. External data sources will be established data aggregators such as the Global Biodiversity Information Facility (GBIF) for species occurrences and other EU Networks of Excellence like the Long-Term Ecological Research Network (LTER), GMES, and GEOSS for terrestrial monitoring, the MARBEF network for marine data, and the Consortium for European Taxonomic Facilities (CETAF) and its European Distributed Institute for Taxonomy (EDIT) for taxonomic data. But also "smaller" networks and "volunteer scientists" may send data (e.g. GPS-supported species observations) to a LifeWatch repository. Autonomously operating wireless environmental sensors and other smart hand-held devices will contribute to increased data capture activities. In this way LifeWatch will directly underpin the development of GEOBON, the biodiversity component of GEOSS, the Global Earth Observation System. To overcome all major technical difficulties imposed by the variety of current and future technologies, protocols, data formats, etc., LifeWatch will define and use common open interfaces. For this purpose, the LifeWatch Reference Model was developed during the preparatory phase, specifying the service-oriented architecture underlying the ICT infrastructure. The Reference Model identifies key requirements and key architectural concepts to support workflows for scientific in-silico experiments, tracking of provenance, and semantic enhancement, besides meeting the functional requirements mentioned before. It provides guidelines for the specification and implementation of services and information models, as well as defining a number of generic services and models.
Another key issue addressed by the Reference Model is that the cooperation of many developer teams residing in many European countries has to be organized to obtain compatible results; conformance with the specifications and policies of the Reference Model will therefore be required. The LifeWatch Reference Model is based on the ORCHESTRA Reference Model for geospatial-oriented architectures and service networks, which provides a generic framework and has been endorsed as best practice by the Open Geospatial Consortium (OGC). The LifeWatch Infrastructure will allow (interdisciplinary) scientific researchers to collaborate by creating e-Laboratories or by composing e-Services which can be shared and jointly developed. To this end, a long-term vision for the LifeWatch Biodiversity Workbench Portal has been developed as a one-stop application for the LifeWatch infrastructure based on existing and emerging technologies. There the user can find all available resources such as data, workflows, tools, etc., and access LifeWatch applications that integrate different resources and provide key capabilities like resource discovery and visualisation, creation of workflows, creation and management of provenance, and the support of collaborative activities. While LifeWatch developers will construct components for solving generic LifeWatch tasks, users may add their own facilities to fulfil individual needs. Examples of applying the LifeWatch Reference Model and the LifeWatch Biodiversity Workbench Portal will be given.

  14. NOSTOS: a paper-based ubiquitous computing healthcare environment to support data capture and collaboration.

    PubMed

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2003-01-01

    In this paper, we present a new approach to clinical workplace computerization that departs from the window-based user interface paradigm. NOSTOS is an experimental computer-augmented work environment designed to support data capture and teamwork in an emergency room. NOSTOS combines multiple technologies, such as digital pens, walk-up displays, headsets, a smart desk, and sensors to enhance an existing paper-based practice with computer power. The physical interfaces allow clinicians to retain mobile paper-based collaborative routines and still benefit from computer technology. The requirements for the system were elicited from situated workplace studies. We discuss the advantages and disadvantages of augmenting a paper-based clinical work environment.

  15. Principled design for an integrated computational environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Disessa, A.A.

    Boxer is a computer language designed to be the base of an integrated computational environment providing a broad array of functionality -- from text editing to programming -- for naive and novice users. It stands in the line of Lisp-inspired languages (Lisp, Logo, Scheme), but differs from these in achieving much of its understandability from pervasive use of a spatial metaphor reinforced through suitable graphics. This paper first describes a set of learnability and understandability issues and then uses them to motivate design decisions made concerning Boxer and the environment in which it is embedded.

  16. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  17. Reproducible Earth observation analytics: challenges, ideas, and a study case on containerized land use change detection

    NASA Astrophysics Data System (ADS)

    Appel, Marius; Nüst, Daniel; Pebesma, Edzer

    2017-04-01

    Geoscientific analyses of Earth observation data typically involve a long path from data acquisition to scientific results and conclusions. Before starting the actual processing, scenes must be downloaded from the providers' platforms and the computing infrastructure needs to be prepared. The computing environment often requires specialized software, which in turn might have lots of dependencies. The software is often highly customized and provided without commercial support, which leads to rather ad-hoc systems and irreproducible results. To let other scientists reproduce the analyses, the full workspace including data, code, the computing environment, and documentation must be bundled and shared. Technologies such as virtualization or containerization allow for the creation of identical computing environments with relatively little effort. Challenges, however, arise when the volume of the data is too large, when computations are done in a cluster environment, or when complex software components such as databases are used. We discuss these challenges for the example of scalable Land use change detection on Landsat imagery. We present a reproducible implementation that runs R and the scalable data management and analytical system SciDB within a Docker container. Thanks to an explicit container recipe (the Dockerfile), this enables the all-in-one reproduction including the installation of software components, the ingestion of the data, and the execution of the analysis in a well-defined environment. We furthermore discuss possibilities how the implementation could be transferred to multi-container environments in order to support reproducibility on large cluster environments.

  18. Thermally assisted adiabatic quantum computation.

    PubMed

    Amin, M H S; Love, Peter J; Truncik, C J S

    2008-02-15

    We study the effect of a thermal environment on adiabatic quantum computation using the Bloch-Redfield formalism. We show that in certain cases the environment can enhance the performance in two different ways: (i) by introducing a time scale for thermal mixing near the anticrossing that is smaller than the adiabatic time scale, and (ii) by relaxation after the anticrossing. The former can enhance the scaling of computation when the environment is super-Ohmic, while the latter can only provide a prefactor enhancement. We apply our method to the case of adiabatic Grover search and show that performance better than classical is possible with a super-Ohmic environment, with no a priori knowledge of the energy spectrum.

  19. Environments for online maritime simulators with cloud computing capabilities

    NASA Astrophysics Data System (ADS)

    Raicu, Gabriel; Raicu, Alexandra

    2016-12-01

    This paper presents cloud computing environments, network principles and methods for graphical development in realistic naval simulation, naval robotics and virtual interactions. The aim of this approach is to achieve good simulation quality in large networked environments using open source solutions designed for educational purposes. Realistic rendering of maritime environments requires near real-time frameworks with enhanced computing capabilities during distance interactions. E-Navigation concepts coupled with the latest achievements in virtual and augmented reality will enhance the overall experience, leading to new developments and innovations. We have to deal with a multiprocessing situation, using advanced technologies and distributed applications with remote ship scenarios and the automation of ship operations.

  20. Transition of a Three-Dimensional Unsteady Viscous Flow Analysis from a Research Environment to the Design Environment

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne; Dorney, Daniel J.; Huber, Frank; Sheffler, David A.; Turner, James E. (Technical Monitor)

    2001-01-01

    The advent of advanced computer architectures and parallel computing have led to a revolutionary change in the design process for turbomachinery components. Two- and three-dimensional steady-state computational flow procedures are now routinely used in the early stages of design. Unsteady flow analyses, however, are just beginning to be incorporated into design systems. This paper outlines the transition of a three-dimensional unsteady viscous flow analysis from the research environment into the design environment. The test case used to demonstrate the analysis is the full turbine system (high-pressure turbine, inter-turbine duct and low-pressure turbine) from an advanced turboprop engine.

  1. Scaling Task Management in Space and Time: Reducing User Overhead in Ubiquitous-Computing Environments

    DTIC Science & Technology

    2005-03-28

    consequently users are torn between taking advantage of increasingly pervasive computing systems, and the price (in attention and skill) that they have to... advantage of the surrounding computing environments; and (c) that it is usable by non-experts. Second, from a software architect’s perspective, we...take full advantage of the computing systems accessible to them, much as they take advantage of the furniture in each physical space. In the example

  2. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  3. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  4. Computing, Environment and Life Sciences | Argonne National Laboratory

    Science.gov Websites

  5. Computer Assisted Virtual Environment - CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  6. Cloud-Based versus Local-Based Web Development Education: An Experimental Study in Learning Experience

    ERIC Educational Resources Information Center

    Pike, Ronald E.; Pittman, Jason M.; Hwang, Drew

    2017-01-01

    This paper investigates the use of a cloud computing environment to facilitate the teaching of web development at a university in the Southwestern United States. A between-subjects study of students in a web development course was conducted to assess the merits of a cloud computing environment instead of personal computers for developing websites.…

  7. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    2018-05-30

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  8. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    PubMed

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
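
    The load-prediction scheduler in this study is specific to its ECG model, but the basic pattern it builds on, dynamically distributing unevenly sized work items across CPU cores with OpenMP, can be sketched in C as below. The cell-update function and problem sizes are stand-ins, not the paper's simulation kernel.

        #include <stdio.h>
        #include <math.h>
        #include <omp.h>

        #define STEPS 1600    /* matches the 1600 simulated time steps */
        #define CELLS 20000   /* hypothetical number of model cells    */

        /* Stand-in for one cell update; cost varies per cell to mimic the
         * load imbalance that motivates dynamic scheduling. */
        static double cell_update(int step, int cell)
        {
            double acc = 0.0;
            for (int i = 0; i < 50 * (1 + cell % 7); i++)
                acc += sin(step * 1e-3 + cell + i);
            return acc;
        }

        int main(void)
        {
            double checksum = 0.0;

            /* Time steps stay sequential; the parallelism is across cells
             * within each step. */
            for (int step = 0; step < STEPS; step++) {
                double sum = 0.0;
                /* schedule(dynamic) hands out chunks at run time, so faster
                 * cores pick up extra work instead of idling. */
                #pragma omp parallel for schedule(dynamic, 256) reduction(+:sum)
                for (int c = 0; c < CELLS; c++)
                    sum += cell_update(step, c);
                checksum += sum;
            }

            printf("checksum %.3f using up to %d threads\n",
                   checksum, omp_get_max_threads());
            return 0;
        }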

  9. The Charging of Composites in the Space Environment

    NASA Technical Reports Server (NTRS)

    Czepiela, Steven A.

    1997-01-01

    Deep dielectric charging and subsequent electrostatic discharge in composite materials used on spacecraft have become greater concerns since composite materials are being used more extensively as main structural components. Deep dielectric charging occurs when high energy particles penetrate and deposit themselves in the insulating material of spacecraft components. These deposited particles induce an electric field in the material, which causes the particles to move and thus changes the electric field. The electric field continues to change until a steady state is reached between the incoming particles from the space environment and the particles moving away due to the electric field. An electrostatic discharge occurs when the electric field is greater than the dielectric strength of the composite material. The goal of the current investigation is to examine deep dielectric charging in composite materials and ascertain what modifications have to be made to the composite properties to alleviate any breakdown issues. A 1-D model was created. The space environment, which is calculated using the Environmental Workbench software, the composite material properties, and the electric field and voltage boundary conditions are input into the model. The output from the model is the charge density, electric field, and voltage distributions as functions of the depth into the material and time. Analyses using the model show that there should be no deep dielectric charging problem with conductive composites such as carbon fiber/epoxy. With insulating materials such as glass fiber/epoxy, Kevlar, and polymers, there is also no concern of deep dielectric charging problems with average day-to-day particle fluxes. However, problems can arise during geomagnetic substorms and solar particle events, where particle flux levels increase by several orders of magnitude and thus increase the electric field in the material by several orders of magnitude. Therefore, the second part of this investigation was an experimental attempt to measure the continuum electrical properties of a carbon fiber/epoxy composite, and to create a composite with tailorable conductivity without affecting its mechanical properties. The measurement of the conductivity and dielectric strength of carbon fiber/epoxy composites showed that these properties are surface-layer dominated and difficult to measure. In the second experimental task, the conductivity of a glass fiber/epoxy composite was increased by 3 orders of magnitude and the dielectric constant by approximately a factor of 16, with minimal change to the mechanical properties, by adding conductive carbon black to the epoxy.
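
    In one dimension, the link between deposited charge and the induced field is Gauss's law, dE/dx = rho(x)/eps, so the field follows from a running integral of the charge profile. The short C sketch below does this on a fixed grid with one grounded face as a simplifying boundary condition; the charge profile and material constants are invented for illustration and are not taken from the Environmental Workbench model.

        #include <stdio.h>

        #define N   100         /* grid cells through the laminate     */
        #define EPS 2.8e-11     /* permittivity, F/m (illustrative)    */
        #define DX  1.0e-5      /* cell size, m (1 mm laminate total)  */

        int main(void)
        {
            double rho[N], efield[N + 1];

            /* Invented deposited-charge profile, C/m^3: electrons
             * stopped near one-third depth. */
            for (int i = 0; i < N; i++)
                rho[i] = (i > 25 && i < 40) ? -1.0e-4 : 0.0;

            /* Integrate Gauss's law dE/dx = rho/eps from the grounded
             * face (E = 0 there, a simplification). */
            efield[0] = 0.0;
            for (int i = 0; i < N; i++)
                efield[i + 1] = efield[i] + rho[i] * DX / EPS;

            /* Report the peak magnitude for comparison against the
             * material's dielectric strength. */
            double peak = 0.0;
            for (int i = 0; i <= N; i++)
                if (efield[i] < peak) peak = efield[i];
            printf("peak field magnitude: %.3e V/m\n", -peak);
            return 0;
        }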

  10. Assessment of the Effectiveness of the Educational Environment Supported by Computer Aided Presentations at Primary School Level

    ERIC Educational Resources Information Center

    Kose, Erdogan

    2009-01-01

    The objective of this study is to assess the effectiveness of the educational environment supported by computer aided presentations at primary school. The effectiveness of the environment has been evaluated in terms of students' learning and remembering what they have learnt. In the study, we have compared experimental group and control group in…

  11. HeNCE: A Heterogeneous Network Computing Environment

    DOE PAGES

    Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...

    1994-01-01

    Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.

  12. Quantum robots plus environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.

    1998-07-23

    A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far, oracles, data bases, and quantum registers, are seen to be special cases of environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.

  13. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    NASA Astrophysics Data System (ADS)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than other less rigorous models; then some applications that use our distributive computing solution are shown. Topics covered describe why ProLE solutions are needed from an economic and technical aspect, a high-level discussion of how the distributive system works, speed benchmarking, and finally, a brief survey of applications including advanced aberrations for lens sensitivity and flare studies, optical proximity correction for a bitcell, and an application that will allow evaluation of the potential of a design to have systematic failures during fabrication.

  14. Research on Influence of Cloud Environment on Traditional Network Security

    NASA Astrophysics Data System (ADS)

    Ming, Xiaobo; Guo, Jinhua

    2018-02-01

    Cloud computing is a symbol of the progress of the modern information network. Cloud computing provides a lot of convenience to Internet users, but it also brings a lot of risk to them. One of the main reasons for Internet users to choose cloud computing is its strong network security performance, which is also the cornerstone of cloud computing applications. This paper briefly explores the impact of the cloud environment on traditional network security and puts forward corresponding solutions.

  15. Argo: enabling the development of bespoke workflows and services for disease annotation.

    PubMed

    Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia

    2016-01-01

    Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo's capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V's User Interactive Track (IAT), we demonstrated and evaluated Argo's suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track's top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. In this work, we highlight Argo's support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo's potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2016. Published by Oxford University Press.

  16. Argo: enabling the development of bespoke workflows and services for disease annotation

    PubMed Central

    Batista-Navarro, Riza; Carter, Jacob; Ananiadou, Sophia

    2016-01-01

    Argo (http://argo.nactem.ac.uk) is a generic text mining workbench that can cater to a variety of use cases, including the semi-automatic annotation of literature. It enables its technical users to build their own customised text mining solutions by providing a wide array of interoperable and configurable elementary components that can be seamlessly integrated into processing workflows. With Argo's graphical annotation interface, domain experts can then make use of the workflows' automatically generated output to curate information of interest. With the continuously rising need to understand the aetiology of diseases as well as the demand for their informed diagnosis and personalised treatment, the curation of disease-relevant information from medical and clinical documents has become an indispensable scientific activity. In the Fifth BioCreative Challenge Evaluation Workshop (BioCreative V), there was substantial interest in the mining of literature for disease-relevant information. Apart from a panel discussion focussed on disease annotations, the chemical-disease relations (CDR) track was also organised to foster the sharing and advancement of disease annotation tools and resources. This article presents the application of Argo’s capabilities to the literature-based annotation of diseases. As part of our participation in BioCreative V’s User Interactive Track (IAT), we demonstrated and evaluated Argo’s suitability to the semi-automatic curation of chronic obstructive pulmonary disease (COPD) phenotypes. Furthermore, the workbench facilitated the development of some of the CDR track’s top-performing web services for normalising disease mentions against the Medical Subject Headings (MeSH) database. In this work, we highlight Argo’s support for developing various types of bespoke workflows ranging from ones which enabled us to easily incorporate information from various databases, to those which train and apply machine learning-based concept recognition models, through to user-interactive ones which allow human curators to manually provide their corrections to automatically generated annotations. Our participation in the BioCreative V challenges shows Argo’s potential as an enabling technology for curating disease and phenotypic information from literature. Database URL: http://argo.nactem.ac.uk PMID:27189607

  17. An editor for pathway drawing and data visualization in the Biopathways Workbench.

    PubMed

    Byrnes, Robert W; Cotter, Dawn; Maer, Andreia; Li, Joshua; Nadeau, David; Subramaniam, Shankar

    2009-10-02

    Pathway models serve as the basis for much of systems biology. They are often built using programs designed for the purpose. Constructing new models generally requires simultaneous access to experimental data of diverse types, to databases of well-characterized biological compounds and molecular intermediates, and to reference model pathways. However, few if any software applications provide all such capabilities within a single user interface. The Pathway Editor is a program written in the Java programming language that allows de-novo pathway creation and downloading of LIPID MAPS (Lipid Metabolites and Pathways Strategy) and KEGG lipid metabolic pathways, and of measured time-dependent changes to lipid components of metabolism. Accessed through Java Web Start, the program downloads pathways from the LIPID MAPS Pathway database (Pathway) as well as from the LIPID MAPS web server http://www.lipidmaps.org. Data arises from metabolomic (lipidomic), microarray, and protein array experiments performed by the LIPID MAPS consortium of laboratories and is arranged by experiment. Facility is provided to create, connect, and annotate nodes and processes on a drawing panel with reference to database objects and time course data. Node and interaction layout as well as data display may be configured in pathway diagrams as desired. Users may extend diagrams, and may also read and write data and non-lipidomic KEGG pathways to and from files. Pathway diagrams in XML format, containing database identifiers referencing specific compounds and experiments, can be saved to a local file for subsequent use. The program is built upon a library of classes, referred to as the Biopathways Workbench, that convert between different file formats and database objects. An example of this feature is provided in the form of read/construct/write access to models in SBML (Systems Biology Markup Language) contained in the local file system. Inclusion of access to multiple experimental data types and of pathway diagrams within a single interface, automatic updating through connectivity to an online database, and a focus on annotation, including reference to standardized lipid nomenclature as well as common lipid names, supports the view that the Pathway Editor represents a significant, practicable contribution to current pathway modeling tools.

  18. Enabling Automated Graph-based Search for the Identification and Characterization of Mesoscale Convective Complexes in Satellite Datasets through Integration with the Apache Open Climate Workbench

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Whitehall, K. D.; Mattmann, C. A.; Goodale, C. E.; Joyce, M.; Ramirez, P.; Zimdars, P.

    2014-12-01

    We detail how the Apache Open Climate Workbench (OCW) (recently open sourced by NASA JPL) was adapted to facilitate an ongoing study of Mesoscale Convective Complexes (MCCs) in West Africa and their contributions within the weather-climate continuum as it relates to climate variability. More than 400 MCCs occur annually over various locations on the globe. In West Africa, approximately one-fifth of that total occur during the summer months (June-November) alone and are estimated to contribute more than 50% of the seasonal rainfall amounts. Furthermore, in general the non-discriminatory socio-economic geospatial distribution of these features correlates with current and projected densely populated locations. As such, the convective nature of MCCs raises questions regarding their seasonal variability and frequency in current and future climates, amongst others. However, in spite of formal observation criteria for these features being established in 1980, these questions have remained comprehensively unanswered because of the untimely and subjective methods for identifying and characterizing MCCs, due to data-handling limitations. The main outcome of this work therefore documents how a graph-based search algorithm was implemented on top of the OCW stack with the ultimate goal of improving fully automated end-to-end identification and characterization of MCCs in high resolution observational datasets. Apache OCW was run as an open source project from inception, and we display how it was again utilized to advance understanding and knowledge within the above domain. The project was born out of refactored code donated by NASA JPL from the Earth science community's Regional Climate Model Evaluation System (RCMES), a joint project between the Joint Institute for Regional Earth System Science and Engineering (JIFRESSE), and a scientific collaboration between the University of California at Los Angeles (UCLA) and NASA JPL. The Apache OCW project was then integrated back into the donor code with the aim of more efficiently powering that project. Nonetheless, the object-oriented approach to creating a core set of libraries in Apache OCW has scaled the usability of the project beyond climate model evaluation, as displayed in the MCC use case detailed herewith.

  19. A distributed computing environment with support for constraint-based task scheduling and scientific experimentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.

    1997-04-01

    This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies show that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.

  20. Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2002-01-01

    Recently, networked and cluster computation have become very popular. This paper is an introduction to aCe C, a new C-based parallel language for architecture-adaptive programming. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool for teaching parallel programming. In this paper, we focus on some fundamental features of aCe C.

  1. Computational path planner for product assembly in complex environments

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning; however, their performance may degrade significantly in environments with complex product structure, narrow passages, or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed for each phase to address the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the false collisions of conventional methods and relaxes the geometric constraint when a part must be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree using its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments; with extension values assigned to each tree node and extension schemes applied, the tree adapts its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out and comparisons are made between conventional path planning algorithms and the presented ones. The results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
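
    For readers unfamiliar with the RRT family, a minimal textbook RRT in 2D is sketched below; it shows the basic extend-toward-sample loop that the paper's history-based and adaptive variants refine. The workspace bounds, step size, and goal bias are illustrative assumptions, and collision_free stands in for a collision detector such as the paper's ray-test method.

    ```python
    # Minimal 2D RRT sketch (textbook form, not the paper's refined variants).
    import math, random

    def rrt(start, goal, collision_free, step=0.5, iters=5000, goal_tol=0.5):
        nodes = [start]
        parent = {0: None}
        for _ in range(iters):
            # 5% goal bias; workspace assumed to be the square [0, 10]^2
            sample = goal if random.random() < 0.05 else (
                random.uniform(0, 10), random.uniform(0, 10))
            # nearest tree node to the sample
            i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
            d = math.dist(nodes[i], sample)
            if d == 0:
                continue
            if d > step:  # extend one step toward the sample
                px, py = nodes[i]
                new = (px + step * (sample[0] - px) / d,
                       py + step * (sample[1] - py) / d)
            else:
                new = sample
            if collision_free(nodes[i], new):
                parent[len(nodes)] = i
                nodes.append(new)
                if math.dist(new, goal) <= goal_tol:  # goal reached: backtrack
                    path, k = [], len(nodes) - 1
                    while k is not None:
                        path.append(nodes[k])
                        k = parent[k]
                    return path[::-1]
        return None

    # Trivial usage in free space (no obstacles):
    print(rrt((0.0, 0.0), (9.0, 9.0), lambda a, b: True))
    ```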

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heroux, Michael; Lethin, Richard

    Programming models and environments play an essential role in high performance computing, enabling the conception, design, implementation and execution of science and engineering application codes. Programmer productivity is strongly influenced by the effectiveness of our programming models and environments, as is software sustainability, since our codes have lifespans measured in decades. The advent of new computing architectures, increased concurrency, concerns for resilience, and the increasing demands for high-fidelity, multi-physics, multi-scale and data-intensive computations mean that we have new challenges to address as part of our fundamental R&D requirements. Fortunately, we also have new tools and environments that make design, prototyping and delivery of new programming models easier than ever. The combination of new and challenging requirements and new, powerful toolsets enables significant synergies for the next generation of programming models and environments R&D. This report presents the topics discussed and results from the 2014 DOE Office of Science Advanced Scientific Computing Research (ASCR) Programming Models & Environments Summit, and subsequent discussions among the summit participants and contributors to topics in this report.

  3. The CSM testbed software system: A development environment for structural analysis methods on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Gillian, Ronnie E.; Lotts, Christine G.

    1988-01-01

    The Computational Structural Mechanics (CSM) Activity at Langley Research Center is developing methods for structural analysis on modern computers. To facilitate that research effort, an applications development environment has been constructed to insulate the researcher from the many computer operating systems of a widely distributed computer network. The CSM Testbed development system was ported to the Numerical Aerodynamic Simulator (NAS) Cray-2 at the Ames Research Center to provide a high-end computational capability. This paper describes the implementation experiences, the resulting capability, and future directions for the Testbed on supercomputers.

  4. Effectiveness of Kanban Approaches in Systems Engineering within Rapid Response Environments

    DTIC Science & Technology

    2012-01-01

    Front-matter and page-header extraction residue from Procedia Computer Science (2012). Recoverable fragments: the paper title, "Effectiveness of kanban approaches in systems engineering within rapid response environments," by Richard Turner; a session heading, "New Challenges in Systems..."; and a remark on the inefficient use of resources and the move away from "one step to glory" system initiatives.

  5. An Integrated Computer Modeling Environment for Regional Land Use, Air Quality, and Transportation Planning

    DOT National Transportation Integrated Search

    1997-04-01

    The Land Use, Air Quality, and Transportation Integrated Modeling Environment (LATIME) represents an integrated approach to computer modeling and simulation of land use allocation, travel demand, and mobile source emissions for the Albuquerque, New Mexico area…

  6. Retrieving and Indexing Spatial Data in the Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Wang, Yonggang; Wang, Sheng; Zhou, Daliang

    In order to address the drawbacks of spatial data storage on common Cloud Computing platforms, we design and present a framework for retrieving, indexing, accessing and managing spatial data in the Cloud environment. An interoperable spatial data object model is provided, based on OGC Simple Feature coding rules such as Well Known Binary (WKB) and Well Known Text (WKT), and classic spatial indexing algorithms like the Quad-Tree and R-Tree are re-designed for the Cloud Computing environment. Finally, we develop a prototype based on Google App Engine to implement the proposed model.
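
    One common way to make a key-value cloud datastore spatially searchable, sketched here as a hedged illustration rather than the authors' implementation, is to encode each feature's quadtree cell path as a string key so that a key-prefix range scan retrieves spatially adjacent features:

    ```python
    # Hedged sketch: encode a point's quadtree cell path as a string key so a
    # key-value datastore can range-scan nearby features by shared key prefix.
    def quadtree_key(lon, lat, depth=16, bounds=(-180.0, -90.0, 180.0, 90.0)):
        minx, miny, maxx, maxy = bounds
        key = []
        for _ in range(depth):
            midx, midy = (minx + maxx) / 2, (miny + maxy) / 2
            # quadrant digit: bit 1 = east of midline, bit 2 = north of midline
            quad = (2 if lat >= midy else 0) + (1 if lon >= midx else 0)
            key.append(str(quad))
            minx, maxx = (midx, maxx) if lon >= midx else (minx, midx)
            miny, maxy = (midy, maxy) if lat >= midy else (miny, midy)
        return "".join(key)

    # Nearby points share a key prefix (same enclosing quadtree cells):
    print(quadtree_key(2.35, 48.86)[:8])   # Paris
    print(quadtree_key(2.29, 48.87)[:8])   # nearby point, identical prefix
    ```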

  7. Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PSICES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.

  8. The New Learning Ecology of One-to-One Computing Environments: Preparing Teachers for Shifting Dynamics and Relationships

    ERIC Educational Resources Information Center

    Spires, Hiller A.; Oliver, Kevin; Corn, Jenifer

    2012-01-01

    Despite growing research and evaluation results on one-to-one computing environments, how these environments affect learning in schools remains underexamined. The purpose of this article is twofold: (a) to use a theoretical lens, namely a new learning ecology, to frame the dynamic changes as well as challenges that are introduced by a one-to-one…

  9. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.

  10. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
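
    Of the six heuristics compared, Min-min is easy to state compactly. The sketch below is a generic textbook rendering, not the authors' code: it repeatedly assigns the task with the smallest minimum completion time to the machine achieving it, where etc[i][j] is a hypothetical expected-time-to-compute matrix.

    ```python
    # Generic Min-min scheduling sketch (one of the six compared heuristics).
    def min_min(etc, n_machines):
        ready = [0.0] * n_machines            # machine-available times
        unassigned = set(range(len(etc)))
        schedule = {}
        while unassigned:
            best = None                       # (completion_time, task, machine)
            for t in unassigned:
                m = min(range(n_machines), key=lambda j: ready[j] + etc[t][j])
                ct = ready[m] + etc[t][m]
                if best is None or ct < best[0]:
                    best = (ct, t, m)
            ct, t, m = best
            schedule[t] = m                   # commit the winning assignment
            ready[m] = ct
            unassigned.remove(t)
        return schedule, max(ready)           # assignment and makespan

    etc = [[3, 5], [2, 4], [6, 1]]            # 3 tasks x 2 machines
    print(min_min(etc, 2))                    # -> ({2: 1, 1: 0, 0: 0}, 5.0)
    ```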

  11. Design requirements for ubiquitous computing environments for healthcare professionals.

    PubMed

    Bång, Magnus; Larsson, Anders; Eriksson, Henrik

    2004-01-01

    Ubiquitous computing environments can support clinical administrative routines in new ways. The aim of such computing approaches is to enhance routine physical work; it is therefore important to identify specific design requirements. We studied healthcare professionals in an emergency room and developed the computer-augmented environment NOSTOS to support teamwork in that setting. NOSTOS uses digital pens and paper-based media as the primary input interface for data capture and as a means of controlling the system. NOSTOS also includes a digital desk, walk-up displays, and sensor technology that allow the system to track documents and activities in the workplace. We propose a set of requirements and discuss the value of tangible user interfaces for healthcare personnel. Our results suggest that the key requirements are flexibility in system usage and seamless integration between digital and physical components. We also discuss how ubiquitous computing approaches like NOSTOS can be beneficial in the medical workplace.

  12. ALR - Laser altimeter for the ASTER deep space mission. Simulated operation above a surface with crater

    NASA Astrophysics Data System (ADS)

    de Brum, A. G. V.; da Cruz, F. C.; Hetem, A., Jr.

    2015-10-01

    To assist in the investigation of the triple asteroid system 2001-SN263, the deep space mission ASTER will carry a laser altimeter onboard. The instrument is named ALR and its development is now in progress. To support the instrument design, and with a view to creating software to control the instrument, a package of computer programs was produced to simulate the operation of a pulsed laser altimeter whose operating principle is based on measuring the time of flight of the travelling pulse. This simulator, called ALR_Sim, produces the return signal expected when laser pulses are fired toward a target, reflect from it, and return to the instrument's detector. The program was successfully tested on some of the most common situations expected, and it now constitutes the main workbench for creating and testing the control software to be embarked in the ALR. In addition, the simulator is an important tool for developing the ground software used to process and analyze data received from the instrument. This work presents results for the special case of a surface with a crater, along with simulation of instrument operation above this type of terrain. The study shows that comparing the waveform returned after reflection of the laser pulse from a cratered surface with the return signal expected from a flat, homogeneous surface is a useful method for extracting terrain details.
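
    For context, the time-of-flight principle named here reduces to the standard pulsed-altimetry range equation (general physics background, not a formula quoted from the paper), where Δt is the measured round-trip time of the pulse and c the speed of light:

    ```latex
    R = \frac{c\,\Delta t}{2}
    ```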

  13. Biomechanics of cervical tooth region and noncarious cervical lesions of different morphology; three-dimensional finite element analysis

    PubMed Central

    Jakupović, Selma; Anić, Ivica; Ajanović, Muhamed; Korać, Samra; Konjhodžić, Alma; Džanković, Aida; Vuković, Amra

    2016-01-01

    Objective: The present study aims to investigate the influence of the presence and shape of cervical lesions on the biomechanical behavior of the mandibular first premolar, subjected to two types of occlusal loading, using the three-dimensional (3D) finite element method (FEM). Materials and Methods: 3D models of the mandibular premolar were created from a micro computed tomography X-ray image: a model of the sound mandibular premolar, a model with a wedge-shaped cervical lesion (V lesion), and a model with a saucer-shaped cervical lesion (U lesion). Using FEM, straining of the tooth tissues under functional and nonfunctional occlusal loading of 200 N was analyzed. The following software was used for the analysis: CTAn 1.10 and ANSYS Workbench 14.0. The results are presented as von Mises stress. Results: Calculated stress values in all tooth structures are higher under nonfunctional occlusal loading, while functional loading resulted in a homogeneous stress distribution. Nonfunctional loading produced significant stress (over 50 MPa) in the cervical area of the sound tooth model as well as in the sub-superficial layer of the enamel. The highest stress concentration on the models with lesions occurs at the apex of the V-shaped lesion, while stress in the saucer-shaped U lesion is significantly lower and distributed over a wider area. Conclusion: The type of occlusal loading has the biggest influence on cervical stress intensity. The geometric shape of an existing lesion is very important in the distribution of internal stress. Compared to U-shaped lesions, V-shaped lesions show significantly higher stress concentrations under load, and exposure to such stress would lead to lesion progression. PMID:27403064
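
    For reference, the von Mises stress reported above is the standard scalar combination of the principal stresses (textbook definition, not a formula from the paper):

    ```latex
    \sigma_{vM} = \sqrt{\tfrac{1}{2}\left[(\sigma_1-\sigma_2)^2
                + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2\right]}
    ```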

  14. Virtual Instrumentation for a Fiber-Optics-Based Artificial Nerve

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Kyaw, Thet Mon; Griffin, DeVon (Technical Monitor)

    2001-01-01

    A LabVIEW-based computer interface for fiber-optic artificial nerves has been devised as a Master's thesis project. The project uses the outputs of wavelength-multiplexed optical fiber sensors (artificial nerves), which produce dense optical data streams for physical measurements. A potential advantage of optical fiber sensors for sensory function restoration is that well-defined WDM-modulated signals can be transmitted to and from the sensing region, allowing networked units to replace low-level nerve functions for persons desiring "intelligent artificial limbs." Various fiber-optic sensors can be designed with high sensitivity and can be interfaced with a wide range of devices, including miniature shielded electrical conversion units. Our Virtual Instrument (VI) interface software was developed in LabVIEW (Laboratory Virtual Instrument Engineering Workbench) and configured to arrange and encode the data so as to develop an intelligent response in the form of encoded, digitized signal outputs. The architectural layout of the artificial nervous system is such that touch stimuli at different fiber-optic nerve points correspond to gratings of distinct resonant wavelength and physical location along the optical fiber. Thus, when an automated tunable diode laser scans the wavelength spectrum of the artificial nerve, it triggers responses encoded with the different touch stimuli by way of wavelength shifts in the reflected Bragg resonances. The reflected light is detected and the resulting analog signal is fed into the ADC1 board and DAQ card. Finally, the software allows the experimenter to set the response range during data acquisition.
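
    The wavelength encoding described above follows the standard fiber Bragg grating relation (general optics background, not taken from the thesis): each grating reflects at its Bragg wavelength, set by the fiber's effective refractive index and the grating period, and strain at a touch point shifts that resonance.

    ```latex
    \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
    ```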

  15. GALEN: a third generation terminology tool to support a multipurpose national coding system for surgical procedures.

    PubMed

    Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H

    2000-09-01

    Generalised Architecture for Languages, Encyclopedias and Nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics of medicine, allowing computer processing and multiple reuse as well as natural language understanding applications, to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures, named CCAM, in a minority-language country, France. On one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we supported the traditional, labour-intensive process of creating a new medical coding system with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW (classification workbench) to process French professional medical language rubrics, produced by the national colleges of surgeons acting as domain experts, into intermediate dissections and then into the GRAIL reference ontology model representation. From this language-independent concept model representation, on one hand, we generated controlled French natural language with the LNAT natural language generator to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the CLAW classification manager proved powerful for retrieving the initial domain-expert rubric list with different categories of concepts (second generation) within a semantically structured representation (third generation) bridged to the detailed terminology of the electronic patient record.

  16. Establishment of sequential software processing for a biomechanical model of mandibular reconstruction with custom-made plate.

    PubMed

    Li, Peng; Tang, Youchao; Li, Jia; Shen, Longduo; Tian, Weidong; Tang, Wei

    2013-09-01

    The aim of this study is to describe the sequential software processing of a computed tomography (CT) dataset for reconstructing a finite element analysis (FEA) mandibular model with a custom-made plate, and to provide a theoretical basis for clinical use of this reconstruction method. A CT scan was performed on one patient with mandibular continuity defects. The CT dataset in DICOM format was imported into Mimics 10.0, in which a three-dimensional (3-D) model of the facial skeleton was reconstructed and the mandible segmented out. With Geomagic Studio 11.0, a custom-made plate and nine virtual screws were designed. All parts of the reconstructed mandible were converted into NURBS and saved in IGES format for import into Pro/E 4.0. After Boolean operations and assembly, the model was transferred to ANSYS Workbench 12.0. Finally, after applying the boundary conditions and material properties, the analysis was performed. As a result, a 3-D FEA model was successfully developed using the software packages above. The stress-strain distribution indicated sound biomechanical performance of the reconstructed mandible under normal occlusion load, with no areas of stress concentration. The von Mises stress in all parts of the model, from a maximum of 50.9 MPa to a minimum of 0.1 MPa, was lower than the ultimate tensile strength. In conclusion, the described strategy can quickly and successfully produce a biomechanical model of a reconstructed mandible with a custom-made plate. Using this FEA foundation, the custom-made plate may be improved for an optimal clinical outcome. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Graphical User Interface for a Dual-Module EMCCD X-ray Detector Array.

    PubMed

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2011-03-16

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000× to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2k×1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.

  18. Asian Citrus Psyllid Expression Profiles Suggest Candidatus Liberibacter Asiaticus-Mediated Alteration of Adult Nutrition and Metabolism, and of Nymphal Development and Immunity

    PubMed Central

    He, Ruifeng; Nelson, William; Yin, Guohua; Cicero, Joseph M.; Willer, Mark; Kim, Ryan; Kramer, Robin; May, Greg A.; Crow, John A.; Soderlund, Carol A.; Gang, David R.; Brown, Judith K.

    2015-01-01

    The Asian citrus psyllid (ACP) Diaphorina citri Kuwayama (Hemiptera: Psyllidae) is the insect vector of the fastidious bacterium Candidatus Liberibacter asiaticus (CLas), the causal agent of citrus greening disease, or Huanglongbing (HLB). The widespread invasiveness of the psyllid vector and HLB in citrus trees worldwide has underscored the need for non-traditional approaches to manage the disease. One tenable solution is through the deployment of RNA interference technology to silence protein-protein interactions essential for ACP-mediated CLas invasion and transmission. To identify psyllid interactor-bacterial effector combinations associated with psyllid-CLas interactions, cDNA libraries were constructed from CLas-infected and CLas-free ACP adults and nymphs, and analyzed for differential expression. Library assemblies comprised 24,039,255 reads and yielded 45,976 consensus contigs. They were annotated (UniProt), classified using Gene Ontology, and subjected to in silico expression analyses using the Transcriptome Computational Workbench (TCW) (http://www.sohomoptera.org/ACPPoP/). Functional-biological pathway interpretations were carried out using the Kyoto Encyclopedia of Genes and Genomes databases. Differentially expressed contigs in adults and/or nymphs represented genes and/or metabolic/pathogenesis pathways involved in adhesion, biofilm formation, development-related, immunity, nutrition, stress, and virulence. Notably, contigs involved in gene silencing and transposon-related responses were documented in a psyllid for the first time. This is the first comparative transcriptomic analysis of ACP adults and nymphs infected and uninfected with CLas. The results provide key initial insights into host-parasite interactions involving CLas effectors that contribute to invasion-virulence, and to host nutritional exploitation and immune-related responses that appear to be essential for successful ACP-mediated circulative, propagative CLas transmission. PMID:26091106

  19. Using Additive Manufacturing to Optimize FLiBe Coolant Blanket in Fusion Reactors

    NASA Astrophysics Data System (ADS)

    Fry, Vincent Michael

    Fusion reactors have often been hailed as the holy grail of clean energy generation, though a power-generating reactor has never been built due to a multitude of limiting factors. One such factor is the immense 12-15 MW/m² heat flux experienced by the inner wall of the reactor. Multiple groups have proposed tungsten swirl tubes to withstand the heat generated within the reactor core. The primary focus of this investigation is to parameterize this 'first wall' interior structure to determine the highest achievable heat transfer coefficient across the many tungsten configurations enabled by additive manufacturing. Two general tube structures were considered: an orthogonal three-dimensional mesh of various diameters and spacings, and a swirl tube geometry with varying 'tape' thicknesses. The proposed coolant is FLiBe (2LiF-BeF2), chosen for its high specific heat capacity and its ability to breed tritium, the fuel for the reactor. The study combined theoretical calculations; computational fluid dynamics and conjugate heat transfer simulations in ANSYS Workbench; and an experimental setup to confirm the pressure drop along the tube. It was determined that heat transfer coefficients upwards of 60,000 W/m²K were readily achievable, keeping the first-wall temperature around 1300 K. A multitude of designs proved feasible given the pumping power restrictions, though the suggested design going forward is a swirl tube with 2 mm 'tape' thickness and 3 m/s inlet velocity. Simulated pressure drop with water was accurate to within 30% of experimentally measured values, lending confidence to the credibility of the results.
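
    As a plausibility check on the quoted figures, the standard Newton-cooling relation (not a formula from the thesis) links the wall heat flux q'' and film coefficient h to the wall-to-coolant temperature difference; the reported values imply a temperature rise consistent with a first wall held near 1300 K:

    ```latex
    \Delta T = \frac{q''}{h}
             = \frac{12\times 10^{6}\ \mathrm{W/m^{2}}}{6\times 10^{4}\ \mathrm{W/m^{2}K}}
             = 200\ \mathrm{K}
    ```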

  20. Effect of Gender on Computer Use and Attitudes of College Seniors

    NASA Astrophysics Data System (ADS)

    McCoy, Leah P.; Heafner, Tina L.

    Male and female students have historically had different computer attitudes and levels of computer use. These equity issues are of interest to researchers and practitioners who seek to understand why a digital divide exists between men and women. In this study, these questions were examined in an intensive computing environment in which all students at one university were issued identical laptop computers and used them extensively for 4 years. Self-reported computer use was examined for effects of gender. Attitudes toward computers were also assessed and compared for male and female students. The results indicated that when the technological environment was institutionally equalized for male and female students, many traditional findings of gender differences were not evident.

  1. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, following detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  2. Understanding and preventing computer vision syndrome.

    PubMed

    Loh, Ky; Redd, Sc

    2008-01-01

    The invention of the computer and advances in information technology have revolutionized and benefited society, but at the same time have caused usage-related symptoms such as ocular strain, irritation, redness, dryness, blurred vision and double vision. This cluster of symptoms is known as computer vision syndrome, characterized by the visual symptoms that result from interaction with a computer display or its environment. Three major mechanisms lead to computer vision syndrome: the extraocular mechanism, the accommodative mechanism and the ocular surface mechanism. Display factors such as brightness, resolution, glare and image quality are all known contributors to computer vision syndrome. Prevention is the most important strategy in managing computer vision syndrome. Modification of the ergonomics of the working environment, patient education and proper eye care are crucial in managing computer vision syndrome.

  3. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  4. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  5. 40 CFR 1042.115 - Other requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Other requirements. 1042.115 Section 1042.115 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS...). (3) The onboard computer log must record in nonvolatile computer memory all incidents of engine...

  6. Client-Server: What Is It and Are We There Yet?

    ERIC Educational Resources Information Center

    Gershenfeld, Nancy

    1995-01-01

    Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…

  7. Using Python on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Website excerpt (fragmentary): notes that the default Python installation was not designed for use in a shared computing environment and gives an example of creating a new Python environment; for instance, an environment.yml file can be created on the developer's laptop and used on the Peregrine system.

  8. 40 CFR 300.7 - Computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 29 2013-07-01 2013-07-01 false Computation of time. 300.7 Section 300.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY...

  9. 40 CFR 300.7 - Computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 28 2011-07-01 2011-07-01 false Computation of time. 300.7 Section 300.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY...

  10. 40 CFR 300.7 - Computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 27 2010-07-01 2010-07-01 false Computation of time. 300.7 Section 300.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS NATIONAL OIL AND HAZARDOUS SUBSTANCES POLLUTION CONTINGENCY...

  11. Secure data exchange between intelligent devices and computing centers

    NASA Astrophysics Data System (ADS)

    Naqvi, Syed; Riguidel, Michel

    2005-03-01

    The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has raised the stakes for the conception of computing-intensive environments that use intelligent devices as their interface with the external world. These smart devices serve as data gateways for the computing units and are employed in highly volatile environments where the secure exchange of data between the devices and their computing centers is of paramount importance. Moreover, their mission-critical applications require dependable measures against attacks such as denial of service (DoS), eavesdropping, and masquerading. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called a 'computational grid'. The notion of an infosphere is used to define a digital space made up of persistent and volatile assets in an often indefinite geographical space. We study different infospheres and present general trends and issues in the security of such technology-rich and intelligent environments. These environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before, so it is preferable to build in the ability to deal with these systems uniformly. As a solution, we propose the virtualization of security services, addressing the difficult problems of implementing and maintaining trust on the one hand, and of security management in heterogeneous infrastructure on the other.

  12. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    NASA Astrophysics Data System (ADS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-09-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers.

  13. A visualization environment for supercomputing-based applications in computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed, and a brief discussion of the software environment is included. The paper concludes by summarizing certain observations we have made regarding the implementation of such visualization environments.

  14. A Grid Infrastructure for Supporting Space-based Science Operations

    NASA Technical Reports Server (NTRS)

    Bradford, Robert N.; Redman, Sandra H.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    Emerging technologies for computational grid infrastructures have the potential for revolutionizing the way computers are used in all aspects of our lives. Computational grids are currently being implemented to provide a large-scale, dynamic, and secure research and engineering environments based on standards and next-generation reusable software, enabling greater science and engineering productivity through shared resources and distributed computing for less cost than traditional architectures. Combined with the emerging technologies of high-performance networks, grids provide researchers, scientists and engineers the first real opportunity for an effective distributed collaborative environment with access to resources such as computational and storage systems, instruments, and software tools and services for the most computationally challenging applications.

  15. Dome: Distributed Object Migration Environment

    DTIC Science & Technology

    1994-05-01

    Dome: Distributed Object Migration Environment. Adam Beguelin, Erik Seligman, Michael Starkey. Report CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, May 1994. Abstract (fragment): Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the...

  16. Sensing and perception: Connectionist approaches to subcognitive computing

    NASA Technical Reports Server (NTRS)

    Skrzypek, J.

    1987-01-01

    New approaches to machine sensing and perception are presented. The motivation for cross-disciplinary studies of perception in terms of AI and the neurosciences is discussed. The question of computing-architecture granularity, as it relates to the global/local computation underlying perceptual function, is considered, and examples of two environments are given. Finally, examples of using one of the environments, UCLA PUNNS, to study neural architectures for visual function are presented.

  17. Human-Computer Interaction and Virtual Environments

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Compiler)

    1995-01-01

    The proceedings of the Workshop on Human-Computer Interaction and Virtual Environments are presented along with a list of attendees. The objectives of the workshop were to assess the state-of-technology and level of maturity of several areas in human-computer interaction and to provide guidelines for focused future research leading to effective use of these facilities in the design/fabrication and operation of future high-performance engineering systems.

  18. Finite Element Analysis of Walking Beam of a New Compound Adjustment Balance Pumping Unit

    NASA Astrophysics Data System (ADS)

    Wu, Jufei; Wang, Qian; Han, Yunfei

    2017-12-01

    In this paper, the walking beam of the new compound adjustment balance pumping unit is taken as the research target. A three-dimensional model is established in SolidWorks, and the loads and constraints are determined. ANSYS Workbench is used to analyze the tail section and the whole of the beam; the resulting stress and deformation meet the strength requirements. Finite element simulation and theoretical calculation of the moment about the beam's central axis are then carried out, and the simulation results are compared with those of the theoretical mechanics model to verify the correctness of the theoretical calculation. The finite element analysis proves consistent with the theoretical results, and the computed bending moment provides a theoretical reference for follow-up optimization and design research.

  19. Mathematical modeling of the stress-strain state of the outlet guide vane made of various materials

    NASA Astrophysics Data System (ADS)

    Grinev, M. A.; Anoshkin, A. N.; Pisarev, P. V.; Zuiko, V. Yu.; Shipunov, G. S.

    2016-11-01

    The present work is devoted to a detailed stress-strain analysis of a composite outlet guide vane (OGV) for aircraft engines, with a special focus on areas with twisted layers, where the initiation of high interlaminar stresses is most expected. Various polymer composite materials and reinforcing schemes are investigated. The lay-up scheme of the anisotropic plies and the fastening method are taken into account in the model. The numerical simulation is carried out by the finite element method (FEM) with the ANSYS Workbench software. It is shown that interlaminar shear stresses are the most dangerous. It is found that a balanced carbon fiber reinforced plastic (CFRP) with the [0°/±45°] reinforcing scheme provides a strength margin of two under working loads for the developed OGV.

  20. An expert system shell for inferring vegetation characteristics: Implementation of additional techniques (task E)

    NASA Technical Reports Server (NTRS)

    Harrison, P. Ann

    1992-01-01

    The NASA VEGetation Workbench (VEG) is a knowledge based system that infers vegetation characteristics from reflectance data. The VEG subgoal PROPORTION.GROUND.COVER has been completed and a number of additional techniques that infer the proportion ground cover of a sample have been implemented. Some techniques operate on sample data at a single wavelength. The techniques previously incorporated in VEG for other subgoals operated on data at a single wavelength so implementing the additional single wavelength techniques required no changes to the structure of VEG. Two techniques which use data at multiple wavelengths to infer proportion ground cover were also implemented. This work involved modifying the structure of VEG so that multiple wavelength techniques could be incorporated. All the new techniques were tested using both the VEG 'Research Mode' and the 'Automatic Mode.'
