Sample records for visualizing model-driven software

  1. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  2. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  3. Performance analysis and optimization of an advanced pharmaceutical wastewater treatment plant through a visual basic software tool (PWWT.VB).

    PubMed

    Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha

    2016-05-01

    A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and displays plant performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values corroborate well with extensive experimental investigations and remain consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. A low relative error (RE = 0.09) and a high Willmott d-index (d = 0.981) reflect a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for system design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
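
    The Willmott d-index quoted above is a standard agreement statistic (Willmott, 1981), so its computation can be sketched exactly; the sample arrays below are illustrative and are not data from the paper.

        import numpy as np

        def willmott_d(predicted, observed):
            # Willmott's index of agreement: 1.0 for perfect agreement,
            # approaching 0.0 as model-observation mismatch grows.
            p = np.asarray(predicted, dtype=float)
            o = np.asarray(observed, dtype=float)
            o_mean = o.mean()
            num = np.sum((p - o) ** 2)
            den = np.sum((np.abs(p - o_mean) + np.abs(o - o_mean)) ** 2)
            return 1.0 - num / den

        # Toy predicted/observed values (illustrative only)
        pred = [0.50, 0.63, 0.57, 0.69]
        obs = [0.52, 0.61, 0.58, 0.70]
        print(round(willmott_d(pred, obs), 3))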

  4. A Visual Basic simulation software tool for performance analysis of a membrane-based advanced water treatment plant.

    PubMed

    Pal, P; Kumar, R; Srivastava, N; Chaudhuri, J

    2014-02-01

    A Visual Basic simulation software package (WATTPPA) has been developed to analyse the performance of an advanced wastewater treatment plant. This user-friendly, menu-driven software is based on the dynamic mathematical model for an industrial wastewater treatment scheme that integrates chemical, biological and membrane-based unit operations. The software-predicted results corroborate very well with the experimental findings, as indicated by an overall correlation coefficient of the order of 0.99. The software permits pre-analysis and manipulation of input data, helps in optimization and exhibits performance of an integrated plant visually on a graphical platform. It allows quick performance analysis of the whole system as well as the individual units. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for hazardous wastewater.

  5. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty in handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system will help users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  6. Water Network Tool for Resilience v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    WNTR is a Python package designed to simulate and analyze resilience of water distribution networks. The software includes (see the sketch after this list):
    - Pressure-driven and demand-driven hydraulic simulation
    - Water quality simulation to track concentration, trace, and water age
    - Conditional controls to simulate power outages
    - Models to simulate pipe breaks
    - A wide range of resilience metrics
    - Analysis and visualization tools
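
    Because WNTR is distributed as an installable Python package, the workflow above can be sketched directly; class names follow the public WNTR documentation, and 'networks/Net3.inp' is a placeholder path for an EPANET-format input file.

        import wntr  # pip install wntr

        # Load an EPANET-format network model (placeholder path)
        wn = wntr.network.WaterNetworkModel('networks/Net3.inp')

        # Pressure-driven hydraulic simulation with the WNTR simulator
        sim = wntr.sim.WNTRSimulator(wn)
        results = sim.run_sim()

        # Node pressures over the simulation period, as a pandas DataFrame
        pressure = results.node['pressure']
        print(pressure.head())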

  7. Open cyberGIS software for geospatial research and education in the big data era

    NASA Astrophysics Data System (ADS)

    Wang, Shaowen; Liu, Yan; Padmanabhan, Anand

    CyberGIS represents an interdisciplinary field combining advanced cyberinfrastructure, geographic information science and systems (GIS), spatial analysis and modeling, and a number of geospatial domains to improve research productivity and enable scientific breakthroughs. It has emerged as a new-generation GIS that enables unprecedented advances in data-driven knowledge discovery, visualization and visual analytics, and collaborative problem solving and decision-making. This paper describes three open software strategies (open access, open source, and open integration) to serve various research and education purposes of diverse geospatial communities. These strategies have been implemented in a leading-edge cyberGIS software environment through three corresponding software modalities: CyberGIS Gateway, Toolkit, and Middleware, and have achieved broad and significant impacts.

  8. An Analysis of Category Management of Service Contracts

    DTIC Science & Technology

    2017-12-01

    management teams a way to make informed, data-driven decisions. Data-driven decisions derived from clustering not only align with Category... savings. Furthermore, this methodology provides a data-driven visualization to inform sound business decisions on potential Category Management... Category Management initiatives. The Maptitude software will allow future research to collect data and develop visualizations to inform Category

  9. Revel8or: Model Driven Capacity Planning Tool Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Liming; Liu, Yan; Bui, Ngoc B.

    2007-05-31

    Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.

  10. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    PubMed

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model, developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting a high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and built in the well-known Microsoft Excel environment, is likely to be very useful in successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.

  11. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    PubMed

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Language (DDL) and Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: human disease association network, drug-target interaction network and protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
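
    ProteoLens itself is a Java/SQL tool, but the two-phase idea the abstract describes (define ID-to-attribute association rules, then apply them as annotations for visual encoding) can be illustrated with a minimal Python sketch; the network, gene names, and expression values below are made up for illustration.

        import networkx as nx

        g = nx.Graph()
        g.add_edges_from([("TP53", "MDM2"), ("TP53", "BRCA1")])

        # Phase 1: an association rule mapping node IDs to a data attribute
        expression = {"TP53": 2.4, "MDM2": 0.7, "BRCA1": 1.1}

        # Phase 2: apply the rule, annotating nodes for later visual encoding
        nx.set_node_attributes(g, expression, name="expression")

        for node, attrs in g.nodes(data=True):
            print(node, attrs)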

  12. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions, because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  13. Model Driven Engineering

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
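
    A deliberately tiny sketch of the MDE idea described in this chapter, a model transformed into "the real thing": real MDA toolchains work with metamodels and transformation languages (e.g. QVT) rather than Python dictionaries, so this only mimics the shape of that pipeline.

        # A platform-independent "model" of a class (illustrative)
        model = {"class": "Sensor", "attributes": ["id", "reading"]}

        def transform(m):
            # Model-to-text transformation: emit executable source code
            fields = ", ".join(m["attributes"])
            assigns = "\n".join(f"        self.{a} = {a}" for a in m["attributes"])
            return (f"class {m['class']}:\n"
                    f"    def __init__(self, {fields}):\n{assigns}\n")

        source = transform(model)
        exec(source)            # the generated, executable software entity
        s = Sensor("s1", 3.2)   # noqa: F821 -- defined by exec above
        print(s.reading)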

  14. Transportable Applications Environment (TAE) Tenth Users' Conference

    NASA Technical Reports Server (NTRS)

    Rouff, Chris (Editor); Harris, Elfrieda (Editor); Yeager, Arleen (Editor)

    1993-01-01

    Conference proceedings are represented in graphic visual-aid form. Presentation and panel discussion topics include user experiences with C++ and Ada; the design and interaction of the user interface; the history and goals of TAE; commercialization and testing of TAE Plus; Computer-Human Interaction Models (CHIMES); data driven objects; item-to-item connections and object dependencies; and integration with other software. There follows a list of conference attendees.

  15. Evaluation of software maintainability with openEHR - a comparison of architectures.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, James R

    2014-11-01

    To assess whether it is easier to maintain a clinical information system developed using openEHR model-driven development versus mainstream methods. A new open source application (GastrOS) has been developed following openEHR's multi-level modelling approach using .Net/C#, based on the same requirements as an existing clinically used application developed using Microsoft Visual Basic and an Access database. Almost all the domain knowledge was embedded into the software code and data model in the latter. The same domain knowledge has been expressed as a set of openEHR Archetypes in GastrOS. We then introduced eight real-world change requests that had accumulated during live clinical usage, and implemented these in both systems while measuring time for various development tasks and change in software size for each change request. Overall it took half the time to implement changes in GastrOS. However, it was the more difficult application to modify for one change request, suggesting the nature of change is also important. It was not possible to implement changes by modelling only. Comparison of relative measures of time and software size change within each application highlights how architectural differences affected maintainability across change requests. The use of openEHR model-driven development can result in better software maintainability. The degree to which openEHR affects software maintainability depends on the extent and nature of domain knowledge involved in changes. Although we used relative measures for time and software size, confounding factors could not be totally excluded, as a controlled study design was not feasible.

  16. Open Source Next Generation Visualization Software for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Rinker, George

    2016-01-01

    Mission control is evolving quickly, driven by the requirements of new missions and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).

  17. Visualization Skills: A Prerequisite to Advanced Solid Modeling

    ERIC Educational Resources Information Center

    Gow, George

    2007-01-01

    Many educators believe that solid modeling software has made teaching two- and three-dimensional visualization skills obsolete. They claim that the visual tools built into the solid modeling software serve as a replacement for the CAD operator's personal visualization skills. They also claim that because solid modeling software can produce…

  18. Visual Target Tracking on the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Kim, Won; Biesiadecki, Jeffrey; Ali, Khaled

    2008-01-01

    Visual target tracking (VTT) software has been incorporated into Release 9.2 of the Mars Exploration Rover (MER) flight software, now running aboard the rovers Spirit and Opportunity. In the VTT operation (see figure), the rover is driven in short steps between stops and, at each stop, still images are acquired by actively aimed navigation cameras (navcams) on a mast on the rover (see artistic rendition). The VTT software processes the digitized navcam images so as to track a target reliably and to make it possible to approach the target accurately to within a few centimeters over a 10-m traverse.
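
    The MER flight software is not public, but the generic step it describes, re-acquiring a target template in a new image after the rover moves, can be sketched with OpenCV's normalized cross-correlation; the file names and target rectangle below are placeholders.

        import cv2

        # Placeholder file names for images from two successive stops
        prev_img = cv2.imread('stop1.png', cv2.IMREAD_GRAYSCALE)
        next_img = cv2.imread('stop2.png', cv2.IMREAD_GRAYSCALE)

        # Target selected in the image from the previous stop (illustrative)
        x, y, w, h = 120, 90, 40, 40
        template = prev_img[y:y + h, x:x + w]

        # Normalized cross-correlation relocates the target in the new image
        scores = cv2.matchTemplate(next_img, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        print(f"target re-acquired at {max_loc}, score {max_val:.2f}")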

  19. CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010

    DTIC Science & Technology

    2010-11-01

    Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by... Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process... succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path

  20. 3DVEM Software Modules for Efficient Management of Point Clouds and Photorealistic 3D Models

    NASA Astrophysics Data System (ADS)

    Fabado, S.; Seguí, A. E.; Cabrelles, M.; Navarro, S.; García-De-San-Miguel, D.; Lerma, J. L.

    2013-07-01

    Cultural heritage managers in general, and information users in particular, are not usually accustomed to dealing with high-technology hardware and software. On the contrary, information providers of metric surveys are most of the time applying the latest developments in real-life conservation and restoration projects. This paper addresses the software issue of handling and managing either 3D point clouds or (photorealistic) 3D models to bridge the gap between information users and information providers as regards the management of information which users and providers share as a tool for decision-making, analysis, visualization and management. There are not many viewers specifically designed to handle, manage and easily create animations of architectural and/or archaeological 3D objects, monuments and sites, among others. 3DVEM - 3D Viewer, Editor & Meter software will be introduced to the scientific community, as well as 3DVEM - Live and 3DVEM - Register. The advantages of managing projects with both sets of data, 3D point clouds and photorealistic 3D models, will be introduced. Different visualizations of true documentation projects in the fields of architecture, archaeology and industry will be presented. Emphasis will be placed on highlighting the features of new user-friendly software to manage virtual projects. Furthermore, the ease of creating controlled interactive animations (both walk-through and fly-through) by the user, either on-the-fly or as a traditional movie file, will be demonstrated through 3DVEM - Live.

  1. Experiences in Teaching a Graduate Course on Model-Driven Software Development

    ERIC Educational Resources Information Center

    Tekinerdogan, Bedir

    2011-01-01

    Model-driven software development (MDSD) aims to support the development and evolution of software intensive systems using the basic concepts of model, metamodel, and model transformation. In parallel with the ongoing academic research, MDSD is more and more applied in industrial practices. After being accepted both by a broad community of…

  2. Consistent Evolution of Software Artifacts and Non-Functional Models

    DTIC Science & Technology

    2014-11-14

    induce bad software performance)? 15. SUBJECT TERMS: EOARD, Nano particles, Photo-Acoustic Sensors, Model-Driven Engineering (MDE), Software Performance... Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy. Email: vittorio.cortellessa@univaq.it. Web: http://www.di.univaq.it/cortelle/ Phone... Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. For sake of readability of the

  3. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over the image processing and the display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of its functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  4. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Alam, Salim; Pinkney, Paul

    1992-01-01

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

  5. Explicet: graphical user interface software for metadata-driven management, analysis and visualization of microbiome data.

    PubMed

    Robertson, Charles E; Harris, J Kirk; Wagner, Brandie D; Granger, David; Browne, Kathy; Tatem, Beth; Feazel, Leah M; Park, Kristin; Pace, Norman R; Frank, Daniel N

    2013-12-01

    Studies of the human microbiome, and microbial community ecology in general, have blossomed of late and are now a burgeoning source of exciting research findings. Along with the advent of next-generation sequencing platforms, which have dramatically increased the scope of microbiome-related projects, several high-performance sequence analysis pipelines (e.g. QIIME, MOTHUR, VAMPS) are now available to investigators for microbiome analysis. The subject of our manuscript, the graphical user interface-based Explicet software package, fills a previously unmet need for a robust, yet intuitive means of integrating the outputs of the software pipelines with user-specified metadata and then visualizing the combined data.
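
    The integration step the abstract describes, joining pipeline outputs with user-specified metadata before visualization, can be sketched in a few lines of pandas; the taxa, samples, and column names are illustrative and are not Explicet's own format.

        import pandas as pd

        counts = pd.DataFrame({
            "sample": ["S1", "S2", "S3"],
            "Firmicutes": [120, 80, 200],
            "Bacteroidetes": [60, 150, 40],
        })
        metadata = pd.DataFrame({
            "sample": ["S1", "S2", "S3"],
            "group": ["control", "case", "case"],
        })

        # Join counts with metadata, then summarize relative abundance by group
        merged = counts.merge(metadata, on="sample")
        taxa = ["Firmicutes", "Bacteroidetes"]
        merged[taxa] = merged[taxa].div(merged[taxa].sum(axis=1), axis=0)
        print(merged.groupby("group")[taxa].mean())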

  6. Students' Different Understandings of Class Diagrams

    ERIC Educational Resources Information Center

    Boustedt, Jonas

    2012-01-01

    The software industry needs well-trained software designers and one important aspect of software design is the ability to model software designs visually and understand what visual models represent. However, previous research indicates that software design is a difficult task to many students. This article reports empirical findings from a…

  7. A generic open-source software framework supporting scenario simulations in bioterrorist crises.

    PubMed

    Falenski, Alexander; Filter, Matthias; Thöns, Christian; Weiser, Armin A; Wigger, Jan-Frederik; Davis, Matthew; Douglas, Judith V; Edlund, Stefan; Hu, Kun; Kaufman, James H; Appel, Bernd; Käsbohrer, Annemarie

    2013-09-01

    Since the 2001 anthrax attack in the United States, awareness of threats originating from bioterrorism has grown. This led internationally to increased research efforts to improve knowledge of and approaches to protecting human and animal populations against the threat from such attacks. A collaborative effort in this context is the extension of the open-source Spatiotemporal Epidemiological Modeler (STEM) simulation and modeling software for agro- or bioterrorist crisis scenarios. STEM, originally designed to enable community-driven public health disease models and simulations, was extended with new features that enable integration of proprietary data as well as visualization of agent spread along supply and production chains. STEM now provides a fully developed open-source software infrastructure supporting critical modeling tasks such as ad hoc model generation, parameter estimation, simulation of scenario evolution, estimation of effects of mitigation or management measures, and documentation. This open-source software resource can be used free of charge. Additionally, STEM provides critical features like built-in worldwide data on administrative boundaries, transportation networks, or environmental conditions (e.g., rainfall, temperature, elevation, vegetation). Users can easily combine their own confidential data with built-in public data to create customized models of desired resolution. STEM also supports collaborative and joint efforts in crisis situations by extended import and export functionalities. In this article we demonstrate specifically those new software features implemented to accomplish STEM application in agro- or bioterrorist crisis scenarios.

  8. qPortal: A platform for data-driven biomedical research.

    PubMed

    Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven

    2018-01-01

    Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce large amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.

  9. 'Tagger' - a Mac OS X Interactive Graphical Application for Data Inference and Analysis of N-Dimensional Datasets in the Natural Physical Sciences.

    NASA Astrophysics Data System (ADS)

    Morse, P. E.; Reading, A. M.; Lueg, C.

    2014-12-01

    Pattern-recognition in scientific data is not only a computational problem but a human-observer problem as well. Human observation of, and interaction with, data visualization software can augment, select, interrupt and modify computational routines and facilitate processes of pattern and significant feature recognition for subsequent human analysis, machine learning, expert and artificial intelligence systems. 'Tagger' is a Mac OS X interactive data visualisation tool that facilitates human-computer interaction for the recognition of patterns and significant structures. It is a graphical application developed using the Quartz Composer framework. 'Tagger' follows a Model-View-Controller (MVC) software architecture: the application problem domain (the Model) is to facilitate novel ways of abstractly representing data to a human interlocutor, presenting these via different viewer modalities (e.g. chart representations, particle systems, parametric geometry) to the user (View) and enabling interaction with the data (Controller) via a variety of Human Interface Devices (HID). The software enables the user to create an arbitrary array of tags that may be appended to the visualised data, which are then saved into output files as forms of semantic metadata. Three fundamental problems that are not strongly supported by conventional scientific visualisation software are addressed: 1] how to visually animate data over time; 2] how to rapidly deploy unconventional parametrically driven data visualisations; 3] how to construct and explore novel interaction models that capture the activity of the end-user as semantic metadata that can be used to computationally enhance subsequent interrogation. Saved tagged data files may be loaded into Tagger, so that tags may themselves be tagged, if desired. Recursion opens up the possibility of refining or overlapping different types of tags, tagging a variety of different POIs or types of events, and capturing different types of specialist observations of important or noticeable events. Other visualisations and modes of interaction will also be demonstrated, with the aim of discovering knowledge in large datasets in the natural, physical sciences. Figure 1: wave height data from an oceanographic Wave Rider Buoy; colors/radii are driven by wave height data.

  10. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present the prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach, for which the key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single metadata catalog. The entire BiG CZ Software system is being developed on public repositories as a modular suite of open source software projects. It will be built around a new Observations Data Model Version 2.0 (ODM2) that has been developed by members of the BiG CZ project team, with community input, under separate funding.

  11. Data Curation and Visualization for MuSIASEM Analysis of the Nexus

    NASA Astrophysics Data System (ADS)

    Renner, Ansel

    2017-04-01

    A novel software-based approach to relational analysis, applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework, is presented. This research explores and explains underutilized ways in which software can assist complex system analysis across the stages of data collection, exploration, analysis and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project: Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. The software is designed primarily with JavaScript, using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify the research practitioner's work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and a scalable, robust yet lightweight data structuring will first be explained. Algorithms to process these data will then be explored. Data interfaces and data visualization approaches will lastly be presented and described.

  12. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency has a long history of both supporting plume model development and providing mixing zone modeling software. The Visual Plumes model is the most recent addition to the suite of public-domain models available through the EPA-Athens Center f...

  13. SpreaD3: Interactive Visualization of Spatiotemporal History and Trait Evolutionary Processes.

    PubMed

    Bielejec, Filip; Baele, Guy; Vrancken, Bram; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe

    2016-08-01

    Model-based phylogenetic reconstructions increasingly consider spatial or phenotypic traits in conjunction with sequence data to study evolutionary processes. Alongside parameter estimation, visualization of ancestral reconstructions represents an integral part of these analyses. Here, we present a complete overhaul of the spatial phylogenetic reconstruction of evolutionary dynamics software, now called SpreaD3 to emphasize the use of data-driven documents, as an analysis and visualization package that primarily complements Bayesian inference in BEAST (http://beast.bio.ed.ac.uk, last accessed 9 May 2016). The integration of JavaScript D3 libraries (www.d3.org, last accessed 9 May 2016) offers novel interactive web-based visualization capacities that are not restricted to spatial traits and extend to any discrete or continuously valued trait for any organism of interest.

  14. The MDE Diploma: First International Postgraduate Specialization in Model-Driven Engineering

    ERIC Educational Resources Information Center

    Cabot, Jordi; Tisi, Massimo

    2011-01-01

    Model-Driven Engineering (MDE) is changing the way we build, operate, and maintain our software-intensive systems. Several projects using MDE practices are reporting significant improvements in quality and performance but, to be able to handle these projects, software engineers need a set of technical and interpersonal skills that are currently…

  15. A Comparison and Evaluation of Real-Time Software Systems Modeling Languages

    NASA Technical Reports Server (NTRS)

    Evensen, Kenneth D.; Weiss, Kathryn Anne

    2010-01-01

    A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.

  16. The application of domain-driven design in NMS

    NASA Astrophysics Data System (ADS)

    Zhang, Jinsong; Chen, Yan; Qin, Shengjun

    2011-12-01

    In the traditional data-model-driven design approach, the system analysis and design phases are often separated, which prevents requirements information from being expressed explicitly. The approach also tends to lead developers toward process-oriented programming, leaving code between modules or between hierarchy layers disordered, so it is hard to meet system scalability requirements. This paper proposes a software hierarchy based on a rich domain model according to domain-driven design, named FHRDM; the Webwork + Spring + Hibernate (WSH) framework is then determined. Domain-driven design aims to construct a domain model which not only meets the demands of the field where the software exists but also meets the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficulty of requirement elicitation, high development costs and long development cycles, can be resolved successfully.

  17. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

    Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, 3D visualizations of these metrics are generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.
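
    The paper extracts its metrics from UML class diagrams through a CASE tool's extensibility interface; as a stand-in, the sketch below computes two simple class-level design metrics directly from source code using Python's ast module (the sample classes are illustrative).

        import ast
        import textwrap

        source = textwrap.dedent('''
            class Base:
                def a(self): pass

            class Derived(Base):
                def b(self): pass
                def c(self): pass
        ''')

        # Walk the syntax tree and report per-class metrics: method count
        # and number of declared bases (a crude proxy for inheritance/coupling)
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef):
                n_methods = sum(isinstance(n, ast.FunctionDef) for n in node.body)
                print(f"{node.name}: methods={n_methods}, bases={len(node.bases)}")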

  18. PechaKucha Presentations: Teaching Storytelling, Visual Design, and Conciseness

    ERIC Educational Resources Information Center

    Lucas, Kristen; Rawlins, Jacob D.

    2015-01-01

    When speakers rely too heavily on presentation software templates, they often end up stultifying audiences with a triple-whammy of bullet points. In this article, Lucas and Rawlins present an alternative method--PechaKucha (the Japanese word for "chit chat")--a presentation style driven by a carefully planned, automatically timed…

  19. ISEES: an institute for sustainable software to accelerate environmental science

    NASA Astrophysics Data System (ADS)

    Jones, M. B.; Schildhauer, M.; Fox, P. A.

    2013-12-01

    Software is essential to the full science lifecycle, spanning data acquisition, processing, quality assessment, data integration, analysis, modeling, and visualization. Software runs our meteorological sensor systems, our data loggers, and our ocean gliders. Every aspect of science is impacted by, and improved by, software. Scientific advances ranging from modeling climate change to the sequencing of the human genome have been rendered possible in the last few decades due to the massive improvements in the capabilities of computers to process data through software. This pivotal role of software in science is broadly acknowledged, while simultaneously being systematically undervalued through minimal investments in maintenance and innovation. As a community, we need to embrace the creation, use, and maintenance of software within science, and address problems such as code complexity, openness, reproducibility, and accessibility. We also need to fully develop new skills and practices in software engineering as a core competency in our earth science disciplines, starting with undergraduate and graduate education and extending into university and agency professional positions. The Institute for Sustainable Earth and Environmental Software (ISEES) is being envisioned as a community-driven activity that can facilitate and galvanize activities around scientific software in an analogous way to synthesis centers such as NCEAS and NESCent that have stimulated massive advances in ecology and evolution. We will describe the results of six workshops (Science Drivers, Software Lifecycles, Software Components, Workforce Development and Training, Sustainability and Governance, and Community Engagement) that have been held in 2013 to envision such an institute. We will present community recommendations from these workshops and our strategic vision for how ISEES will address the technical issues in the software lifecycle, sustainability of the whole software ecosystem, and the critical issue of computational training for the scientific community. Figure: process for envisioning ISEES.

  20. Aspect-Oriented Model-Driven Software Product Line Engineering

    NASA Astrophysics Data System (ADS)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  1. Discovering objects in a blood recipient information system.

    PubMed

    Qiu, D; Junghans, G; Marquardt, K; Kroll, H; Mueller-Eckhardt, C; Dudeck, J

    1995-01-01

    Application of object-oriented (OO) methodologies has been generally considered as a solution to the problem of improving the software development process and managing the so-called software crisis. Among them, object-oriented analysis (OOA) is the most essential and is a vital prerequisite for the successful use of other OO methodologies. Though a good number of OOA methods have already been published, the aspect most important and common to all these methods, discovering object classes truly relevant to the given problem domain, remains a subject of intensive research. In this paper, using the successful development of a blood recipient information system as an example, we present our approach, which is based on the conceptual framework of responsibility-driven OOA. In the discussion, we also suggest that it may be inadequate to simply attribute the software crisis to the waterfall model of the software development life-cycle. We are convinced that the real causes of the failure of some software and information systems should be sought in the methodologies used in some crucial phases of the software development process. Furthermore, a software system can also fail if object classes essential to the problem domain are not discovered, implemented and visualized, so that the real-world situation cannot be faithfully traced by it.

  2. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. A 3D visualization software system was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  3. Cybersim: geographic, temporal, and organizational dynamics of malware propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santhi, Nandakishore; Yan, Guanhua; Eidenbenz, Stephan

    2010-01-01

    Cyber-infractions into a nation's strategic security envelope pose a constant and daunting challenge. We present the modular CyberSim tool, which has been developed in response to the need to realistically simulate, at a national level, software vulnerabilities and the resulting malware propagation in online social networks. The CyberSim suite (a) can generate realistic scale-free networks from a database of geocoordinated computers to closely model social networks arising from personal and business email contacts and online communities; (b) maintains for each host a list of installed software, along with the latest published vulnerabilities; (d) allows designated initial nodes where malware gets introduced; (e) simulates, using distributed discrete event-driven technology, the spread of malware exploiting a specific vulnerability, with packet delay and user online behavior models; and (f) provides the analyst a graphical visualization of the spread of infection, its severity, businesses affected, etc. We present sample simulations on a national-level network with millions of computers.
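
    The event-driven propagation step in (e) can be sketched in miniature. The snippet below, a rough sketch assuming the networkx library, runs an SI-style spread over a scale-free graph, with exponential per-edge delays standing in for the packet-delay and user-behavior models; parameters are invented.

    ```python
    # Toy discrete event-driven malware spread on a scale-free network.

    import heapq
    import random
    import networkx as nx

    def simulate(n=10_000, m=3, p_vulnerable=0.3, seed=42):
        rng = random.Random(seed)
        g = nx.barabasi_albert_graph(n, m, seed=seed)   # scale-free contact graph
        vulnerable = {v for v in g if rng.random() < p_vulnerable}
        infected_at = {}
        events = [(0.0, rng.choice(sorted(vulnerable)))]  # (time, host) seed infection
        while events:
            t, host = heapq.heappop(events)
            if host in infected_at:
                continue
            infected_at[host] = t
            for nbr in g[host]:
                if nbr in vulnerable and nbr not in infected_at:
                    delay = rng.expovariate(1.0)        # proxy for delay/user behavior
                    heapq.heappush(events, (t + delay, nbr))
        return infected_at

    infections = simulate()
    print(f"{len(infections)} hosts infected; last at t={max(infections.values()):.2f}")
    ```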

  4. Constraint-Driven Software Design: An Escape from the Waterfall Model.

    ERIC Educational Resources Information Center

    de Hoog, Robert; And Others

    1994-01-01

    Presents the principles of a development methodology for software design based on a nonlinear, product-driven approach that integrates quality aspects. Two examples are given to show that the flexibility needed for building high quality systems leads to integrated development environments in which methodology, product, and tools are closely…

  5. Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models

    NASA Astrophysics Data System (ADS)

    Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

    In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are a combination of visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The results of the evaluation show that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

  6. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules for managing borehole and field fracture data, modelling fracture networks, visualizing fracture geometry in 3-D, and calculating and visualizing intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems, and can provide useful information for tackling the engineering geological problems related to strength, deformability and hydraulic behavior of fractured rock masses.
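
    As a rough illustration of the rendering approach (VTK is the library the abstract names), the sketch below draws a set of randomly oriented fracture discs. The disc parameters are synthetic and the scene setup is a generic VTK pipeline, not FROMS3D code.

    ```python
    # Render random fracture discs with a standard VTK pipeline.

    import random
    import vtk

    renderer = vtk.vtkRenderer()
    rng = random.Random(1)
    for _ in range(50):
        disc = vtk.vtkRegularPolygonSource()        # a disc approximated by a polygon
        disc.SetNumberOfSides(24)
        disc.SetRadius(rng.uniform(0.5, 2.0))
        disc.SetCenter(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
        disc.SetNormal(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(disc.GetOutputPort())
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        actor.GetProperty().SetOpacity(0.6)         # semi-transparent fracture planes
        renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    interactor.Initialize()
    window.Render()
    interactor.Start()
    ```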

  7. MFV-class: a multi-faceted visualization tool of object classes.

    PubMed

    Zhang, Zhi-meng; Pan, Yun-he; Zhuang, Yue-ting

    2004-11-01

    Classes are key software components in an object-oriented software system. In many industrial OO software systems, some classes have complicated structure and relationships, so in software maintenance, testing, reengineering, reuse, and restructuring it is a challenge for software engineers to understand these classes thoroughly. This paper proposes a class comprehension model based on constructivist learning theory and implements a software visualization tool (MFV-Class) to help in the comprehension of a class. The tool provides multiple views of a class to uncover the manifold facets of its contents, and it visualizes three object-oriented metrics of classes to help users focus the understanding process. A case study was conducted to evaluate our approach and the toolkit.

  8. Toward a user-driven approach to radiology software solutions: putting the wag back in the dog.

    PubMed

    Morgan, Matthew; Mates, Jonathan; Chang, Paul

    2006-09-01

    The relationship between healthcare providers and the software industry is evolving. In many cases, industry's traditional, market-driven model is failing to meet the increasingly sophisticated and appropriately individualized needs of providers. Advances in both technology infrastructure and development methodologies have set the stage for the transition from a vendor-driven to a more user-driven process of solution engineering. To make this transition, providers must take an active role in the development process and vendors must provide flexible frameworks on which to build. Only then can the provider/vendor relationship mature from a purchaser/supplier to a codesigner/partner model, where true insight and innovation can occur.

  9. VisTrails SAHM: visualization and workflow management for species habitat modeling

    USGS Publications Warehouse

    Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.

    2013-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, the pre- and post-processing steps, and the modeling options incorporated in the construction of a species distribution model, through the established VisTrails workflow management and visualization software. This paper provides an overview of the VisTrails:SAHM software, including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.

  10. A component-based software environment for visualizing large macromolecular assemblies.

    PubMed

    Sanner, Michel F

    2005-03-01

    The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.

  11. The topographical model of multiple sclerosis

    PubMed Central

    Cook, Karin; De Nino, Scott; Fletcher, Madhuri

    2016-01-01

    Relapses and progression contribute to multiple sclerosis (MS) disease course, but neither the relationship between them nor the spectrum of clinical heterogeneity has been fully characterized. A hypothesis-driven, biologically informed model could build on the clinical phenotypes to encompass the dynamic admixture of factors underlying MS disease course. In this medical hypothesis, we put forth a dynamic model of MS disease course that incorporates localization and other drivers of disability to propose a clinical manifestation framework that visualizes MS in a clinically individualized way. The topographical model encapsulates 5 factors (localization of relapses and causative lesions; relapse frequency, severity, and recovery; and progression rate), visualized utilizing dynamic 3-dimensional renderings. The central hypothesis is that, like symptom recrudescence in Uhthoff phenomenon and pseudoexacerbations, progression clinically recapitulates prior relapse symptoms and unmasks previously silent lesions, incrementally revealing underlying lesion topography. The model uses real-time simulation software to depict disease course archetypes and illuminate several well-described but poorly reconciled phenomena including the clinical/MRI paradox and prognostic significance of lesion location and burden on disease outcomes. Utilization of this model could allow for earlier and more clinically precise identification of progressive MS and predictive implications can be empirically tested. PMID:27648465

  12. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open source software that needs minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  13. Test Driven Development: Lessons from a Simple Scientific Model

    NASA Astrophysics Data System (ADS)

    Clune, T. L.; Kuo, K.

    2010-12-01

    In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
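
    A condensed illustration of the TDD rhythm described above: the unit tests below are written against a toy one-step growth function before any elaborate implementation exists. The grow_radius function and its physics are hypothetical stand-ins for the snowflake model in the talk.

    ```python
    # Test-first sketch: the tests pin down behavior of a toy growth step.

    import unittest

    def grow_radius(radius, supersaturation, dt, rate=1.0):
        """One explicit-Euler growth step: dr/dt = rate * supersaturation."""
        if radius < 0 or supersaturation < 0:
            raise ValueError("radius and supersaturation must be non-negative")
        return radius + rate * supersaturation * dt

    class TestGrowth(unittest.TestCase):
        def test_no_growth_without_supersaturation(self):
            self.assertEqual(grow_radius(1.0, 0.0, 0.1), 1.0)

        def test_growth_is_linear_in_dt(self):
            r1 = grow_radius(1.0, 0.5, 0.1)
            r2 = grow_radius(1.0, 0.5, 0.2)
            self.assertAlmostEqual(r2 - 1.0, 2 * (r1 - 1.0))

        def test_rejects_negative_inputs(self):
            with self.assertRaises(ValueError):
                grow_radius(-1.0, 0.5, 0.1)

    if __name__ == "__main__":
        unittest.main()
    ```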

  14. FoodChain-Lab: A Trace-Back and Trace-Forward Tool Developed and Applied during Food-Borne Disease Outbreak Investigations in Germany and Europe.

    PubMed

    Weiser, Armin A; Thöns, Christian; Filter, Matthias; Falenski, Alexander; Appel, Bernd; Käsbohrer, Annemarie

    2016-01-01

    FoodChain-Lab is modular open-source software for trace-back and trace-forward analysis in food-borne disease outbreak investigations. Development of FoodChain-Lab has been driven by a need for appropriate software in several food-related outbreaks in Germany since 2011. The software allows integrated data management, data linkage, enrichment and visualization as well as interactive supply chain analyses. Identification of possible outbreak sources or vehicles is facilitated by calculation of tracing scores for food-handling stations (companies or persons) and food products under investigation. The software also supports consideration of station-specific cross-contamination, analysis of geographical relationships, and topological clustering of the tracing network structure. FoodChain-Lab has been applied successfully in previous outbreak investigations, for example during the 2011 EHEC outbreak and the 2013/14 European hepatitis A outbreak. The software is most useful in complex, multi-area outbreak investigations where epidemiological evidence may be insufficient to discriminate between multiple implicated food products. The automated analysis and visualization components would be of greater value if trading information on food ingredients and compound products was more easily available.
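
    The station-scoring idea can be caricatured in a few lines: on a toy delivery graph, score each station by the fraction of outbreak sites it can reach. This is a hypothetical simplification for illustration, not FoodChain-Lab's actual tracing-score algorithm, and it assumes the networkx library.

    ```python
    # Toy trace-back score over a delivery network.

    import networkx as nx

    deliveries = [("FarmA", "Dist1"), ("FarmB", "Dist1"),
                  ("Dist1", "Retail1"), ("Dist1", "Retail2"),
                  ("FarmB", "Retail3")]
    outbreak_sites = {"Retail1", "Retail2"}

    g = nx.DiGraph(deliveries)
    for station in g:
        reachable = nx.descendants(g, station) | {station}
        score = len(reachable & outbreak_sites) / len(outbreak_sites)
        print(f"{station}: can explain {score:.0%} of outbreak sites")
    ```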

  15. FoodChain-Lab: A Trace-Back and Trace-Forward Tool Developed and Applied during Food-Borne Disease Outbreak Investigations in Germany and Europe

    PubMed Central

    Filter, Matthias; Falenski, Alexander; Appel, Bernd; Käsbohrer, Annemarie

    2016-01-01

    FoodChain-Lab is modular open-source software for trace-back and trace-forward analysis in food-borne disease outbreak investigations. Development of FoodChain-Lab has been driven by a need for appropriate software in several food-related outbreaks in Germany since 2011. The software allows integrated data management, data linkage, enrichment and visualization as well as interactive supply chain analyses. Identification of possible outbreak sources or vehicles is facilitated by calculation of tracing scores for food-handling stations (companies or persons) and food products under investigation. The software also supports consideration of station-specific cross-contamination, analysis of geographical relationships, and topological clustering of the tracing network structure. FoodChain-Lab has been applied successfully in previous outbreak investigations, for example during the 2011 EHEC outbreak and the 2013/14 European hepatitis A outbreak. The software is most useful in complex, multi-area outbreak investigations where epidemiological evidence may be insufficient to discriminate between multiple implicated food products. The automated analysis and visualization components would be of greater value if trading information on food ingredients and compound products was more easily available. PMID:26985673

  16. Physically Based Rendering in the Nightshade NG Visualization Platform

    NASA Astrophysics Data System (ADS)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

    This poster describes our work on creating a physically based rendering model in Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011. Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together. Nightshade software is used in literally hundreds of digital planetarium systems worldwide. Countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public. Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  17. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for the same purpose. CASRE incorporates mathematical modeling capabilities of the public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in the Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
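
    For flavor, one classical model of the kind such tools execute is the Goel-Okumoto reliability growth model, whose mean value function mu(t) = a(1 - exp(-b t)) can be fitted to cumulative failure counts. The sketch below uses SciPy and made-up data; it is an illustration, not CASRE code.

    ```python
    # Fit the Goel-Okumoto mean value function to cumulative failure counts.

    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 11, dtype=float)
    cumulative_failures = np.array([5, 9, 13, 15, 18, 19, 21, 21, 22, 23], float)

    (a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_failures, p0=(25.0, 0.3))
    print(f"estimated total defects a={a:.1f}, detection rate b={b:.3f}")
    print(f"expected residual defects: {a - goel_okumoto(weeks[-1], a, b):.1f}")
    ```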

  18. A Data-Driven Solution for Performance Improvement

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user- friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.

  19. An Educational MONTE CARLO Simulation/Animation Program for the Cosmic Rays Muons and a Prototype Computer-Driven Hardware Display.

    ERIC Educational Resources Information Center

    Kalkanis, G.; Sarris, M. M.

    1999-01-01

    Describes an educational software program for the study of, and detection methods for, cosmic-ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons' and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…

  20. Model Driven Engineering with Ontology Technologies

    NASA Astrophysics Data System (ADS)

    Staab, Steffen; Walter, Tobias; Gröner, Gerd; Parreiras, Fernando Silva

    Ontologies constitute formal models of some aspect of the world that may be used for drawing interesting logical conclusions even for large models. Software models capture relevant characteristics of a software artifact to be developed, yet most often these software models have limited formal semantics, or the underlying (often graphical) software language varies from case to case in a way that makes it hard if not impossible to fix its semantics. In this contribution, we survey the use of ontology technologies for software modeling in order to carry over their advantages to the software modeling domain. It turns out that ontology-based metamodels constitute a core means for exploiting expressive ontology reasoning in the software modeling domain while remaining flexible enough to accommodate the varying needs of software modelers.

  1. Efficient visualization of high-throughput targeted proteomics experiments: TAPIR.

    PubMed

    Röst, Hannes L; Rosenberger, George; Aebersold, Ruedi; Malmström, Lars

    2015-07-15

    Targeted mass spectrometry comprises a set of powerful methods to obtain accurate and consistent protein quantification in complex samples. To fully exploit these techniques, a cross-platform and open-source software stack based on standardized data exchange formats is required. We present TAPIR, a fast and efficient Python visualization software for chromatograms and peaks identified in targeted proteomics experiments. The input formats are open, community-driven standardized data formats (mzML for raw data storage and TraML encoding the hierarchical relationships between transitions, peptides and proteins). TAPIR is scalable to proteome-wide targeted proteomics studies (as enabled by SWATH-MS), allowing researchers to visualize high-throughput datasets. The framework integrates well with existing automated analysis pipelines and can be extended beyond targeted proteomics to other types of analyses. TAPIR is available for all computing platforms under the 3-clause BSD license at https://github.com/msproteomicstools/msproteomicstools.

  2. Documentation Driven Development for Complex Real-Time Systems

    DTIC Science & Technology

    2004-12-01

    This paper presents a novel approach for the development of complex real-time systems, called the documentation-driven development (DDD) approach. This... time systems. DDD will also support automated software generation based on a computational model and some relevant techniques. DDD includes two main... stakeholders to be easily involved in development processes and, therefore, significantly improve the agility of software development for complex real

  3. Study of heat transfer on physiological driven movement with CNT nanofluids and variable viscosity.

    PubMed

    Akbar, Noreen Sher; Kazmi, Naeem; Tripathi, Dharmendra; Mir, Nazir Ahmed

    2016-11-01

    With ongoing interest in CNT nanofluids and materials in biotechnology, energy and environment, microelectronics, composite materials, etc., the present investigation analyzes the effects of variable viscosity and thermal conductivity on CNT nanofluid flow driven by cilia-induced movement through a circular cylindrical tube. The metachronal wave is generated by the beating of cilia and is mathematically modeled as the elliptical wave propagation of Blake (1971). The problem is formulated as a set of nonlinear partial differential equations, which are non-dimensionalized to avoid the complications of dimensional homogeneity. Lubrication theory, which is physically appropriate for cilia movement, is employed to linearize the governing equations. Analytical solutions for the velocity, temperature, pressure gradient and stream function are obtained. The analytical results are evaluated numerically in Mathematica, and graphs of the velocity profile, temperature profile, pressure gradient and streamlines are plotted for discussion and visualization. This model is applicable to physiological transport phenomena and to engineering artificial cilia and ciliated tubes/pipes.
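
    For reference, the elliptical cilia-tip envelope of Blake (1971) invoked above is commonly written in the following form; the notation here is an assumption for illustration (a the mean tube radius, epsilon the cilia length parameter, alpha the eccentricity of the elliptical path, lambda the wavelength, c the wave speed):

    ```latex
    % Radial and axial positions of the cilia tips (metachronal wave envelope)
    R = f(z,t) = a + a\varepsilon \cos\!\left(\frac{2\pi}{\lambda}(z - ct)\right), \qquad
    Z = g(z,t) = z_0 + a\varepsilon\alpha \sin\!\left(\frac{2\pi}{\lambda}(z - ct)\right).
    ```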

  4. A review of visual MODFLOW applications in groundwater modelling

    NASA Astrophysics Data System (ADS)

    Hariharan, V.; Shankar, M. Uma

    2017-11-01

    Visual MODFLOW is a graphical user interface for the USGS MODFLOW code. It is commercial software that is popular among hydrogeologists for its user-friendly features. The software is mainly used for groundwater flow and contaminant transport modelling under different conditions. This article reviews the versatility of its applications in groundwater modelling over the last 22 years. Agriculture, airfields, constructed wetlands, climate change, drought studies, Environmental Impact Assessment (EIA), landfills, mining operations, river and flood plain monitoring, salt water intrusion, soil profile surveys, watershed analyses, etc., are among the areas where the software has reportedly been used to date. The review provides clarity on the scope of the software in groundwater modelling and research.

  5. Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects

    ERIC Educational Resources Information Center

    Buffardi, Kevin John

    2014-01-01

    Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…

  6. Software complex for geophysical data visualization

    NASA Astrophysics Data System (ADS)

    Kryukov, Ilya A.; Tyugin, Dmitry Y.; Kurkin, Andrey A.; Kurkina, Oxana E.

    2013-04-01

    The effectiveness of current research in geophysics is largely determined by the degree to which data processing and visualization procedures are implemented with modern information technology. Realistic and informative visualization of the results of three-dimensional modeling of geophysical processes contributes significantly to the naturalness of physical modeling and to a detailed view of the phenomena. The main difficulty is interpreting the results of the calculations: it is necessary to be able to observe the various parameters of the three-dimensional models, build sections on different planes to evaluate certain characteristics, and make rapid assessments. Programs for the interpretation and visualization of simulations are used all over the world, for example software systems such as ParaView, Golden Software Surfer, Voxler and Flow Vision. However, it is not always possible to solve a visualization problem with a single software package: preprocessing, data transfer between packages, and setting up a uniform visualization style can turn into long and routine work. In addition, special display modes are sometimes required for specific data, and existing products tend to offer common features that are not always fully applicable to such special cases. Rendering of dynamic data may require scripting languages, which does not relieve the user from writing code. Therefore, the task was to develop a new and original software complex for the visualization of simulation results. We briefly list its primary features. The software complex is a graphical application with a convenient and simple user interface that displays the results of a simulation. The complex can also interactively manage the image, resize the image without loss of quality, apply two-dimensional and three-dimensional regular grids, set coordinate axes with data labels, and perform slices of the data. A particular feature of geophysical data is its size: detailed maps used in the simulations are large, so real-time rendering can be a difficult task even for powerful modern computers. The performance of the software complex is therefore an important aspect of this work. The complex is based on the latest version of the graphics API, Microsoft DirectX 11, which reduces overhead and harnesses the power of modern hardware. Each geophysical calculation is an adjustment of the mathematical model for a particular case, so the architecture of the visualization complex is designed for scalability and the ability to customize visualization objects, for better visibility and comfort. In the present study, the software complex 'GeoVisual' was developed. One of the main features of this research is the use of bleeding-edge computer graphics techniques in scientific visualization. The research was supported by The Ministry of Education and Science of the Russian Federation, project 14.B37.21.0642.

  7. Genoviz Software Development Kit: Java tool kit for building genomics visualization applications.

    PubMed

    Helt, Gregg A; Nicol, John W; Erwin, Ed; Blossom, Eric; Blanchard, Steven G; Chervitz, Stephen A; Harmon, Cyrus; Loraine, Ann E

    2009-08-25

    Visualization software can expose previously undiscovered patterns in genomic data and advance biological science. The Genoviz Software Development Kit (SDK) is an open source, Java-based framework designed for rapid assembly of visualization software applications for genomics. The Genoviz SDK framework provides a mechanism for incorporating adaptive, dynamic zooming into applications, a desirable feature of genome viewers. Visualization capabilities of the Genoviz SDK include automated layout of features along genetic or genomic axes; support for user interactions with graphical elements (Glyphs) in a map; a variety of Glyph sub-classes that promote experimentation with new ways of representing data in graphical formats; and support for adaptive, semantic zooming, whereby objects change their appearance depending on zoom level and zooming rate adapts to the current scale. Freely available demonstration and production quality applications, including the Integrated Genome Browser, illustrate Genoviz SDK capabilities. Separation between graphics components and genomic data models makes it easy for developers to add visualization capability to pre-existing applications or build new applications using third-party data models. Source code, documentation, sample applications, and tutorials are available at http://genoviz.sourceforge.net/.
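
    Semantic zooming, the distinctive Genoviz feature described above, reduces to choosing a representation from the current scale. The toy function below is a hypothetical sketch of that decision, not the Genoviz SDK API.

    ```python
    # Pick a glyph representation from the current scale (pixels per base pair).

    def glyph_for(feature_len_bp, pixels_per_bp):
        width_px = feature_len_bp * pixels_per_bp
        if width_px < 2:
            return "tick"        # too small to see: draw a 1-px tick mark
        if pixels_per_bp < 1:
            return "box"         # mid zoom: plain rectangle
        return "box+sequence"    # high zoom: rectangle annotated with base letters

    for scale in (0.001, 0.1, 5.0):
        print(scale, glyph_for(1_000, scale))
    ```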

  8. Development of a new software for analyzing 3-D fracture network

    NASA Astrophysics Data System (ADS)

    Um, Jeong-Gi; Noh, Young-Hwan; Choi, Yosoon

    2014-05-01

    New software is presented to analyze fracture networks in 3-D. We recently completed the software package, building on the information presented at EGU2013. The software consists of several modules for managing borehole data, stochastic modelling of fracture networks, construction of the analysis domain, visualization of fracture geometry in 3-D, calculation of equivalent pipes, and production of cross-section diagrams. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. A case study was performed to analyze the 3-D fracture network system of the Upper Devonian Grosmont Formation in Alberta, Canada. The results suggest that the developed software is effective in modelling and visualizing 3-D fracture network systems, and can provide useful information for tackling the geomechanical problems related to strength, deformability and hydraulic behaviour of fractured rock masses. This presentation describes the concept and details of the development and implementation of the software.

  9. Designing an optimal software intensive system acquisition: A game theoretic approach

    NASA Astrophysics Data System (ADS)

    Buettner, Douglas John

    The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of a strong tendency for contractors to skip or severely curtail software design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to address these issues at numerous levels; the observations, however, lead us to investigate modeling and theoretical methods to understand what motivated this behavior in the first place. To that end, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of such behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality-, schedule- and cost-driven strategies demonstrates that the higher-cost, higher-effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game-theoretic reasoning, based on the case study evidence and Austin's agency model, describes why schedule-driven engineers cut corners on inspections and unit testing. Game theory concepts are then used to argue that the source of the problem, and hence the solution, to developers cutting corners on quality in schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also suggest that a multi-player dynamic Nash bargaining game provides a solution to the observed lack-of-quality game between the government (the acquirer) and "large-corporation" software developers. A closing note argues that this multi-player dynamic Nash bargaining game also solves Freeman Dyson's problem of finding a way to label systems as good or bad.
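
    The corner-cutting trade-off that the MMM captures can be caricatured with a tiny system-dynamics loop: defects are injected at a constant rate, inspections and unit tests drain the pool, and the remainder escapes into integration testing. All rates below are invented for illustration and are not the MMM's calibrated values.

    ```python
    # Toy system-dynamics integration: effect of inspection effort on escaped defects.

    def run(weeks=40, inspection_effort=1.0, dt=0.1):
        undetected, escaped = 0.0, 0.0
        for _ in range(int(weeks / dt)):
            generation = 10.0                                   # defects injected per week
            detection = 0.4 * inspection_effort * undetected    # found by inspection/unit test
            escape = 0.1 * undetected                           # leaks to integration testing
            undetected += (generation - detection - escape) * dt
            escaped += escape * dt
        return escaped

    for effort in (1.0, 0.25):  # quality-driven vs. schedule-driven corner-cutting
        print(f"inspection effort {effort}: {run(inspection_effort=effort):.0f} escaped defects")
    ```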

  10. Building Data-Driven Pathways From Routinely Collected Hospital Data: A Case Study on Prostate Cancer

    PubMed Central

    Clark, Jeremy; Cooper, Colin S; Mills, Robert; Rayward-Smith, Victor J; de la Iglesia, Beatriz

    2015-01-01

    Background: Routinely collected data in hospitals are complex, typically heterogeneous, and scattered across multiple Hospital Information Systems (HIS). This big data, created as a byproduct of health care activities, has the potential to provide a better understanding of diseases, unearth hidden patterns, and improve services and cost. The extent and uses of such data rely on its quality, which is not consistently checked, nor fully understood. Nevertheless, using routine data for the construction of data-driven clinical pathways, describing processes and trends, is a key topic receiving increasing attention in the literature. Traditional algorithms do not cope well with unstructured processes or data, and do not produce clinically meaningful visualizations. Supporting systems that provide additional information, context, and quality assurance inspection are needed. Objective: The objective of the study is to explore how routine hospital data can be used to develop data-driven pathways that describe the journeys that patients take through care, and their potential uses in biomedical research; it proposes a framework for the construction, quality assessment, and visualization of patient pathways for clinical studies and decision support, using a case study on prostate cancer. Methods: Data pertaining to prostate cancer patients were extracted from a large UK hospital from eight different HIS, validated, and complemented with information from the local cancer registry. Data-driven pathways were built for each of the 1904 patients, and an expert knowledge base, containing rules on the prostate cancer biomarker, was used to assess the completeness and utility of the pathways for a specific clinical study. Software components were built to provide meaningful visualizations for the constructed pathways. Results: The proposed framework and pathway formalism enable the summarization, visualization, and querying of complex patient-centric clinical information, as well as the computation of quality indicators and dimensions. A novel graphical representation of the pathways allows the synthesis of such information. Conclusions: Clinical pathways built from routinely collected hospital data can unearth information about patients and diseases that may otherwise be unavailable or overlooked in hospitals. Data-driven clinical pathways allow for heterogeneous data (i.e., semistructured and unstructured data) to be collated over a unified data model and for data quality dimensions to be assessed. This work has enabled further research on prostate cancer and its biomarkers, and on the development and application of methods to mine, compare, analyze, and visualize pathways constructed from routine data. This is an important development for the reuse of big data in hospitals. PMID:26162314
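
    The core construction step in the Methods, ordering each patient's routine events into a pathway, can be sketched in a few lines; the record fields and events below are hypothetical, not the study's data model.

    ```python
    # Build per-patient pathways by grouping events and ordering them in time.

    from collections import defaultdict
    from datetime import date

    records = [
        {"patient": 1, "event": "PSA test", "when": date(2012, 1, 5)},
        {"patient": 1, "event": "biopsy",   "when": date(2012, 2, 9)},
        {"patient": 2, "event": "PSA test", "when": date(2012, 3, 1)},
        {"patient": 1, "event": "surgery",  "when": date(2012, 4, 20)},
    ]

    pathways = defaultdict(list)
    for rec in sorted(records, key=lambda r: r["when"]):
        pathways[rec["patient"]].append(rec["event"])

    for patient, path in pathways.items():
        print(patient, " -> ".join(path))
    ```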

  11. InteractiveROSETTA: a graphical user interface for the PyRosetta protein modeling suite.

    PubMed

    Schenkelberg, Christian D; Bystroff, Christopher

    2015-12-15

    Modern biotechnical research is becoming increasingly reliant on computational structural modeling programs to develop novel solutions to scientific questions. Rosetta is one such protein modeling suite that has already demonstrated wide applicability to a number of diverse research projects. Unfortunately, Rosetta is largely a command-line-driven software package which restricts its use among non-computational researchers. Some graphical interfaces for Rosetta exist, but typically are not as sophisticated as commercial software. Here, we present InteractiveROSETTA, a graphical interface for the PyRosetta framework that presents easy-to-use controls for several of the most widely used Rosetta protocols alongside a sophisticated selection system utilizing PyMOL as a visualizer. InteractiveROSETTA is also capable of interacting with remote Rosetta servers, facilitating sophisticated protocols that are not accessible in PyRosetta or which require greater computational resources. InteractiveROSETTA is freely available at https://github.com/schenc3/InteractiveROSETTA/releases and relies upon a separate download of PyRosetta which is available at http://www.pyrosetta.org after obtaining a license (free for academic use).

  12. JPL Earth Science Center Visualization Multitouch Table

    NASA Astrophysics Data System (ADS)

    Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.

    2014-12-01

    The JPL Earth Science Center Visualization Table combines specialized software and hardware to provide multitouch, multiuser, and remote display control, creating a seamlessly integrated experience for visualizing JPL missions and their remote sensing data. The software is fully GIS-capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. The 55-inch and 82-inch unlimited-finger-count multitouch displays allow multiple users to explore JPL Earth missions and visualize remote sensing data through a highly intuitive and interactive touch graphical user interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream data visualizations to nearby remote displays over a computer network. This visualization/presentation tool not only supports Earth science operations but is specifically designed for education and public outreach, and will contribute significantly to STEM. Our presentation will include an overview of our software and hardware and a showcase of our system.

  13. Model driven development of clinical information systems using openEHR.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, Jim

    2011-01-01

    openEHR and the recent international standard ISO 13606 define a model-driven software development methodology for health information systems. However, there is little evidence in the literature describing implementations, especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives is defined and presented, which guides the automatic graphical user interface generator to render widgets properly. We also describe the development steps and important design decisions, from modelling to the final software product. This may provide guidance for other developers and form the evidence required for the adoption of these standards by vendors and national programs alike.
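
    The archetype-driven GUI generation described above boils down to mapping archetype node types to widgets, with GUI directives overriding the defaults. The sketch below uses real openEHR data-type names (DV_TEXT, DV_QUANTITY, etc.) but a made-up tree, a hypothetical "multiline" directive, and hypothetical widget names; it is not the paper's generator.

    ```python
    # Walk a simplified archetype tree and pick a widget per node type.

    WIDGETS = {"DV_TEXT": "TextBox", "DV_CODED_TEXT": "ComboBox",
               "DV_QUANTITY": "NumericSpinner", "DV_BOOLEAN": "CheckBox"}

    archetype = {"name": "Endoscopy findings", "children": [
        {"name": "Site", "type": "DV_CODED_TEXT"},
        {"name": "Lesion size", "type": "DV_QUANTITY"},
        {"name": "Comment", "type": "DV_TEXT", "directive": "multiline"},
    ]}

    def render(node, depth=0):
        for child in node.get("children", []):
            widget = WIDGETS.get(child.get("type"), "Label")
            if child.get("directive") == "multiline" and widget == "TextBox":
                widget = "TextArea"   # a GUI directive overrides the default widget
            print("  " * depth + f"{child['name']}: {widget}")
            render(child, depth + 1)

    render(archetype)
    ```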

  14. NASA Tech Briefs, July 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Topics include: Thin-Film Resistance Heat-Flux Sensors; Circuit Indicates that Voice-Recording Disks are Nearly Full; Optical Sensing of Combustion Instabilities in Gas Turbines; Crane-Load Contact Sensor; Hexagonal and Pentagonal Fractal Multiband Antennas; Multifunctional Logic Gate Controlled by Temperature; Multifunctional Logic Gate Controlled by Supply Voltage; Power Divider for Waveforms Rich in Harmonics; SCB Quantum Computers Using iSWAP and 1-Qubit Rotations; CSAM Metrology Software Tool; Update on Rover Sequencing and Visualization Program; Selecting Data from a Star Catalog; Rotating Desk for Collaboration by Two Computer Programmers; Variable-Pressure Washer; Magnetically Attached Multifunction Maintenance Rover; Improvements in Fabrication of Sand/Binder Cores for Casting; Solid Freeform Fabrication of Composite-Material Objects; Efficient Computational Model of Hysteresis; Gauges for Highly Precise Metrology of a Compound Mirror; Improved Electrolytic Hydrogen Peroxide Generator; High-Power Fiber Lasers Using Photonic Band Gap Materials; Ontology-Driven Information Integration; Quantifying Traversability of Terrain for a Mobile Robot; More About Arc-Welding Process for Making Carbon Nanotubes; Controlling Laser Spot Size in Outer Space; and Software-Reconfigurable Processors for Spacecraft.

  15. Efficacy of Simulation-Based Learning of Electronics Using Visualization and Manipulation

    ERIC Educational Resources Information Center

    Chen, Yu-Lung; Hong, Yu-Ru; Sung, Yao-Ting; Chang, Kuo-En

    2011-01-01

    Software for simulation-based learning of electronics was implemented to help learners understand complex and abstract concepts through observing external representations and exploring concept models. The software comprises modules for visualization and simulative manipulation. Differences in learning performance of using the learning software…

  16. Visualization in aerospace research with a large wall display system

    NASA Astrophysics Data System (ADS)

    Matsuo, Yuichi

    2002-05-01

    National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6x1.5-meter (15x5-foot) rear projection screen with 3 BARCO 812 high-resolution CRT projectors. We adopted the 3-gun CRT projectors for their support for stereoscopic viewing, ease of color/luminosity matching and accuracy of edge-blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64 GBytes memory, 1.5 TBytes FC RAID disk and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced applications available in a multi-projector environment, such as AVS/MPE, EnSight Gold and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help us design improved configurations of aerospace vehicles and analyze their aerodynamic performance. These days we also use the system for various collaborations among researchers.

  17. VISUAL PLUMES MIXING ZONE MODELING SOFTWARE

    EPA Science Inventory

    The US Environmental Protection Agency has a history of developing plume models and providing technical assistance. The Visual Plumes model (VP) is a recent addition to the public-domain models available on the EPA Center for Exposure Assessment Modeling (CEAM) web page. The Wind...

  18. Model-Driven Development for PDS4 Software and Services

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Algermissen, S. S.; Cayanan, M. D.; Joyner, R. S.; Hardman, S. H.; Padams, J. H.

    2018-04-01

    PDS4 data product labels provide the information necessary for processing the referenced digital object. However, significantly more information is available in the PDS4 Information Model. This additional information is made available for use, by both software and services, to configure, promote resiliency, and improve interoperability.

  19. Allen Brain Atlas-Driven Visualizations: a web-based gene expression energy visualization tool.

    PubMed

    Zaldivar, Andrew; Krichmar, Jeffrey L

    2014-01-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers its own search engine and software for researchers to view its growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of its tools limit the number of genes and brain structures researchers can view at once. To complement that work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from Allen Mouse Brain Atlas in situ hybridization experiments. With this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analysis. In the future, we hope to extend ABADV across multiple data resources.
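
    The heat-map view ABADV produces can be sketched as a genes-by-structures matrix of expression energy values rendered with matplotlib. The genes, structures, and numbers below are made up for illustration, not ABA data.

    ```python
    # Render a toy genes-by-structures expression energy heat map.

    import numpy as np
    import matplotlib.pyplot as plt

    genes = ["Drd1", "Drd2", "Chat"]
    structures = ["Striatum", "Cortex", "Thalamus", "Cerebellum"]
    energy = np.array([[18.2, 4.1, 2.0, 0.3],
                       [21.5, 3.2, 1.1, 0.2],
                       [ 6.7, 1.9, 0.8, 0.1]])

    fig, ax = plt.subplots()
    im = ax.imshow(energy, cmap="viridis")
    ax.set_xticks(range(len(structures)), labels=structures, rotation=45, ha="right")
    ax.set_yticks(range(len(genes)), labels=genes)
    fig.colorbar(im, label="expression energy")
    fig.tight_layout()
    plt.show()
    ```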

  20. A Data-Driven Approach to Interactive Visualization of Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jun

    Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools to assist grid operators in performing mission-critical tasks and enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving the business practices in today's electric power industry. The investigation conducted here, however, has revealed that existing commercial power grid visualization tools rely heavily on human designers, hindering users' ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain pre-designed visual displays. This project proposes a data-driven approach to overcome these common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on the fly. It does not require a visualization design stage, completely eliminating or significantly reducing the cost of building and maintaining visual displays. The research and development (R&D) conducted in this project was divided into two phases. The first phase (Phase I & II) focused on developing data-driven techniques for visualization of the power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams and fuzzy-model-based rich data visualization for situational awareness. The R&D conducted during the second phase (Phase IIB) focused on enhancing the prototyped data-driven visualization tool based on the gathered requirements and use cases. The goal was to evolve the prototype developed during the first phase into a commercial-grade product. We use one of the identified application areas as an example to demonstrate how research results achieved in this project have been successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven to be promising for building the next generation of power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently leveraged by more than 60 utility organizations in North America and Europe.
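
    One concrete instance of the data-driven idea, sketched under the assumption of the networkx library: lay out a one-line diagram directly from bus/branch records with a force-directed algorithm, encoding bus load in node size, instead of hand-drawing the display. Bus names and loads are invented.

    ```python
    # Derive a one-line diagram layout from the grid data itself.

    import networkx as nx
    import matplotlib.pyplot as plt

    branches = [("Bus1", "Bus2"), ("Bus2", "Bus3"), ("Bus2", "Bus4"),
                ("Bus4", "Bus5"), ("Bus3", "Bus5")]
    load_mw = {"Bus1": 0, "Bus2": 120, "Bus3": 80, "Bus4": 45, "Bus5": 200}

    g = nx.Graph(branches)
    pos = nx.spring_layout(g, seed=7)            # layout computed from topology alone
    sizes = [100 + 3 * load_mw[b] for b in g]    # encode bus load in node size
    nx.draw(g, pos, with_labels=True, node_size=sizes, node_color="lightsteelblue")
    plt.show()
    ```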

  1. SWIFT MODELLER: a Java based GUI for molecular modeling.

    PubMed

    Mathur, Abhinav; Shankaracharya; Vidyarthi, Ambarish S

    2011-10-01

    MODELLER is command-line-driven software that requires tedious formatting of inputs and the writing of Python scripts, which many people are not comfortable with; visualization of the output also becomes cumbersome due to verbose files. This makes the whole software protocol complex and requires extensive study of the MODELLER manuals and tutorials. Here we describe SWIFT MODELLER, a GUI that automates the formatting, scripting and data extraction processes and presents them in an interactive way, making MODELLER much easier to use than before. The screens in SWIFT MODELLER are designed with homology modeling in mind, and their flow depicts its steps. It eliminates the formatting of inputs, scripting processes and analysis of verbose output files through automation, and makes pasting the target sequence the only prerequisite. Jmol (a 3D structure visualization tool) has been integrated into the GUI; it opens and displays the Protein Data Bank files created by MODELLER. All files required and created by the software are saved in a folder named after the work instance's date and time of execution. SWIFT MODELLER lowers the skill level required to use the software by automating many of the steps in the original protocol, saving an enormous amount of time per instance and making MODELLER very easy to work with.

  2. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for visualizing computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques, post-processing, tracking, and steering, are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanny, S; Bogue, J; Parsai, E

    Purpose: Potential collisions between the gantry head and the patient or table assembly are difficult to detect in most treatment planning systems. We have developed and implemented a novel software package for the representation of potential gantry collisions with the couch assembly at the time of treatment planning. Methods: Physical dimensions of the Varian Edge linear accelerator treatment head were measured and reproduced using the Visual Python display package. A script was developed for the Pinnacle treatment planning system to generate a file with the relevant couch, gantry, and isocenter positions for each beam in a planning trial. A Python program was developed to parse the information from the TPS and produce a representative model of the couch/gantry system. Using the model and the Visual Python libraries, a rendering window is generated for each beam that allows the planner to evaluate the possibility of a collision. Results: Comparison against heuristic methods and direct verification on the machine validated the collision model generated by the software. Encounters of <1 cm between the gantry treatment head and table were visualized as collisions in our virtual model. Visual windows were created depicting the angle of collision for each beam, including the anticipated table coordinates. Visual rendering of a 6-arc trial with multiple couch positions was completed in under 1 minute, with network bandwidth being the primary bottleneck. Conclusion: The developed software allows for quick examination of possible collisions during the treatment planning process and helps to prevent major collisions prior to plan approval. The software can easily be implemented on future planning systems due to the versatility and platform independence of the Python programming language. Further integration of the software with the treatment planning system will allow the possibility of patient-gantry collision detection for a range of treatment machines.
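
    A stripped-down version of the geometric test the abstract describes might look like the following: rotate the gantry head-face point about the isocenter and flag table clearances under 1 cm. The dimensions and coordinate conventions are illustrative assumptions, not Varian Edge specifications or the authors' model.

    ```python
    # Flag gantry-to-table clearances below a 1 cm threshold.

    import math

    HEAD_RADIUS_CM = 40.0   # isocenter-to-head-face distance (assumed)
    CLEARANCE_CM = 1.0      # encounters closer than this count as collisions

    def min_distance(gantry_deg, table_top_z_cm, table_half_width_cm):
        """Distance from the head-face center to the table slab in the transverse plane.

        Convention assumed: gantry 0 places the head directly above the isocenter,
        x lateral, z up, isocenter at the origin; the table occupies everything at
        or below its top plane within its half-width.
        """
        a = math.radians(gantry_deg)
        x, z = HEAD_RADIUS_CM * math.sin(a), HEAD_RADIUS_CM * math.cos(a)
        dx = max(abs(x) - table_half_width_cm, 0.0)
        dz = max(z - table_top_z_cm, 0.0)   # zero once at or below the table top
        return math.hypot(dx, dz)

    for angle in (0, 90, 130, 180):
        d = min_distance(angle, table_top_z_cm=-10.0, table_half_width_cm=25.0)
        flag = "COLLISION" if d < CLEARANCE_CM else "clear"
        print(f"gantry {angle:3d} deg: distance {d:5.1f} cm -> {flag}")
    ```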

  4. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

    Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules, for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK-related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer, including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.

  5. GlastCam: A Telemetry-Driven Spacecraft Visualization Tool

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric T.; Tsai, Dean

    2009-01-01

    Developed for the GLAST project, which is now the Fermi Gamma-ray Space Telescope, GlastCam software ingests telemetry from the Integrated Test and Operations System (ITOS) and generates four graphical displays of geometric properties in real time, allowing visual assessment of the attitude, configuration, position, and various cross-checks. Four windows are displayed: a "cam" window shows a 3D view of the satellite; a second window shows the standard position plot of the satellite on a Mercator map of the Earth; a third window displays star tracker fields of view, showing which stars are visible from the spacecraft in order to verify star tracking; and the fourth window depicts

  6. Towards Model-Driven End-User Development in CALL

    ERIC Educational Resources Information Center

    Farmer, Rod; Gruba, Paul

    2006-01-01

    The purpose of this article is to introduce end-user development (EUD) processes to the CALL software development community. EUD refers to the active participation of end-users, as non-professional developers, in the software development life cycle. Unlike formal software engineering approaches, the focus in EUD on means/ends development is…

  7. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  8. Development and case study of a science-based software platform to support policy making on air quality.

    PubMed

    Zhu, Yun; Lao, Yanwen; Jang, Carey; Lin, Chen-Jen; Xing, Jia; Wang, Shuxiao; Fu, Joshua S; Deng, Shuang; Xie, Junping; Long, Shicheng

    2015-01-01

    This article describes the development and implementation of a novel software platform that supports real-time, science-based policy making on air quality through a user-friendly interface. The software, RSM-VAT, uses a response surface modeling (RSM) methodology and serves as a visualization and analysis tool (VAT) for three-dimensional air quality data obtained by atmospheric models. The software features a number of powerful and intuitive data visualization functions for illustrating the complex nonlinear relationship between emission reductions and air quality benefits. A case study of the contiguous U.S. demonstrates that the enhanced RSM-VAT is capable of reproducing the air quality model results with Normalized Mean Bias <2% and assisting in air quality policy making in near real time. Copyright © 2014. Published by Elsevier B.V.
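
    The Normalized Mean Bias figure quoted above (<2%) is a standard model-evaluation metric and is easy to reproduce. The sketch below uses made-up numbers standing in for RSM predictions and full air-quality model output.

```python
import numpy as np

def normalized_mean_bias(pred, obs):
    """NMB = 100 * sum(pred - obs) / sum(obs), in percent."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return 100.0 * (pred - obs).sum() / obs.sum()

rsm = np.array([10.2, 11.9, 14.8])   # surrogate (RSM) predictions, invented
full = np.array([10.0, 12.0, 15.0])  # full model results, invented
print(f"NMB = {normalized_mean_bias(rsm, full):+.2f}%")
```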

  9. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
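
    Task 3 in the list above, finding paths from hazard sources to vulnerable entities over the extracted architecture graph, maps naturally onto ordinary graph search. A toy sketch (not NASA's tooling; component names invented):

```python
import networkx as nx

arch = nx.DiGraph()                      # extracted components and connections
arch.add_edges_from([
    ("thruster_valve", "prop_controller"),
    ("prop_controller", "flight_software"),
    ("flight_software", "crew_display"),
])

# Enumerate candidate hazard-propagation scenarios for integration testing.
for path in nx.all_simple_paths(arch, "thruster_valve", "crew_display"):
    print(" -> ".join(path))
```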

  10. ROCKETSHIP: a flexible and modular software tool for the planning, processing and analysis of dynamic MRI studies.

    PubMed

    Barnes, Samuel R; Ng, Thomas S C; Santa-Maria, Naomi; Montagne, Axel; Zlokovic, Berislav V; Jacobs, Russell E

    2015-06-16

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software to provide reliable fits using multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP for both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. A DCE-MRI software suite was implemented and tested using simulations. Its applicability to both preclinical and clinical datasets is shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI. A public release of ROCKETSHIP is available at https://github.com/petmri/ROCKETSHIP.
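
    ROCKETSHIP itself is MATLAB, but the "data-driven nested model analysis" it performs can be illustrated with the classic F-test for nested models; the residual sums of squares and parameter counts below are invented.

```python
from scipy import stats

def f_test_nested(rss_simple, p_simple, rss_full, p_full, n):
    """F-statistic for a simple kinetic model nested within a fuller one."""
    f = ((rss_simple - rss_full) / (p_full - p_simple)) / (rss_full / (n - p_full))
    p_value = stats.f.sf(f, p_full - p_simple, n - p_full)
    return f, p_value

# e.g. Tofts (2 params) vs. extended Tofts (3 params) over 60 time points
f, p = f_test_nested(rss_simple=4.1, p_simple=2, rss_full=3.2, p_full=3, n=60)
print(f"F = {f:.1f}, p = {p:.4f} -> adopt the fuller model if p < 0.05")
```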

  11. The Cooperate Assistive Teamwork Environment for Software Description Languages.

    PubMed

    Groenda, Henning; Seifermann, Stephan; Müller, Karin; Jaworek, Gerhard

    2015-01-01

    Versatile description languages such as the Unified Modeling Language (UML) are commonly used in software engineering across different application domains in theory and practice. They often use graphical notations and leverage visual memory for expressing complex relations. Those notations are hard to access for people with visual impairment and impede their smooth inclusion in an engineering team. Existing approaches provide textual notations but require manual synchronization between the notations. This paper presents requirements for an accessible and language-aware team work environment as well as our plan for the assistive implementation of Cooperate. An industrial software engineering team consisting of people with and without visual impairment will evaluate the implementation.

  12. LC-MS Data Processing with MAVEN: A Metabolomic Analysis and Visualization Engine

    PubMed Central

    Clasquin, Michelle F.; Melamud, Eugene; Rabinowitz, Joshua D.

    2014-01-01

    MAVEN is an open-source software program for interactive processing of LC-MS-based metabolomics data. MAVEN enables rapid and reliable metabolite quantitation from multiple reaction monitoring data or high-resolution full-scan mass spectrometry data. It automatically detects and reports peak intensities for isotope-labeled metabolites. Menu-driven, click-based navigation allows visualization of raw and analyzed data. Here we provide a User Guide for MAVEN. Step-by-step instructions are provided for data import, peak alignment across samples, identification of metabolites that differ strongly between biological conditions, quantitation and visualization of isotope-labeling patterns, and export of tables of metabolite-specific peak intensities. Together, these instructions describe a workflow that allows efficient processing of raw LC-MS data into a form ready for biological analysis. PMID:22389014

  13. LC-MS data processing with MAVEN: a metabolomic analysis and visualization engine.

    PubMed

    Clasquin, Michelle F; Melamud, Eugene; Rabinowitz, Joshua D

    2012-03-01

    MAVEN is an open-source software program for interactive processing of LC-MS-based metabolomics data. MAVEN enables rapid and reliable metabolite quantitation from multiple reaction monitoring data or high-resolution full-scan mass spectrometry data. It automatically detects and reports peak intensities for isotope-labeled metabolites. Menu-driven, click-based navigation allows visualization of raw and analyzed data. Here we provide a User Guide for MAVEN. Step-by-step instructions are provided for data import, peak alignment across samples, identification of metabolites that differ strongly between biological conditions, quantitation and visualization of isotope-labeling patterns, and export of tables of metabolite-specific peak intensities. Together, these instructions describe a workflow that allows efficient processing of raw LC-MS data into a form ready for biological analysis.

  14. Instrumentation: Software-Driven Instrumentation: The New Wave.

    ERIC Educational Resources Information Center

    Salit, M. L.; Parsons, M. L.

    1985-01-01

    Software-driven instrumentation makes measurements that demand a computer as an integral part of either control, data acquisition, or data reduction. The structure of such instrumentation, hardware requirements, and software requirements are discussed. Examples of software-driven instrumentation (such as wavelength-modulated continuum source…

  15. R&D Project on Algebra Software Seen to Show Promise

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2007-01-01

    Computer software that shows students visual models of mathematical concepts--and lets them manipulate those models by doing math--has a certain intuitive appeal. Now, recent research on SimCalc Mathworlds, one of the pioneering examples of such software, is providing some of the best evidence so far that the approach can lead to gains in student…

  16. The Effects of Solid Modeling and Visualization on Technical Problem Solving

    ERIC Educational Resources Information Center

    Koch, Douglas

    2011-01-01

    The purpose of this study was to determine whether or not the use of solid modeling software increases participants' success in solving a specified technical problem and how visualization affects their ability to solve a technical problem. Specifically, the study sought to determine if (a) students' visualization skills affect their problem…

  17. Simulation of a Canard in Fluid Flow Driven by a Piezoelectric Beam with a Software Control Loop

    DTIC Science & Technology

    2014-04-01

    The canard is actuated by a piezoelectric beam that bends as voltage is applied. The voltage is controlled by a software subroutine that measures… Keywords: dynamic system modeling, co-simulation, simulation, Abaqus, finite element analysis (FEA), finite element method (FEM), computational…

  18. A Cloud Based Framework For Monitoring And Predicting Subsurface System Behaviour

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Rodzianko, A.; Johnson, D. V.; Soltanian, M. R.; Dwivedi, D.; Dafflon, B.; Tran, A. P.; Versteeg, O. J.

    2015-12-01

    Subsurface system behavior is driven and controlled by the interplay of physical, chemical, and biological processes which occur at multiple temporal and spatial scales. Capabilities to monitor, understand, and predict this behavior in an effective and timely manner are needed both for scientific purposes and for effective subsurface system management. Such capabilities require three elements: models, data, and an enabling cyberinfrastructure that allows users to apply these models and data effectively. Under a DOE Office of Science funded STTR award, Subsurface Insights and LBNL have designed and implemented a cloud-based predictive assimilation framework (PAF) which automatically ingests, quality-controls, and stores heterogeneous physical and chemical subsurface data and processes these data using different inversion and modeling codes to provide information on the current state and evolution of subsurface systems. PAF is implemented as a modular cloud-based software application with five components: (1) data acquisition, (2) data management, (3) data assimilation and processing, (4) visualization and result delivery, and (5) orchestration. Server-side, PAF uses ZF2 (a PHP web application framework) and Python, with both open-source (ODM2) and in-house developed data models. Client-side, PAF uses CSS and JS to allow for interactive data visualization and analysis. Client-side modularity (which allows for a responsive interface) is achieved by implementing each core capability of PAF (such as data visualization, user configuration and control, electrical geophysical monitoring, and email/SMS alerts on data streams) as a SPA (Single Page Application). One of the recent enhancements is the full integration of a number of flow and mass transport and parameter estimation codes (e.g., MODFLOW, MT3DMS, PHT3D, TOUGH, PFLOTRAN) in this framework. This integration allows for autonomous and user-controlled modeling of hydrological and geochemical processes. In our presentation we will discuss our software architecture and present the results of using these codes, and the overall performance of our framework, using hydrological, geochemical, and geophysical data from the LBNL SFA2 Rifle field site.

  19. Minerva: User-Centered Science Operations Software Capability for Future Human Exploration

    NASA Technical Reports Server (NTRS)

    Deans, Matthew; Marquez, Jessica J.; Cohen, Tamar; Miller, Matthew J.; Deliz, Ivonne; Hillenius, Steven; Hoffman, Jeffrey; Lee, Yeon Jin; Lees, David; Norheim, Johannes; hide

    2017-01-01

    In June of 2016, the Biologic Analog Science Associated with Lava Terrains (BASALT) research project conducted its first field deployment, which we call BASALT-1. BASALT-1 consisted of a science-driven field campaign in a volcanic field in Idaho as a simulated human mission to Mars. Scientists and mission operators were provided a suite of ground software tools that we refer to collectively as Minerva to carry out their work. Minerva provides capabilities for traverse planning and route optimization, timeline generation and display, procedure management, execution monitoring, data archiving, visualization, and search. This paper describes the Minerva architecture, constituent components, use cases, and some preliminary findings from the BASALT-1 campaign.

  20. Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development

    NASA Astrophysics Data System (ADS)

    Jasiak, M. E.; Truslove, I.; Savoie, M.

    2013-12-01

    In scientific research, software development exists fundamentally for the results it produces; the core research must take focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with creating software in a sustainable fashion: short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines while building in the necessary support structure for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery, and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as on promoting direct interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how it addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile- and lean-influenced model, and the current challenges faced by the organization.

  1. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.

    PubMed

    Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel

    2018-02-20

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he tends to neglect or downplay the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contracts to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
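
    The paper's schedulability analysis is novel and specific to CPAL, but the flavor of such a check can be conveyed with the classic Liu and Layland rate-monotonic utilization bound, shown here purely as an illustrative stand-in.

```python
def rm_schedulable(tasks):
    """Sufficient test: tasks as (wcet, period) pairs, rate-monotonic priority."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)       # Liu & Layland utilization bound
    return u, bound, u <= bound

u, bound, ok = rm_schedulable([(1, 10), (2, 20), (3, 40)])
print(f"U = {u:.3f} vs bound {bound:.3f}: schedulable = {ok}")
```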

  2. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    PubMed Central

    Navet, Nicolas; Havet, Lionel

    2018-01-01

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he tends to neglect or downplay the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contracts to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489

  3. The Scientific Uplink and User Support System for SIRTF

    NASA Astrophysics Data System (ADS)

    Heinrichsen, I.; Chavez, J.; Hartley, B.; Mei, Y.; Potts, S.; Roby, T.; Turek, G.; Valjavec, E.; Wu, X.

    The Space Infrared Telescope Facility (SIRTF) is one of NASA's Great Observatory missions, scheduled for launch in 2001. As such its ground segment design is driven by the requirement to provide strong support for the entire astronomical community starting with the call for Legacy Proposals in early 2000. In this contribution, we present the astronomical user interface and the design of the server software that comprises the Scientific Uplink System for SIRTF. The software architecture is split into three major parts: A front-end Java application deployed to the astronomical community providing the capabilities to visualize and edit proposals and the associated lists of observations. This observer toolkit provides templates to define all parameters necessary to carry out the required observations. A specialized version of this software, based on the same overall architecture, is used internal to the SIRTF Science Center to prepare calibration and engineering observations. A Weblogic (TM) based middleware component brokers the transactions with the servers, astronomical image and catalog sources as well as the SIRTF operational databases. Several server systems perform the necessary computations, to obtain resource estimates, target visibilities and to access the instrument models for signal to noise calculations. The same server software is used internally at a later stage to derive the detailed command sequences needed by the SIRTF instruments and spacecraft to execute a given observation.

  4. Dependability modeling and assessment in UML-based software development.

    PubMed

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.
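
    As a toy stand-in for steps (c) and (d), the kind of dependability number a solver hands back to designers can be as simple as steady-state availability for a two-state up/down model; the DSPN analysis in the paper is of course far richer.

```python
def steady_state_availability(mttf_hours, mttr_hours):
    """Fraction of time a repairable two-state system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Invented figures: fail every 2000 h on average, repair in 4 h.
print(f"availability = {steady_state_availability(2000.0, 4.0):.5f}")
```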

  5. Dependability Modeling and Assessment in UML-Based Software Development

    PubMed Central

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C.

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results. PMID:22988428

  6. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    NASA Technical Reports Server (NTRS)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
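
    The essence of the data-driven approach, behavior reconfigured through data rather than recompilation, can be sketched in a few lines. Mode names and triggers below are invented, not Orion GN&C identifiers.

```python
# Automated sequencing driven entirely by a data table.
SEQUENCE = [
    {"mode": "coast", "next": "burn",  "trigger": "tig_reached"},
    {"mode": "burn",  "next": "coast", "trigger": "delta_v_met"},
]

def next_mode(current, event, table=SEQUENCE):
    for row in table:
        if row["mode"] == current and row["trigger"] == event:
            return row["next"]
    return current                      # no transition defined: hold mode

print(next_mode("coast", "tig_reached"))  # -> "burn", no recompilation needed
```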

  7. The Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

    CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime, libIGCM (library for IPSL Global Climate Modeling group), has recently been enhanced so as to support hitherto impossible real-time use cases such as simulation monitoring, data publication, metrics collection, simulation control, and visualization. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queuing Protocol) based event-driven asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to database(s); launching rollback jobs upon simulation failure; notifying downstream applications; and automating visualization pipelines. We will describe and/or demonstrate the platform's technical implementation, inherent ease of scalability, inherent adaptiveness with respect to supervising simulations, and web portal receiving simulation notifications in real time.
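
    A minimal publish in the spirit of libIGCM dispatching a monitoring message to Prodiguer might look like the following, using the pika AMQP client; the broker URL, exchange name, routing key, and message schema are all assumptions, not the project's actual conventions.

```python
import json
import pika

conn = pika.BlockingConnection(pika.URLParameters("amqp://localhost/"))
ch = conn.channel()
ch.exchange_declare(exchange="prodiguer", exchange_type="topic", durable=True)

msg = {"simulation": "IPSLCM6-hist-r1", "event": "timestep_complete", "step": 1200}
ch.basic_publish(
    exchange="prodiguer",
    routing_key="simulation.monitoring",
    body=json.dumps(msg),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
conn.close()
```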

  8. Integrated Modeling Environment

    NASA Technical Reports Server (NTRS)

    Mosier, Gary; Stone, Paul; Holtery, Christopher

    2006-01-01

    The Integrated Modeling Environment (IME) is a software system that establishes a centralized Web-based interface for integrating people (who may be geographically dispersed), processes, and data involved in a common engineering project. The IME includes software tools for life-cycle management, configuration management, visualization, and collaboration.

  9. Software tool for data mining and its applications

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Ye, Chenzhou; Chen, Nianyi

    2002-03-01

    A software tool for data mining is introduced, which integrates pattern recognition (PCA, Fisher, clustering, hyperenvelop, regression), artificial intelligence (knowledge representation, decision trees), statistical learning (rough set, support vector machine), and computational intelligence (neural network, genetic algorithm, fuzzy systems). It consists of nine function models: pattern recognition, decision trees, association rule, fuzzy rule, neural network, genetic algorithm, Hyper Envelop, support vector machine, and visualization. The principle and knowledge representation of some function models of data mining are described. The tool is implemented in Visual C++ under Windows 2000. Nonmonotonicity in data mining is dealt with by concept hierarchy and layered mining. The tool has been applied satisfactorily in the prediction of regularities of the formation of ternary intermetallic compounds in alloy systems and in the diagnosis of brain glioma.
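
    The original tool is Visual C++, but the combination of components it describes, e.g., PCA feeding a decision tree, is easy to convey in a modern Python analogue:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
model = make_pipeline(PCA(n_components=2), DecisionTreeClassifier(max_depth=3))
print(f"mean CV accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```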

  10. Consistent model driven architecture

    NASA Astrophysics Data System (ADS)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g., a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  11. TimeBench: a data model and software library for visual analytics of time-oriented data.

    PubMed

    Rind, Alexander; Lammarsch, Tim; Aigner, Wolfgang; Alsallakh, Bilal; Miksch, Silvia

    2013-12-01

    Time-oriented data play an essential role in many Visual Analytics scenarios such as extracting medical insights from collections of electronic health records or identifying emerging problems and vulnerabilities in network traffic. However, many software libraries for Visual Analytics treat time as a flat numerical data type and insufficiently tackle the complexity of the time domain such as calendar granularities and intervals. Therefore, developers of advanced Visual Analytics designs need to implement temporal foundations in their application code over and over again. We present TimeBench, a software library that provides foundational data structures and algorithms for time-oriented data in Visual Analytics. Its expressiveness and developer accessibility have been evaluated through application examples demonstrating a variety of challenges with time-oriented data and long-term developer studies conducted in the scope of research and student projects.

  12. An Introductory Classroom Exercise on Protein Molecular Model Visualization and Detailed Analysis of Protein-Ligand Binding

    ERIC Educational Resources Information Center

    Poeylaut-Palena, Andres A.; de los Angeles Laborde, Maria

    2013-01-01

    A learning module for molecular level analysis of protein structure and ligand/drug interaction through the visualization of X-ray diffraction is presented. Using DeepView as molecular model visualization software, students learn about the general concepts of protein structure. This Biochemistry classroom exercise is designed to be carried out by…

  13. Evaluation of Interactive Visualization on Mobile Computing Platforms for Selection of Deep Brain Stimulation Parameters

    PubMed Central

    Butson, Christopher R.; Tamm, Georg; Jain, Sanket; Fogal, Thomas; Krüger, Jens

    2012-01-01

    In recent years there has been significant growth in the use of patient-specific models to predict the effects of neuromodulation therapies such as deep brain stimulation (DBS). However, translating these models from a research environment to the everyday clinical workflow has been a challenge, primarily due to the complexity of the models and the expertise required in specialized visualization software. In this paper, we deploy the interactive visualization system ImageVis3D Mobile, which has been designed for mobile computing devices such as the iPhone or iPad, in an evaluation environment to visualize models of Parkinson’s disease patients who received DBS therapy. Selection of DBS settings is a significant clinical challenge that requires repeated revisions to achieve optimal therapeutic response, and is often performed without any visual representation of the stimulation system in the patient. We used ImageVis3D Mobile to provide models to movement disorders clinicians and asked them to use the software to determine: 1) which of the four DBS electrode contacts they would select for therapy; and 2) what stimulation settings they would choose. We compared the stimulation protocol chosen from the software versus the stimulation protocol that was chosen via clinical practice (independently of the study). Lastly, we compared the amount of time required to reach these settings using the software versus the time required through standard practice. We found that the stimulation settings chosen using ImageVis3D Mobile were similar to those used in standard of care, but were selected in drastically less time. We show how our visualization system, available directly at the point of care on a device familiar to the clinician, can be used to guide clinical decision making for selection of DBS settings. In our view, the positive impact of the system could also translate to areas other than DBS. PMID:22450824

  14. Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms

    PubMed Central

    Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju

    2010-01-01

    The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data, calculated first in the CFX software, were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in workbench software. The results of simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022
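
    The force/displacement exchange the abstract walks through (CFX computes wall forces, ANSYS returns mesh displacements, repeat to convergence) is a fixed-point coupling iteration. A one-degree-of-freedom toy version, with invented stand-in "solvers":

```python
def fluid_force(displacement, p0=100.0, k_f=5.0):
    return p0 - k_f * displacement       # "CFX": load relaxes as the wall yields

def wall_displacement(force, k_s=50.0):
    return force / k_s                   # "ANSYS": linear-elastic wall

d = 0.0
for it in range(50):
    d_new = wall_displacement(fluid_force(d))
    if abs(d_new - d) < 1e-8:            # coupling iteration converged
        break
    d = 0.5 * d + 0.5 * d_new            # under-relaxation for stability
print(f"converged after {it + 1} iterations: d = {d:.4f}")
```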

  15. Hemodynamics model of fluid-solid interaction in internal carotid artery aneurysms.

    PubMed

    Bai-Nan, Xu; Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju

    2011-01-01

    The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data, calculated first in the CFX software, were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in workbench software. The results of simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography.

  16. The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems

    DTIC Science & Technology

    2014-01-01

    …software architecture and design, which is a key part of the knowledge-based economy… (DSTO-TR-2936)

  17. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effects Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.

  18. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
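
    The quantity being volume-rendered, the Laplacian of the electronic charge density, is straightforward to compute on a grid; the Gaussian "density" below is a toy stand-in for a real electron density.

```python
import numpy as np

x = np.linspace(-3, 3, 64)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))      # toy charge density

h = x[1] - x[0]
lap = sum(np.gradient(np.gradient(rho, h, axis=a), h, axis=a) for a in range(3))
# Regions of negative Laplacian mark local charge concentration, the
# kind of subatomic feature highlighted on the hyperwall.
print(lap.min(), lap.max())
```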

  19. Automation for System Safety Analysis

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  20. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  1. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
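
    The synaptic dynamics singled out above, a conductance kicked by each input spike and decaying with a time constant so that charge is injected gradually, amount to a few update equations. An illustrative scalar version with invented constants:

```python
import numpy as np

dt, tau_syn, tau_m = 0.1, 5.0, 20.0        # ms; illustrative constants
decay_s, decay_m = np.exp(-dt / tau_syn), np.exp(-dt / tau_m)

g, v = 0.0, 0.0
spikes_in = {50, 120, 125, 130}            # presynaptic spike steps (invented)
for step in range(400):
    if step in spikes_in:
        g += 1.0                           # input-driven conductance kick
    g *= decay_s                           # gradual decay => gradual charge injection
    v = v * decay_m + g * dt               # leaky membrane integrates g
    if v > 4.0:                            # arbitrary fire-and-reset threshold
        print(f"output spike at step {step}")
        v = 0.0
```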

  2. A miniature cable-driven robot for crawling on the heart.

    PubMed

    Patronik, N A; Zenati, M A; Riviere, C N

    2005-01-01

    This document describes the design and preliminary testing of a cable-driven robot for the purpose of traveling on the surface of the beating heart to administer therapy. This methodology obviates mechanical stabilization and lung deflation, which are typically required during minimally invasive cardiac surgery. Previous versions of the robot have been remotely actuated through push-pull wires, while visual feedback was provided by fiber optic transmission. Although these early models were able to perform locomotion in vivo on porcine hearts, the stiffness of the wire-driven transmission and fiber optic camera limited the mobility of the robots. The new prototype described in this document is actuated by two antagonistic cable pairs, and contains a color CCD camera located in the front section of the device. These modifications have resulted in superior mobility and visual feedback. The cable-driven prototype has successfully demonstrated prehension, locomotion, and tissue dye injection during in vitro testing with a poultry model.

  3. GIS based model interfacing : incorporating existing software and new techniques into a streamlined interface package

    DOT National Transportation Integrated Search

    2000-01-01

    The ability to visualize data has grown immensely as the speed and functionality of Geographic Information Systems (GIS) have increased. Now, with modeling software and GIS, planners are able to view a prediction of the future traffic demands in thei...

  4. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2012-01-01

    Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By their own testimony, many developers find TDD addictive after only a few days of exposure and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
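
    The alternation TDD prescribes, test first, then code, looks like this in miniature (pytest shown here; pFUnit plays the analogous role for parallel Fortran). The Magnus formula and tolerances are illustrative choices, not drawn from the talk.

```python
import math
import pytest

def saturation_vapor_pressure(temp_c):
    """Magnus approximation, hPa; the 'production' code under test."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def test_svp_at_zero_celsius():
    # Written *before* the implementation: pins the known value at 0 C.
    assert saturation_vapor_pressure(0.0) == pytest.approx(6.112)

def test_svp_is_monotonic_in_temperature():
    assert saturation_vapor_pressure(20.0) > saturation_vapor_pressure(10.0)
```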

  5. Boxes of Model Building and Visualization.

    PubMed

    Turk, Dušan

    2017-01-01

    Macromolecular crystallography and electron microscopy (single-particle and in situ tomography) are merging into a single approach used by the two coalescing scientific communities. The merger is a consequence of technical developments that enabled determination of atomic structures of macromolecules by electron microscopy. Technological progress in experimental methods of macromolecular structure determination, computer hardware, and software changed and continues to change the nature of model building and visualization of molecular structures. However, the increase in automation and the availability of structure validation are reducing interactive manual model building to fiddling with details. On the other hand, interactive modeling tools increasingly rely on search and complex energy calculation procedures, which make manually driven changes in geometry increasingly powerful and at the same time less demanding. Thus, the need for accurate manual positioning of a model is decreasing. The user's push only needs to be sufficient to bring the model within the increasing convergence radius of the computing tools. It seems that we can now determine an average single structure better than ever. The tools work better, the demands on the human brain are lowered, and the frontier of intellectual and scientific challenges has moved on. The quest for resolution of new challenges requires out-of-the-box thinking. A few issues such as model bias and correctness of structure, ongoing developments in parameters defining geometric restraints, limitations of the ideal average single structure, and limitations of Bragg spot data are discussed here, together with the challenges that lie ahead.

  6. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  7. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  8. Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.

    2013-01-01

    ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observation system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide quantitative assessment of the accuracy of assimilative modeling with the interested observation system. Other observation systems besides those based on GNSS are also possible to analyze. The system is composed of a suite of software that combines the GAIM, including a 4D first-principles ionospheric model and data assimilation modules, an Internal Reference Ionosphere (IRI) model that has been developed by international ionospheric research communities, observation simulator, visualization software, and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++) that includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.

  9. Building simple multiscale visualizations of outcrop geology using virtual reality modeling language (VRML)

    NASA Astrophysics Data System (ADS)

    Thurmond, John B.; Drzewiecki, Peter A.; Xu, Xueming

    2005-08-01

    Geological data collected from outcrop are inherently three-dimensional (3D) and span a variety of scales, from the megascopic to the microscopic. This presents challenges in both interpreting and communicating observations. The Virtual Reality Modeling Language provides an easy way for geoscientists to construct complex visualizations that can be viewed with free software. Field data in tabular form can be used to generate hierarchical multi-scale visualizations of outcrops, which can convey the complex relationships between a variety of data types simultaneously. An example from carbonate mud-mounds in southeastern New Mexico illustrates the embedding of three orders of magnitude of observation into a single visualization, for the purpose of interpreting depositional facies relationships in three dimensions. This type of raw data visualization can be built without software tools, yet is incredibly useful for interpreting and communicating data. Even simple visualizations can aid in the interpretation of complex 3D relationships that are frequently encountered in the geosciences.
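
    Generating such a visualization really can be done without specialized tools: VRML97 is plain text, so tabular field data can be turned into a world file with a short script. The coordinates below are invented sample sites, not data from the New Mexico study.

```python
rows = [(0.0, 0.0, 0.0), (2.0, 0.5, 1.0), (4.0, 1.2, 0.5)]  # x, y, z per row

with open("outcrop.wrl", "w") as f:
    f.write("#VRML V2.0 utf8\n")
    for x, y, z in rows:
        f.write(
            "Transform { translation %g %g %g children [ "
            "Shape { appearance Appearance { material Material "
            "{ diffuseColor 1 0 0 } } geometry Sphere { radius 0.2 } } ] }\n"
            % (x, y, z)
        )
# The resulting outcrop.wrl opens in any VRML97-capable viewer.
```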

  10. AstroBlend: An astrophysical visualization package for Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2016-04-01

    The rapid growth in scale and complexity of both computational and observational astrophysics over the past decade necessitates efficient and intuitive methods for examining and visualizing large datasets. Here, I present AstroBlend, an open-source Python library for use within the three dimensional modeling software, Blender. While Blender has been a popular open-source software among animators and visual effects artists, in recent years it has also become a tool for visualizing astrophysical datasets. AstroBlend combines the three dimensional capabilities of Blender with the analysis tools of the widely used astrophysical toolset, yt, to afford both computational and observational astrophysicists the ability to simultaneously analyze their data and create informative and appealing visualizations. The introduction of this package includes a description of features, work flow, and various example visualizations. A website - www.astroblend.com - has been developed which includes tutorials, and a gallery of example images and movies, along with links to downloadable data, three dimensional artistic models, and various other resources.

  11. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
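
    For reference, the aggregate resolution implied by this description, assuming the 3 x 4 array means 3 rows by 4 columns and ignoring bezels, works out as follows:

        # Aggregate resolution of the tiled wall described above,
        # assuming 3 rows x 4 columns of 1920x1080 monitors (bezels ignored).
        rows, cols = 3, 4
        w, h = 1920, 1080
        total_w, total_h = cols * w, rows * h
        print(total_w, "x", total_h, "=", total_w * total_h / 1e6, "Mpixels")
        # -> 7680 x 3240, roughly 24.9 Mpixels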

  12. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    NASA Astrophysics Data System (ADS)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. SAGE provides a common environment, or framework, enabling its users to access, display, and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.

  13. Are Earth System model software engineering practices fit for purpose? A case study.

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Johns, T. C.

    2009-04-01

    We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques, driven strongly by scientific research goals, have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.

  14. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    System Visualization Tool (SVT) computer program developed to provide systems engineers with means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers with powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within network. Written in Turbo Pascal(R), version 5.0.

  15. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of its formulation and algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
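
    A minimal sketch of the plain, unsupervised NMF topic modeling that UTOPIAN's semi-supervised, interactive formulation builds on, using scikit-learn rather than UTOPIAN itself:

        # Basic NMF topic modeling with scikit-learn (hypothetical toy corpus);
        # UTOPIAN extends this core idea with interactive, semi-supervised terms.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import NMF

        docs = [
            "interactive visualization of topic models",
            "matrix factorization for document clustering",
            "user feedback steers topic modeling results",
        ]
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(docs)                  # document-term matrix

        nmf = NMF(n_components=2, init="nndsvd", random_state=0)
        W = nmf.fit_transform(X)                     # document-topic weights
        H = nmf.components_                          # topic-term weights

        terms = vec.get_feature_names_out()
        for k, topic in enumerate(H):
            top = topic.argsort()[::-1][:3]          # top terms per topic
            print(f"topic {k}:", [terms[i] for i in top])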

  16. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

    PubMed Central

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-01-01

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
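
    A toy sketch of the hypothesis-driven step described above, with synthetic saliency regressors and a synthetic voxel time series (a real fMRI analysis would also convolve the regressors with a hemodynamic response function):

        # Regress a voxel time series onto model-derived stimulus statistics
        # (GLM via ordinary least squares). All data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        visual_saliency = rng.random(n)        # model-extracted stimulus statistic
        auditory_saliency = rng.random(n)

        # Design matrix: intercept plus the two saliency regressors
        X = np.column_stack([np.ones(n), visual_saliency, auditory_saliency])
        voxel = 0.8 * visual_saliency + rng.normal(0, 0.1, n)   # synthetic BOLD

        beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
        print("estimated betas:", beta)   # visual-saliency effect is near 0.8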

  17. Software Product Lines: Report of the 2009 U.S. Army Software Product Line Workshop

    DTIC Science & Technology

    2009-04-01

    record system was fielded in 2008. One early challenge for Overwatch was coming up with a funding model that would support core asset development (a ... match the organizational model to the funding model. Product line architecture is essential. Address product line requirements up front. Put processes ... when trying to move from a customer-driven, product-specific funding model to one in which at least some of the funds are allocated to the creation and

  18. Visualization of Stereoscopic Anatomic Models of the Paranasal Sinuses and Cervical Vertebrae from the Surgical and Procedural Perspective

    ERIC Educational Resources Information Center

    Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei

    2017-01-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…

  19. Modeling and Visualization Process of the Curve of Pen Point by GeoGebra

    ERIC Educational Resources Information Center

    Aktümen, Muharem; Horzum, Tugba; Ceylan, Tuba

    2013-01-01

    This study describes the mathematical construction of a real-life model by means of parametric equations, as well as the two- and three-dimensional visualization of the model using the software GeoGebra. The model was initially considered as "determining the parametric equation of the curve formed on a plane by the point of a pen, positioned…

  20. Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.

    PubMed

    Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne

    2015-09-21

    Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Applying the metro map to software development management

    NASA Astrophysics Data System (ADS)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking of deviations from the plan, analysis of patterns of failure detection and correction, overall assessment of change management policies, and estimation of product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper shows the implementation of the tool in JavaFX with data from a real project, along with the results of testing the tool with users attempting several information retrieval tasks. An analysis of user response time and efficiency concludes the paper; the utility of the tool was positively evaluated.

  2. TopoDrive and ParticleFlow--Two Computer Models for Simulation and Visualization of Ground-Water Flow and Transport of Fluid Particles in Two Dimensions

    USGS Publications Warehouse

    Hsieh, Paul A.

    2001-01-01

    This report serves as a user's guide for two computer models: TopoDrive and ParticleFlow. These two-dimensional models are designed to simulate two ground-water processes: topography-driven flow and advective transport of fluid particles. To simulate topography-driven flow, the user may specify the shape of the water table, which bounds the top of the vertical flow section. To simulate transport of fluid particles, the model domain is a rectangle with overall flow from left to right. In both cases, the flow is under steady state, and the distribution of hydraulic conductivity may be specified by the user. The models compute hydraulic head, ground-water flow paths, and the movement of fluid particles. An interactive visual interface enables the user to easily and quickly explore model behavior, and thereby better understand ground-water flow processes. In this regard, TopoDrive and ParticleFlow are not intended to be comprehensive modeling tools, but are designed for modeling at the exploratory or conceptual level, for visual demonstration, and for educational purposes.
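
    A minimal sketch of the kind of computation behind topography-driven flow (not the TopoDrive code itself): steady-state head from Laplace's equation, with a user-specified water table as the top boundary, relaxed by Jacobi iteration:

        # Steady-state head in a vertical section; the water-table shape is the
        # fixed top boundary, other boundaries are no-flow. Illustrative only.
        import numpy as np

        nz, nx = 40, 80
        head = np.zeros((nz, nx))
        x = np.linspace(0, 1, nx)
        head[0, :] = 1.0 + 0.2 * np.sin(2 * np.pi * x)   # water-table shape

        for _ in range(5000):
            head[1:-1, 1:-1] = 0.25 * (head[:-2, 1:-1] + head[2:, 1:-1]
                                       + head[1:-1, :-2] + head[1:-1, 2:])
            head[-1, :] = head[-2, :]        # no-flow bottom
            head[1:, 0] = head[1:, 1]        # no-flow sides
            head[1:, -1] = head[1:, -2]

        # Darcy velocities (uniform conductivity) for particle tracking
        vz, vx = np.gradient(-head)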

  3. Parallel Software Model Checking

    DTIC Science & Technology

    2015-01-08

    checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR: Generalized Property Directed Reachability (GPDR) is an algorithm for solving HORN-SMT reachability ...

  4. General Mode Scanning Probe Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somnath, Suhas; Jesse, Stephen

    A critical part of SPM measurements is the information transfer from the probe-sample junction to the measurement system. Current information transfer methods heavily compress the information-rich data stream by averaging the data over a time interval, or via heterodyne detection approaches such as lock-in amplifiers and phase-locked loops. As a consequence, highly valuable information at the sub-microsecond time scales, or information from frequencies outside the measurement band, is lost. We have developed a fundamentally new approach called General Mode (G-mode), where we can capture the complete information stream from the detectors in the microscope. The availability of the complete information allows the microscope operator to analyze the data via information-theory analysis or comprehensive physical models. Furthermore, the complete data stream enables advanced data-driven filtering algorithms, multi-resolution imaging, ultrafast spectroscopic imaging, spatial mapping of multidimensional variability in material properties, etc. Though we applied this approach to scanning probe microscopy, the general philosophy of G-mode can be applied to many other modes of microscopy. G-mode data is captured by completely custom software written in LabVIEW and Matlab. The software generates the waveforms to electrically, thermally, or mechanically excite the SPM probe. It handles real-time communications with the microscope software for operations such as moving the SPM probe position and also controls other instrumentation hardware. The software also controls multiple variants of high-speed data acquisition cards to excite the SPM probe with the excitation waveform and simultaneously measure multiple channels of information from the microscope detectors at sampling rates of 1-100 MHz. The software also saves the raw data to the computer and allows the microscope operator to visualize processed or filtered data during the experiment. The software provides all these features while offering a user-friendly interface.
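
    As an illustration of the post-acquisition, data-driven filtering that retaining the full detector stream makes possible (synthetic data; this is not the G-mode software itself):

        # FFT-based band selection on a raw, fully sampled detector stream:
        # keep only the band around a hypothetical cantilever drive frequency.
        import numpy as np

        rng = np.random.default_rng(5)
        fs = 4_000_000                       # 4 MHz sampling rate
        t = np.arange(200_000) / fs
        drive = 60_000.0                     # drive frequency (Hz), illustrative
        raw = np.sin(2 * np.pi * drive * t) + 0.5 * rng.standard_normal(t.size)

        spec = np.fft.rfft(raw)
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        band = (freqs > 50_000) & (freqs < 70_000)   # band around the drive
        spec[~band] = 0.0
        filtered = np.fft.irfft(spec, n=t.size)      # denoised response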

  5. Model-Driven Useware Engineering

    NASA Astrophysics Data System (ADS)

    Meixner, Gerrit; Seissler, Marc; Breiner, Kai

    User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.

  6. DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.

    2017-12-01

    DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from provider to users by enabling the creation of user-driven data processing/visualization applications. DataFed "wrapper" components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial, time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, and emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants, along with seasonal, weekly, and diurnal cycles and frequency distributions, for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.
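
    A toy sketch of the mediator and service-chaining idea (hypothetical functions, not DataFed's actual services): datasets mapped into a common (time, lat, lon) cube, with views produced by composing small processing steps:

        # Heterogeneous data mapped to one (time, lat, lon) cube, then a chain
        # of small "services" produces a homogeneous time view. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(2)
        cube = rng.random((24, 10, 20))          # hypothetical pollutant field

        def time_filter(data, t0, t1):           # service: temporal subset
            return data[t0:t1]

        def spatial_mean(data):                  # service: aggregate to time view
            return data.mean(axis=(1, 2))

        def render(series):                      # service: crude "visualization"
            for hour, value in enumerate(series):
                print(f"{hour:02d}h {'#' * int(value * 40)}")

        render(spatial_mean(time_filter(cube, 6, 18)))   # chained services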

  7. Animation of finite element models and results

    NASA Technical Reports Server (NTRS)

    Lipman, Robert R.

    1992-01-01

    This is not intended as a complete review of computer hardware and software that can be used for animation of finite element models and results, but is instead a demonstration of the benefits of visualization using selected hardware and software. The role of raw computational power, graphics speed, and the use of videotape are discussed.

  8. Integrated pathway-based transcription regulation network mining and visualization based on gene expression profiles.

    PubMed

    Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko

    2016-06-01

    Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypotheses generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription factor driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as web application software (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Parameterized hardware description as object oriented hardware model implementation

    NASA Astrophysics Data System (ADS)

    Drabik, Pawel K.

    2010-09-01

    The paper introduces a novel model for the design, visualization, and management of complex, highly adaptive hardware systems. The model establishes a component-oriented environment for both hardware modules and software applications, and was developed from research on parameterized hardware description. The establishment of a stable link between hardware and software, the purpose of the designed and realized work, is presented, along with a novel programming framework model for the environment, named Graphic-Functional-Components. The purpose of the paper is to present object-oriented hardware modeling with the mentioned features. A possible model implementation in FPGA chips, and its management by object-oriented software in Java, is described.

  10. Wired Widgets: Agile Visualization for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Gerschefske, K.; Witmer, J.

    2012-09-01

    Continued advancements in sensors and analysis techniques have resulted in a wealth of Space Situational Awareness (SSA) data, made available via tools and Service Oriented Architectures (SOA) such as those in the Joint Space Operations Center Mission Systems (JMS) environment. Current visualization software cannot quickly adapt to rapidly changing missions and data, preventing operators and analysts from performing their jobs effectively. The value of this wealth of SSA data is not fully realized, as the operators' existing software is not built with the flexibility to consume new or changing sources of data or to rapidly customize their visualization as the mission evolves. While tools like the JMS user-defined operational picture (UDOP) have begun to fill this gap, this paper presents a further evolution, leveraging Web 2.0 technologies for maximum agility. We demonstrate a flexible Web widget framework with inter-widget data sharing, publish-subscribe eventing, and an API providing the basis for consumption of new data sources and adaptable visualization. Wired Widgets offers cross-portal widgets along with a widget communication framework and development toolkit for rapid new widget development, giving operators the ability to answer relevant questions as the mission evolves. Wired Widgets has been applied in a number of dynamic mission domains including disaster response, combat operations, and noncombatant evacuation scenarios. The variety of applications demonstrates that Wired Widgets provides a flexible, data driven solution for visualization in changing environments. In this paper, we show how, deployed in the Ozone Widget Framework portal environment, Wired Widgets can provide an agile, web-based visualization to support the SSA mission. Furthermore, we discuss how the tenets of agile visualization can generally be applied to the SSA problem space to provide operators flexibility, potentially informing future acquisition and system development.
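
    A minimal sketch of the publish-subscribe pattern that underlies this kind of inter-widget data sharing (illustrative only; not the Wired Widgets or Ozone API):

        # A tiny in-process event bus: widgets publish to named channels and
        # subscribe callbacks to them, decoupling producers from consumers.
        from collections import defaultdict

        class EventBus:
            def __init__(self):
                self._subs = defaultdict(list)

            def subscribe(self, channel, callback):
                self._subs[channel].append(callback)

            def publish(self, channel, payload):
                for cb in self._subs[channel]:
                    cb(payload)

        bus = EventBus()
        # A "track list" widget publishes; a "map" widget subscribes.
        bus.subscribe("track.selected", lambda sat: print("map centers on", sat))
        bus.publish("track.selected", {"norad_id": 25544})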

  11. A spatio-temporal model of the human observer for use in display design

    NASA Astrophysics Data System (ADS)

    Bosman, Dick

    1989-08-01

    A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.

  12. Envision: An interactive system for the management and visualization of large geophysical data sets

    NASA Technical Reports Server (NTRS)

    Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.

    1995-01-01

    Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.

  13. Test Driven Development of Scientific Models

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.

    2014-01-01

    Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addictive after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
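
    A single red-green TDD cycle for numerical code might look like the following pytest sketch, where the test is written first and compares against a closed-form value with a tolerance rather than exact equality:

        # TDD cycle: the test below is written first; trapezoid() is then the
        # minimal implementation that makes it pass. Run with: pytest file.py
        import math

        def trapezoid(f, a, b, n):
            """Composite trapezoid rule (code written to satisfy the test)."""
            h = (b - a) / n
            return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n))
                        + 0.5 * f(b))

        def test_trapezoid_matches_closed_form():
            # The integral of sin on [0, pi] is exactly 2.
            assert math.isclose(trapezoid(math.sin, 0.0, math.pi, 1000),
                                2.0, rel_tol=1e-5)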

  14. Model-Driven Study of Visual Memory

    DTIC Science & Technology

    2004-12-01

    dimensional stimuli (synthetic human faces) afford important insights into episodic recognition memory. The results were well accommodated by a summed ... the unusual properties of the z-transformed ROCs. Subject terms: memory, visual memory, computational model, human memory, faces, identity. Accomplishments/new findings include work on Objective One: recognition memory for synthetic faces, Experiment 1

  15. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed stand-alone software called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
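
    As a sketch of one simple optimization strategy such tools support (the codon table below is a tiny illustrative subset, not a real usage table and not Visual Gene Developer's algorithm):

        # Recode a protein with a host's "preferred" codon per amino acid.
        # The table is hypothetical; real tools use full codon-usage tables.
        preferred = {"M": "ATG", "K": "AAA", "L": "CTG", "*": "TAA"}

        def codon_optimize(protein):
            return "".join(preferred[aa] for aa in protein)

        print(codon_optimize("MKL*"))    # -> ATGAAACTGTAA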

  16. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed stand-alone software called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  17. A Visualization-Based Tutoring Tool for Engineering Education

    NASA Astrophysics Data System (ADS)

    Nguyen, Tang-Hung; Khoo, I.-Hung

    2010-06-01

    In engineering disciplines, students usually have a hard time visualizing different aspects of engineering analysis and design, which inherently are too complex or abstract to fully understand without the aid of visual explanations or visualizations. For example, when learning materials and the sequencing of the construction process, students need to visualize how all components of a constructed facility are assembled. Such visualization cannot be achieved in a textbook or a traditional lecturing environment. In this paper, the authors present the development of computer tutoring software in which different visualization tools, including video clips, 3-dimensional models, drawings, and pictures/photos together with complementary texts, are used to assist students in deeply understanding and effectively mastering materials. The paper also discusses the implementation and the effectiveness evaluation of the proposed tutoring software, which was used to teach a construction engineering management course offered at California State University, Long Beach.

  18. Assessing the effect of adding interactive modeling to the geoscience curriculum

    NASA Astrophysics Data System (ADS)

    Castillo, A.; Marshall, J.; Cardenas, M.

    2013-12-01

    Technology and computer models enhance the learning experience when appropriately utilized. Moreover, learning is significantly improved when effective visualization is combined with models of processes allowing for inquiry-based problem solving. Still, hands-on experiences in real scenarios result in better contextualization of related problems compared to virtual laboratories. Therefore, the role of scientific visualization, technology, and computer modeling is to enhance, not displace, the learning experience by supplementing real-world problem solving and experiences, although in some circumstances they can adequately serve to take the place of reality. The key to improving scientific education is to embrace an inquiry-based approach that favorably uses technology. This study evaluates the effect of adding interactive modeling to the geological sciences curriculum. An assessment tool, designed to assess student understanding of physical hydrology, was used to evaluate a curriculum intervention based on student learning with a data- and modeling-driven approach using COMSOL Multiphysics software. This intervention was implemented in an upper-division and graduate physical hydrology course in fall 2012. Students enrolled in the course in fall 2011 served as the control group. Interactive modeling was added to the curriculum in fall 2012 to replace the analogous mathematical modeling done by hand in fall 2011. Pre- and post-test results were used to assess and report its effectiveness, and student interviews were used to probe student reactions to both the experimental and control curricula. The pre- and post-tests asked students to describe the significant processes in the hydrological cycle and the laws governing these processes; their ability to apply their knowledge to a real-world problem was also assessed. Since the pre- and post-test data failed to meet the assumption of normality, a non-parametric Kruskal-Wallis test was run to determine whether there were differences in pre- and post-test scores between the 2011 and 2012 groups. Results reveal significant differences in pre-test and post-test scores between the two groups. Interview data revealed that students experience both affordances and barriers in using geoscience learning tools: important affordances included COMSOL's modeling capabilities, the visualizations it offers, and the opportunity to use the software in the course, while barriers included lack of COMSOL experience, difficulty with COMSOL instructions, and lack of instruction with the software. Results from this study show that a well-designed pre- and post-assessment can be used to infer whether a given instructional intervention has caused a change in understanding in a given group of students, though the results are not necessarily generalizable; limitations include the number of participants, all from one institution. The assessment tool was nonetheless useful for assessing the effect of adding interactive modeling to the geoscience curriculum. Supported by NSF CAREER grant (EAR-0955750).
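
    The statistical step described above looks roughly like the following with SciPy, shown here on hypothetical score data:

        # Kruskal-Wallis test comparing the control (2011) and intervention
        # (2012) post-test score distributions, without assuming normality.
        from scipy.stats import kruskal

        posttest_2011 = [52, 48, 61, 55, 47, 58, 50]     # hypothetical scores
        posttest_2012 = [63, 70, 58, 66, 72, 61, 68]

        stat, p = kruskal(posttest_2011, posttest_2012)
        print(f"H={stat:.2f}, p={p:.4f}")   # small p suggests a group difference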

  19. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
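
    A simulation of the model described above (illustrative parameter values, not the authors' fits) shows how state noise accumulates across trials while performance noise does not:

        # Linear dynamical system for trial-to-trial adaptation: the state
        # decays toward baseline, is driven by error feedback, and accumulates
        # state noise; performance noise is added independently on each trial.
        import numpy as np

        rng = np.random.default_rng(3)
        n_trials = 500
        a, b = 0.98, 0.2             # retention (decay) and learning rate (~20%)
        sigma_state, sigma_perf = 0.3, 0.8

        x = np.zeros(n_trials)       # state of sensorimotor calibration
        shift = rng.normal(0, 1.0, n_trials)   # random visual feedback shifts
        errors = np.zeros(n_trials)

        for t in range(n_trials - 1):
            errors[t] = shift[t] - x[t] + rng.normal(0, sigma_perf)
            x[t + 1] = a * x[t] + b * errors[t] + rng.normal(0, sigma_state)

        # State noise accumulates in x, producing temporal correlations
        # in the error sequence; performance noise does not accumulate.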

  20. MBSE-Driven Visualization of Requirements Allocation and Traceability

    NASA Technical Reports Server (NTRS)

    Jackson, Maddalena; Wilkerson, Marcus

    2016-01-01

    In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstrations of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper discusses opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It discusses the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow and allocation, and of comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes systems engineers care about.

  1. End-to-end observatory software modeling using domain specific languages

    NASA Astrophysics Data System (ADS)

    Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José

    2014-07-01

    The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSL) that supports a model driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, that facilitate the construction of technical specifications in a uniform way, that facilitate communication between developers and domain experts and that provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.

  2. LIQUID: an open-source software for identifying lipids in LC-MS/MS-based lipidomics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyle, Jennifer E.; Crowell, Kevin L.; Casey, Cameron P.

    2017-01-31

    We introduce an open-source software, LIQUID, for semi-automated processing and visualization of LC-MS/MS based lipidomics data. LIQUID provides users with the capability to process high throughput data and contains a customizable target library and scoring model per project needs. The graphical user interface provides visualization of multiple lines of spectral evidence for each lipid identification, allowing rapid examination of data for making confident identifications of lipid molecular species.

  3. CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 9

    DTIC Science & Technology

    2006-09-01

    it does. Several freely downloadable methodologies have emerged to support the developer in modeling threats to applications and other soft ... SECURIS. Model-Driven Development and Analysis of Secure Information Systems <www.sintef.no/content/page1_1824.aspx>. 10. The SECURIS Project ... By applying these methods to the SDLC, we can actively reduce the number of known vulnerabilities in software as it is developed. For

  4. GiPSi:a framework for open source/open architecture software development for organ-level surgical simulation.

    PubMed

    Cavuşoğlu, M Cenk; Göktekin, Tolga G; Tendick, Frank

    2006-04-01

    This paper presents the architectural details of an evolving open source/open architecture software framework for developing organ-level surgical simulations. Our goal is to facilitate shared development of reusable models, to accommodate heterogeneous models of computation, and to provide a framework for interfacing multiple heterogeneous models. The framework provides an application programming interface for interfacing dynamic models defined over spatial domains. It is specifically designed to be independent of the specifics of the modeling methods used, and therefore facilitates seamless integration of heterogeneous models and processes. Furthermore, each model has separate geometries for visualization, simulation, and interfacing, allowing the model developer to choose the most natural geometric representation for each case. Input/output interfaces for visualization and haptics for real-time interactive applications have also been provided.
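
    A schematic sketch of the separation the framework describes, with hypothetical names rather than GiPSi's actual C++ API: each model carries distinct geometries for simulation, visualization, and interfacing behind one time-stepping interface:

        # Illustrative only: separate geometries per concern, one step() API,
        # so heterogeneous models can be composed without knowing each other's
        # modeling method. Names are hypothetical, not GiPSi's.
        from dataclasses import dataclass, field

        @dataclass
        class OrganModel:
            sim_geometry: list = field(default_factory=list)   # e.g., FEM mesh
            vis_geometry: list = field(default_factory=list)   # display mesh
            boundary: list = field(default_factory=list)       # coupling nodes

            def step(self, dt):
                """Advance the internal dynamics by dt (model-specific)."""
                raise NotImplementedError

        class MassSpringTissue(OrganModel):
            def step(self, dt):
                pass    # integrate mass-spring dynamics here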

  5. Development of a methodology for assessing the safety of embedded software systems

    NASA Technical Reports Server (NTRS)

    Garrett, C. J.; Guarro, S. B.; Apostolakis, G. E.

    1993-01-01

    A Dynamic Flowgraph Methodology (DFM) based on an integrated approach to modeling and analyzing the behavior of software-driven embedded systems for assessing and verifying reliability and safety is discussed. DFM is based on an extension of the Logic Flowgraph Methodology to incorporate state transition models. System models which express the logic of the system in terms of causal relationships between physical variables and temporal characteristics of software modules are analyzed to determine how a certain state can be reached. This is done by developing timed fault trees which take the form of logical combinations of static trees relating the system parameters at different points in time. The resulting information concerning the hardware and software states can be used to eliminate unsafe execution paths and identify testing criteria for safety critical software functions.

  6. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  7. Working with the HL7 metamodel in a Model Driven Engineering context.

    PubMed

    Martínez-García, A; García-García, J A; Escalona, M J; Parra-Calderón, C L

    2015-10-01

    HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed according to a proprietary graphic language whose domain models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context. A limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard model language, but most do not support the HL7 proprietary model language. We want to support software engineers without HL7 experience, allowing them to model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7, making use of a plug-in developed in the EA (Enterprise Architect) tool. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Linking Goal-Oriented Requirements and Model-Driven Development

    NASA Astrophysics Data System (ADS)

    Pastor, Oscar; Giachetti, Giovanni

    In the context of Goal-Oriented Requirement Engineering (GORE), there are interesting modeling approaches for the analysis of complex scenarios that are oriented to obtain and represent the relevant requirements for the development of software products. However, the way to use these GORE models in an automated Model-Driven Development (MDD) process is not clear, and, in general terms, the translation of these models into the final software products is still performed manually. Therefore, in this chapter, we show an approach to automatically link GORE models and MDD processes, which has been elaborated by considering the experience obtained from linking the i* framework with an industrially applied MDD approach. The proposed linking approach is formulated by means of a generic process that is based on current modeling standards and technologies in order to facilitate its application to different MDD and GORE approaches. Special attention is paid to how this process generates appropriate model transformation mechanisms to automatically obtain MDD conceptual models from GORE models, and how it can be used to specify validation mechanisms to assure correct model transformations.

  9. The SCEC/UseIT Intern Program: Creating Open-Source Visualization Software Using Diverse Resources

    NASA Astrophysics Data System (ADS)

    Francoeur, H.; Callaghan, S.; Perry, S.; Jordan, T.

    2004-12-01

    The Southern California Earthquake Center undergraduate IT intern program (SCEC UseIT) conducts IT research to benefit collaborative earth science research. Through this program, interns have developed real-time, interactive, 3D visualization software using open-source tools. Dubbed LA3D, a distribution of this software is now in use by the seismic community. LA3D enables the user to interactively view Southern California datasets and models of importance to earthquake scientists, such as faults, earthquakes, fault blocks, digital elevation models, and seismic hazard maps. LA3D is now being extended to support visualizations anywhere on the planet. The new software, called SCEC-VIDEO (Virtual Interactive Display of Earth Objects), makes use of a modular, plugin-based software architecture which supports easy development and integration of new data sets. Currently SCEC-VIDEO is in beta testing, with a full open-source release slated for the future. Both LA3D and SCEC-VIDEO were developed using a wide variety of software technologies. These, which included relational databases, web services, software management technologies, and 3-D graphics in Java, were necessary to integrate the heterogeneous array of data sources which comprise our software. Currently the interns are working to integrate new technologies and larger data sets to increase software functionality and value. In addition, both LA3D and SCEC-VIDEO allow the user to script and create movies. Thus program interns with computer science backgrounds have been writing software while interns with other interests, such as cinema, geology, and education, have been making movies that have proved of great use in scientific talks, media interviews, and education. Thus, SCEC UseIT incorporates a wide variety of scientific and human resources to create products of value to the scientific and outreach communities. The program plans to continue with its interdisciplinary approach, increasing the relevance of the software and expanding its use in the scientific community.

  10. Test Driven Development of a Parameterized Ice Sheet Component

    NASA Astrophysics Data System (ADS)

    Clune, T.

    2011-12-01

    Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability to scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need to have simple, non-redundant closed-form expressions to compare against the results obtained from the implementation as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.

  11. Visual Literacy and the Integration of Parametric Modeling in the Problem-Based Curriculum

    ERIC Educational Resources Information Center

    Assenmacher, Matthew Benedict

    2013-01-01

    This quasi-experimental study investigated the application of visual literacy skills in the form of parametric modeling software in relation to traditional forms of sketching. The study included two groups of high school technical design students. The control and experimental groups involved in the study consisted of two randomly selected groups…

  12. Multimodal visualization interface for data management, self-learning and data presentation.

    PubMed

    Van Sint Jan, S; Demondion, X; Clapworthy, G; Louryan, S; Rooze, M; Cotten, A; Viceconti, M

    2006-10-01

    A multimodal visualization software package, called the Data Manager (DM), has been developed to increase interdisciplinary communication around the visualization and modeling of various aspects of the human anatomy. Numerous tools used in radiology are integrated into the interface, which runs on standard personal computers. The available tools, combined with hierarchical data management and custom layouts, allow analysis of medical imaging data using advanced features outside radiological premises (for example, for patient review, conference presentation or tutorial preparation). The system is free and based on an open-source software development architecture, so updates of the system for custom applications are possible.

  13. Eric Wilson | NREL

    Science.gov Websites

    Eric Wilson's work at NREL includes developing an analysis framework and data visualization for national residential building stock models, as well as multifamily modeling capabilities for the BEopt building energy optimization software.

  14. Software architecture of INO340 telescope control system

    NASA Astrophysics Data System (ADS)

    Ravanmehr, Reza; Khosroshahi, Habib

    2016-08-01

    The software architecture plays an important role in the distributed control system of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model". For this purpose, we provide logical, process, development, physical, and scenario views of the architecture using different UML diagrams and other illustrative visual charts; each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.

  15. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  16. The practice of agent-based model visualization.

    PubMed

    Dorin, Alan; Geard, Nicholas

    2014-01-01

    We discuss approaches to agent-based model visualization. Agent-based modeling has its own requirements for visualization, some shared with other forms of simulation software, and some unique to this approach. In particular, agent-based models are typified by complexity, dynamism, nonequilibrium and transient behavior, heterogeneity, and a researcher's interest in both individual- and aggregate-level behavior. These are all traits requiring careful consideration in the design, experimentation, and communication of results. Except in the case of final communication for dissemination, researchers may never make their visualizations public, so knowledge of how to visualize during these earlier stages is unavailable to the research community in a readily accessible form. Here we explore means by which all phases of agent-based modeling can benefit from visualization, and we provide examples from the available literature and online sources to illustrate key stages and techniques.
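
    As a minimal sketch of how visualization can serve the exploratory phase (the toy random-walk model and rendering choices below are illustrative, not drawn from the article), both individual-level and aggregate-level views can be produced from the same run:

        import random
        import matplotlib.pyplot as plt

        # Toy agent-based model: 100 agents random-walking on the unit square.
        agents = [[random.random(), random.random()] for _ in range(100)]

        mean_x_history = []  # aggregate-level statistic tracked over time
        for step in range(50):
            for a in agents:
                a[0] = min(1.0, max(0.0, a[0] + random.uniform(-0.02, 0.02)))
                a[1] = min(1.0, max(0.0, a[1] + random.uniform(-0.02, 0.02)))
            mean_x_history.append(sum(a[0] for a in agents) / len(agents))

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
        # Individual-level view: final agent positions.
        ax1.scatter([a[0] for a in agents], [a[1] for a in agents], s=10)
        ax1.set_title("Agent positions (individual level)")
        # Aggregate-level view: time series of the mean x-coordinate.
        ax2.plot(mean_x_history)
        ax2.set_title("Mean x over time (aggregate level)")
        plt.show()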

  17. Just-in-Time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

    Climate model simulations are used to understand the evolution and variability of earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete, and the simulation results typically are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
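
    The abstract does not include the monitoring code itself; the following is a minimal sketch of the co-scheduling pattern it describes, with the output directory, file pattern, and job-submission script all hypothetical:

        import glob
        import subprocess
        import time

        SEEN = set()

        def poll_and_submit(output_dir="/runs/climate01", pattern="*.nc"):
            """Submit one visualization job per newly produced output file."""
            for path in sorted(glob.glob(f"{output_dir}/{pattern}")):
                if path not in SEEN:
                    SEEN.add(path)
                    # Hypothetical batch submission; a real system would call
                    # its own scheduler wrapper (e.g., sbatch on Slurm).
                    subprocess.run(["sbatch", "render_plots.sh", path], check=False)

        if __name__ == "__main__":
            while True:          # monitor for the lifetime of the simulation
                poll_and_submit()
                time.sleep(300)  # poll every five minutes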

  18. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    PubMed

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial when operators/users must make rapid decisions under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of pilot vision in a jet aircraft in a virtual environment, demonstrating how the vision analysis tools of digital human modeling software can be used effectively for such studies. Three dynamic digital pilot models, representative of the smallest, average and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools such as view cones, eye view windows, blind spot area, obscuration zone and reflection zone were employed during evaluation of the visual fields. The vision analysis tool was also used to study kinematic changes of the pilot's body joints during simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective for evaluating the position and alignment of different displays and controls in the workstation, based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  19. Visual Scan Adaptation During Repeated Visual Search

    DTIC Science & Technology

    2010-01-01


  20. RWater - A Novel Cyber-enabled Data-driven Educational Tool for Interpreting and Modeling Hydrologic Processes

    NASA Astrophysics Data System (ADS)

    Rajib, M. A.; Merwade, V.; Zhao, L.; Song, C.

    2014-12-01

    Explaining the complex cause-and-effect relationships in the hydrologic cycle can often be challenging in a classroom using traditional teaching approaches. With the availability of observed rainfall, streamflow and other hydrology data on the internet, it is possible to provide the tools students need to explore these relationships and enhance their learning experience. From this perspective, a new online educational tool, called RWater, has been developed using Purdue University's HUBzero technology. RWater's unique features include: (i) accessibility, including the R software, from any Java-supported web browser; (ii) no installation of any software on the user's computer; (iii) storage of all work and resulting data in the user's working directory on the RWater server; and (iv) no requirement of prior programming experience with R. In its current version, RWater can dynamically extract streamflow data from any USGS gaging station, without any need for post-processing, for use in the educational modules. By following data-driven modules, students can write small scripts in R and thereby create visualizations to identify the effect of rainfall distribution and watershed characteristics on runoff generation, investigate the impacts of land use and climate change on streamflow, and explore changes in extreme hydrologic events at actual locations. Each module contains relevant definitions, instructions on data extraction and coding, as well as conceptual questions based on the analyses the students perform. In order to assess its suitability for classroom implementation, and to evaluate users' perception of its utility, the current version of RWater has been tested with three different groups: (i) high school students; (ii) middle and high school teachers; and (iii) upper undergraduate/graduate students. The survey results from these trials suggest that RWater has the potential to improve students' understanding of various relationships in the hydrologic cycle, supporting effective dissemination of hydrology education from K-12 to the graduate level. RWater is publicly available at: https://mygeohub.org/tools/rwater.
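
    RWater itself is R-based and browser-hosted; purely to illustrate the kind of data access it automates, the Python sketch below pulls daily discharge for one gaging station from the public USGS NWIS web service (the station number is an arbitrary example, and the JSON layout is assumed from the service's documented format):

        import json
        import urllib.request

        # USGS NWIS daily-values service; parameter 00060 is discharge (cfs).
        url = ("https://waterservices.usgs.gov/nwis/dv/?format=json"
               "&sites=03335500&parameterCd=00060"
               "&startDT=2014-01-01&endDT=2014-12-31")

        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        series = data["value"]["timeSeries"][0]["values"][0]["value"]
        flows = [(v["dateTime"], float(v["value"])) for v in series]
        print(f"{len(flows)} daily values; first: {flows[0]}")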

  1. SAGES: A Suite of Freely-Available Software Tools for Electronic Disease Surveillance in Resource-Limited Settings

    PubMed Central

    Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.

    2011-01-01

    Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations. PMID:21572957

  2. P-MartCancer–Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.

    P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses, driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html), the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be used directly from an R package available on GitHub (https://github.com/pmartR).

  3. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. Few existing visualization methods handle raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images accessed through a web-based interface supporting zooming, panning and sequence fragment selection. Nucleotide composition frequencies and the GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualizations of human and bacterial chromosomes. Examples of visually detectable features, such as short and long direct repeats, long terminal repeats, mobile genetic elements, and heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins, and are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
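
    The composition and skew statistics exposed by the interface are simple to state; a minimal sketch of both computations over a selected sequence segment (pure Python, independent of the DDV software):

        from collections import Counter

        def composition(seq):
            """Nucleotide frequencies of a sequence segment."""
            counts = Counter(seq.upper())
            total = sum(counts[b] for b in "ACGT")
            return {b: counts[b] / total for b in "ACGT"}

        def gc_skew(seq):
            """GC skew = (G - C) / (G + C) of a sequence segment."""
            g, c = seq.upper().count("G"), seq.upper().count("C")
            return (g - c) / (g + c) if (g + c) else 0.0

        segment = "ATGGCGCATTTGCCGGAACGT"  # stand-in for a selected fragment
        print(composition(segment), gc_skew(segment))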

  4. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  5. An ontology based trust verification of software license agreement

    NASA Astrophysics Data System (ADS)

    Lu, Wenhuan; Li, Xiaoqing; Gan, Zengqin; Wei, Jianguo

    2017-08-01

    When users install or download software, they are typically presented with a large license document stating rights and obligations, which few have the patience to read or understand. This can make users distrust the software. In this paper, we propose an ontology-based verification for software license agreements. First, this work proposes an ontology model for the domain of software license agreements. The domain ontology is constructed by the proposed methodology according to copyright laws and 30 software license agreements. The License Ontology can act as part of a generalized copyright-law knowledge model, and can also serve as a visualization of software licenses. Based on this proposed ontology, a software-license-oriented text summarization approach is proposed, whose performance shows that it can improve the accuracy of summarizing software licenses. Based on the summarization, the underlying purpose of the software license can be explicitly explored for trust verification.

  6. A Model-Driven Approach to Teaching Concurrency

    ERIC Educational Resources Information Center

    Carro, Manuel; Herranz, Angel; Marino, Julio

    2013-01-01

    We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference with other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so…

  7. GeoFramework: A Modeling Framework for Solid Earth Geophysics

    NASA Astrophysics Data System (ADS)

    Gurnis, M.; Aivazis, M.; Tromp, J.; Tan, E.; Thoutireddy, P.; Liu, Q.; Choi, E.; Dicaprio, C.; Chen, M.; Simons, M.; Quenette, S.; Appelbe, B.; Aagaard, B.; Williams, C.; Lavier, L.; Moresi, L.; Law, H.

    2003-12-01

    As data sets in geophysics become larger and of greater relevance to other earth science disciplines, and as earth science becomes more interdisciplinary in general, modeling tools are being driven in new directions. There is now a greater need to link modeling codes to one another, to link modeling codes to multiple datasets, and to make modeling software available to non-modeling specialists. Coupled with rapid progress in computer hardware (including the computational speed afforded by massively parallel computers), progress in numerical algorithms, and the introduction of software frameworks, these lofty goals of merging software in geophysics are now attainable. The GeoFramework project, a collaboration between computer scientists and geoscientists, is a response to these needs and opportunities. GeoFramework is based on and extends Pyre, a Python-based modeling framework recently developed to link solid (Lagrangian) and fluid (Eulerian) models, as well as mesh generators, visualization packages, and databases, with one another for engineering applications. The utility and generality of Pyre as a general-purpose framework in science is now being recognized: besides its use in engineering and geophysics, it is also being used in particle physics and astronomy. Geology and geophysics impose their own unique requirements on software frameworks which are not generally available in existing frameworks, and so there is a need for research in this area. One of the special requirements is the way Lagrangian and Eulerian codes need to be linked in time and space within a plate tectonics context. GeoFramework has grown beyond its initial goal of linking a limited number of existing codes together. The following codes are now being reengineered within the context of Pyre: Tecton, a 3-D FE visco-elastic code for lithospheric relaxation; CitComS, a code for spherical mantle convection; SpecFEM3D, a SEM code for global and regional seismic waves; eqsim, a FE code for dynamic earthquake rupture; SNAC, a developing 3-D code based on the FLAC method for visco-elastoplastic deformation; SNARK, a 3-D FE-PIC method for viscoplastic deformation; and gPLATES, an open-source paleogeographic/plate tectonics modeling package. We will demonstrate how codes can be linked with themselves, such as a regional and global model of mantle convection, and a visco-elastoplastic representation of the crust within viscous mantle flow. Finally, we will describe how http://GeoFramework.org has become a distribution site for a suite of modeling software in geophysics.

  8. High Performance Visualization using Query-Driven Visualizationand Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E. Wes; Campbell, Scott; Dart, Eli

    2006-06-15

    Query-driven visualization and analytics is a unique approach to high-performance visualization that offers new capabilities for knowledge discovery and hypothesis testing. The new capabilities, akin to finding needles in haystacks, are the result of combining technologies from the fields of scientific visualization and scientific data management. This approach is crucial for rapid data analysis and visualization in the petascale regime. This article describes how query-driven visualization is applied to a hero-sized network traffic analysis problem.
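
    As a minimal sketch of the query-driven idea (the field names, thresholds, and synthetic data are invented for illustration), a compound range query first selects the few records of interest, and only that subset reaches the visualization stage:

        import numpy as np
        import matplotlib.pyplot as plt

        # Synthetic stand-in for a large table of network flow records.
        rng = np.random.default_rng(0)
        duration = rng.exponential(1.0, 1_000_000)     # seconds
        bytes_tx = rng.lognormal(8.0, 2.0, 1_000_000)  # bytes transferred

        # Query first: the renderer never touches the full dataset.
        mask = (duration > 5.0) & (bytes_tx > 1e6)
        print(f"selected {mask.sum()} of {mask.size} records")

        plt.scatter(duration[mask], bytes_tx[mask], s=4)
        plt.xlabel("flow duration (s)")
        plt.ylabel("bytes transferred")
        plt.show()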

  9. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open-source software. The system has been implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions is enabled through PyWPS, an implementation of the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model; these images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map, and a vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram, which can be viewed from any angle by mouse operation. WebGL is utilized for 3D visualization; WebGL is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
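
    Cross sections delivered over WMS can be retrieved with an ordinary HTTP GetMap request; in the minimal sketch below, the server URL and layer name are placeholders rather than this system's actual endpoints:

        import urllib.parse
        import urllib.request

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "geology_depth_100m",   # hypothetical layer name
            "SRS": "EPSG:4326",
            "BBOX": "139.5,35.5,140.0,36.0",  # minx,miny,maxx,maxy
            "WIDTH": "800", "HEIGHT": "800",
            "FORMAT": "image/png", "STYLES": "",
        }
        url = "https://example.org/cgi-bin/mapserv?" + urllib.parse.urlencode(params)

        # Save the horizontal cross section as a map image.
        with urllib.request.urlopen(url) as resp, open("section.png", "wb") as out:
            out.write(resp.read())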

  10. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

    Background: An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce pipeline formality on top of a dynamical development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results: Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user's Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions: Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315
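
    Leaf's pipeline language is not reproduced in the abstract; purely to illustrate the general idea of declaring step dependencies on top of ordinary Python functions, here is a hypothetical decorator-based sketch (not Leaf's actual API):

        # Hypothetical sketch: pipeline formality over plain Python functions.
        STEPS = {}

        def step(*deps):
            def register(fn):
                STEPS[fn.__name__] = (fn, deps)
                return fn
            return register

        def run(name, cache={}):
            """Run a step after its dependencies; cache results (persistence)."""
            if name not in cache:
                fn, deps = STEPS[name]
                cache[name] = fn(*(run(d, cache) for d in deps))
            return cache[name]

        @step()
        def load_reads():
            return ["ACGT", "GGCA"]  # stand-in for real input data

        @step("load_reads")
        def count_gc(reads):
            return sum(s.count("G") + s.count("C") for s in reads)

        print(run("count_gc"))  # prints 5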

  11. GAMOLA2, a Comprehensive Software Package for the Annotation and Curation of Draft and Complete Microbial Genomes

    PubMed Central

    Altermann, Eric; Lu, Jingli; McCulloch, Alan

    2017-01-01

    Expert-curated annotation remains one of the critical steps in achieving a reliable, biologically relevant annotation. Here we announce the release of GAMOLA2, a user-friendly and comprehensive software package to process, annotate and curate draft and complete bacterial, archaeal, and viral genomes. GAMOLA2 is a wrapping tool that combines gene model determination and functional Blast, COG, Pfam, and TIGRfam analyses with structural predictions, including detection of tRNAs, rRNA genes, non-coding RNAs, signal protein cleavage sites, transmembrane helices, CRISPR repeats and vector sequence contaminations. GAMOLA2 has already been validated on a wide range of bacterial and archaeal genomes, and its modular concept allows easy addition of further functionality in future releases. A modified and adapted version of the Artemis Genome Viewer (Sanger Institute) has been developed to leverage the additional features and underlying information provided by the GAMOLA2 analysis, and is part of the software distribution. In addition to genome annotations, GAMOLA2 features, among others, supplemental modules that assist in the creation of custom Blast databases, annotation transfers between genome versions, and the preparation of GenBank files for submission via the NCBI Sequin tool. GAMOLA2 is intended to be run under a Linux environment, whereas the subsequent visualization and manual curation in Artemis is mobile and platform independent. The development of GAMOLA2 is ongoing and community driven. New functionality can easily be added upon user requests, ensuring that GAMOLA2 provides information relevant to microbiologists. The software is available free of charge for academic use. PMID:28386247

  12. GAMOLA2, a Comprehensive Software Package for the Annotation and Curation of Draft and Complete Microbial Genomes.

    PubMed

    Altermann, Eric; Lu, Jingli; McCulloch, Alan

    2017-01-01

    Expert-curated annotation remains one of the critical steps in achieving a reliable, biologically relevant annotation. Here we announce the release of GAMOLA2, a user-friendly and comprehensive software package to process, annotate and curate draft and complete bacterial, archaeal, and viral genomes. GAMOLA2 is a wrapping tool that combines gene model determination and functional Blast, COG, Pfam, and TIGRfam analyses with structural predictions, including detection of tRNAs, rRNA genes, non-coding RNAs, signal protein cleavage sites, transmembrane helices, CRISPR repeats and vector sequence contaminations. GAMOLA2 has already been validated on a wide range of bacterial and archaeal genomes, and its modular concept allows easy addition of further functionality in future releases. A modified and adapted version of the Artemis Genome Viewer (Sanger Institute) has been developed to leverage the additional features and underlying information provided by the GAMOLA2 analysis, and is part of the software distribution. In addition to genome annotations, GAMOLA2 features, among others, supplemental modules that assist in the creation of custom Blast databases, annotation transfers between genome versions, and the preparation of GenBank files for submission via the NCBI Sequin tool. GAMOLA2 is intended to be run under a Linux environment, whereas the subsequent visualization and manual curation in Artemis is mobile and platform independent. The development of GAMOLA2 is ongoing and community driven. New functionality can easily be added upon user requests, ensuring that GAMOLA2 provides information relevant to microbiologists. The software is available free of charge for academic use.

  13. The RAVEN Toolbox and Its Use for Generating a Genome-scale Metabolic Model for Penicillium chrysogenum

    PubMed Central

    Agren, Rasmus; Liu, Liming; Shoaie, Saeed; Vongsangnak, Wanwipa; Nookaew, Intawat; Nielsen, Jens

    2013-01-01

    We present the RAVEN (Reconstruction, Analysis and Visualization of Metabolic Networks) Toolbox: a software suite that allows for semi-automated reconstruction of genome-scale models. It makes use of published models and/or the KEGG database, coupled with extensive gap-filling and quality control features. The software suite also contains methods for visualizing simulation results and omics data, as well as a range of methods for performing simulations and analyzing the results. The software is a useful tool for system-wide data analysis in a metabolic context and for streamlined reconstruction of metabolic networks based on protein homology. The RAVEN Toolbox workflow was applied in order to reconstruct a genome-scale metabolic model for the important microbial cell factory Penicillium chrysogenum Wisconsin54-1255. The model was validated in a bibliomic study of 440 references in total, and it comprises 1471 unique biochemical reactions and 1006 ORFs. It was then used to study the roles of ATP and NADPH in the biosynthesis of penicillin, and to identify potential metabolic engineering targets for maximization of penicillin production. PMID:23555215

  14. THE U.S. ENVIRONMENTAL PROTECTION AGENCY VISUAL PLUMES MODELING SOFTWARE

    EPA Science Inventory

    The U.S. Environmental Protection Agency's Center for Exposure Assessment Modeling (CEAM) at the Ecosystems Research Division in Athens, Georgia develops environmental exposure models, including plume models, and provides technical assistance to model users. The mixing zone and f...

  15. Three-dimensional representations of salt-dome margins at four active strategic petroleum reserve sites.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rautman, Christopher Arthur; Stein, Joshua S.

    2003-01-01

    Existing paper-based site characterization models of salt domes at the four active U.S. Strategic Petroleum Reserve sites have been converted to digital format and visualized using modern computer software. The four sites are the Bayou Choctaw dome in Iberville Parish, Louisiana; the Big Hill dome in Jefferson County, Texas; the Bryan Mound dome in Brazoria County, Texas; and the West Hackberry dome in Cameron Parish, Louisiana. A new modeling algorithm has been developed to overcome the limitations of many standard geological modeling software packages in dealing with structurally overhanging salt margins, which are typical of many salt domes. This algorithm, and the implementing computer program, make use of the existing interpretive modeling conducted manually using professional geological judgement and presented in two dimensions in the original site characterization reports as structure contour maps on the top of salt. The algorithm makes use of concepts of finite-element meshes of general engineering usage. Although the specific implementation of the algorithm described in this report and the resulting output files are tailored to the modeling and visualization software used to construct the figures contained herein, the algorithm itself is generic and other implementations and output formats are possible. The graphical visualizations of the salt domes at the four Strategic Petroleum Reserve sites are believed to be major improvements over the previously available two-dimensional representations of the domes via conventional geologic drawings (cross sections and contour maps). Additionally, the numerical mesh files produced by this modeling activity are available for import into and display by other software routines. The mesh data are not explicitly tabulated in this report; however, an electronic version in simple ASCII format is included on a PC-based compact disk.

  16. Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

    ERIC Educational Resources Information Center

    Chan, Louis K. H.; Hayward, William G.

    2009-01-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed…

  17. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by broad communities, far in excess of the raw instrument outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support reliable management of the information across distributed resources. Users necessarily rely on these underlying “black boxes” in order to be productive in generating new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems, ranging from the fundamental reliability of the compute hardware, through system software stacks and libraries, to the model software itself. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store historical versions of all model and derived data. Instead, the emphasis is on the ability to access updated products, and on the reliability with which previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway to address these issues.

  18. A versatile system for processing geostationary satellite data with run-time visualization capability

    NASA Technical Reports Server (NTRS)

    Landsfeld, M.; Gautier, C.; Figel, T.

    1995-01-01

    To better predict global climate change, scientists are developing climate models whose construction requires interdisciplinary and collaborative efforts. We are currently involved in several such projects but will briefly discuss activities in support of two complementary ones: the Atmospheric Radiation Measurement (ARM) program of the Department of Energy and Sequoia 2000, a joint venture of the University of California, the private sector, and government agencies. Our contribution to the ARM program is to investigate the role of clouds on the top-of-atmosphere and surface radiance fields through data analysis of surface and satellite observations and complex modeling of the interaction of radiation with clouds. One of our first ARM research activities involves the computation of the broadband shortwave surface irradiance from satellite observations. Geostationary satellite images centered over the first ARM observation site are received hourly over the Internet and processed in real time to compute hourly and daily composite shortwave irradiance fields. The images and the results are transferred via a high-speed network to the Sequoia 2000 storage facility in Berkeley, where they are archived. These satellite-derived results are compared with the surface observations to evaluate the accuracy of the satellite estimate and the spatial representation of the surface observations. In developing the software for calculating the surface shortwave irradiance, we have produced an environment in which we can easily modify and monitor the data processing as required. Through the principles of modular programming, we have developed software that is easily modified as new computational algorithms are developed or input data availability changes. In addition, the software was designed so that it could be run from an interactive, icon-driven, graphical interface, TCL-TK, developed by Sequoia 2000 participants. In this way, the data flow can be interactively assessed and altered as needed, and the intermediate data processing 'images' can be viewed, enabling the investigator to easily monitor the various data processing steps as they progress. Additionally, this environment allows the rapid testing of new processing modules and allows their effects to be visually compared with previous results.

  19. Dependency visualization for complex system understanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smart, J. Allison Cory

    1994-09-01

    With the volume of software in production use dramatically increasing, the importance of software maintenance has become strikingly apparent. Techniques are now being sought and developed for reverse engineering and design extraction and recovery. At present, numerous commercial products and research tools exist which are capable of visualizing a variety of programming languages and software constructs, and the list of new tools and services continues to grow rapidly. Although the scope of the existing commercial and academic product set is quite broad, these tools still share a common underlying problem: the ability of each tool to visually organize object representations is increasingly impaired as the number of components and component dependencies within systems increases. Regardless of how objects are defined, complex "spaghetti" networks result in nearly all large system cases. While this problem is immediately apparent in modern systems analysis involving large software implementations, it is not new. As will be discussed in Chapter 2, related problems involving the theory of graphs were identified long ago. This important theoretical foundation provides a useful vehicle for representing and analyzing complex system structures. While the utility of directed-graph-based concepts in software tool design has been demonstrated in the literature, these tools still lack the capabilities necessary for large system comprehension. This foundation must therefore be expanded with new organizational and visualization constructs necessary to meet this challenge. This dissertation addresses this need by constructing a conceptual model and a set of methods for interactively exploring, organizing, and understanding the structure of complex software systems.
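
    The directed-graph foundation referenced here can be sketched in a few lines; the module names below are invented, and cycle detection by depth-first search is one standard analysis such dependency tools build on:

        # Toy dependency digraph: module -> modules it depends on (invented).
        DEPS = {
            "ui": ["core", "render"],
            "render": ["core"],
            "core": ["util"],
            "util": ["core"],  # deliberate cycle: core -> util -> core
        }

        def find_cycle(graph):
            """Depth-first search returning one dependency cycle, if any."""
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {n: WHITE for n in graph}
            stack = []

            def dfs(n):
                color[n] = GRAY
                stack.append(n)
                for m in graph.get(n, []):
                    if color.get(m, WHITE) == GRAY:      # back edge found
                        return stack[stack.index(m):] + [m]
                    if color.get(m, WHITE) == WHITE:
                        cycle = dfs(m)
                        if cycle:
                            return cycle
                stack.pop()
                color[n] = BLACK
                return None

            for n in graph:
                if color[n] == WHITE:
                    cycle = dfs(n)
                    if cycle:
                        return cycle
            return None

        print(find_cycle(DEPS))  # e.g. ['core', 'util', 'core']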

  20. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills, from recognizing rocks, to measuring orientations of features in the field, to locating oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as transparent features to explore how a layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided either as movies of the visualization (which can also be used as examples during lectures), or as data and software that can be downloaded to allow more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field experience in regions without good geologic outcrops, or for students with disabilities that prevent them from going into the field. These exercises and modules are available from 3dgeostructuresvis.ucdavis.edu. We plan to add several new structures to the site each year. This project was funded by a National Science Foundation CAREER grant to Billen.

  1. A Quantitative Model for Assessing Visual Simulation Software Architecture

    DTIC Science & Technology

    2011-09-01


  2. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    NASA Technical Reports Server (NTRS)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  3. SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.

    2016-12-01

    Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California, including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an XML file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later. More information, including downloads, tutorials, and example movies created fully within SCEC-VDO, is available here: http://scecvdo.usc.edu
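
    SCEC-VDO's rendering layer is Java over VTK; as a minimal illustration of the VTK pipeline pattern it relies on (source, mapper, actor, renderer, interactor), here is the equivalent in VTK's Python bindings, with a placeholder sphere standing in for real georeferenced geometry:

        import vtk

        # Standard VTK pipeline: source -> mapper -> actor -> renderer.
        source = vtk.vtkSphereSource()  # stand-in for fault/earthquake geometry
        source.SetRadius(1.0)

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(source.GetOutputPort())

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)

        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)

        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        interactor.Initialize()
        window.Render()
        interactor.Start()  # interactive 3D view from any angle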

  4. Development of visual programming techniques to integrate theoretical modeling into the scientific planning and instrument operations environment of ISTP

    NASA Technical Reports Server (NTRS)

    Goodrich, Charles C.

    1993-01-01

    The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and through it the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve month grant period are discussed below.

  5. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed and analyzed. Like other models, maps are simplified representations of the real world, so visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology, and many off-the-shelf technologies are currently available to build 3D GIS models. One objective of this research was to examine ArcGIS and its extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web as a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    An OpenStudio Measure is a script that can manipulate an OpenStudio model and associated data to apply energy conservation measures (ECMs), run supplemental simulations, or visualize simulation results. The OpenStudio software development kit (SDK) and accessibility of the Ruby scripting language makes measure authorship accessible to both software developers and energy modelers. This paper discusses the life cycle of an OpenStudio Measure from development, testing, and distribution, to application.

  7. GRAPHICS MANAGER (GFXMGR): An interactive graphics software program for the Advanced Electronics Design (AED) graphics controller, Model 767

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faculjak, D.A.

    1988-03-01

    Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.

  8. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  9. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space was actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.

  10. Modeling and analysis of visual digital impact model for a Chinese human thorax.

    PubMed

    Zhu, Jin; Wang, Kai-Ming; Li, Shu; Liu, Hai-Yan; Jing, Xiao; Li, Xiao-Fang; Liu, Yi-He

    2017-01-01

    To establish a three-dimensional finite element model of the human chest for engineering research on individual protection, computed tomography (CT) scanning data were used for three-dimensional reconstruction with the medical image reconstruction software Mimics. The finite element method (FEM) preprocessing software ANSYS ICEM CFD was used for mesh generation, and the relevant material behavior parameters of all of the model's parts were specified. The finite element model was constructed with FEM software, and the model's validity was verified against previous cadaver experimental data. A finite element model approximating the anatomical structure of the human chest was established, and the model's simulation results conformed overall to the results of the cadaver experiment. Segment data of the human body and specialized software can thus be utilized for FEM model reconstruction to satisfy the need for numerical analysis of shocks to the human chest in engineering research on body mechanics.

  11. Applied Computational Chemistry for the Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Wedler, Henry B.; Cohen, Sarah R.; Davis, Rebecca L.; Harrison, Jason G.; Siebert, Matthew R.; Willenbring, Dan; Hamann, Christian S.; Shaw, Jared T.; Tantillo, Dean J.

    2012-01-01

    We describe accommodations that we have made to our applied computational-theoretical chemistry laboratory to provide access for blind and visually impaired students interested in independent investigation of structure-function relationships. Our approach utilizes tactile drawings, molecular model kits, existing software, Bash and Perl scripts…

  12. Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements

    PubMed Central

    McFarland, James M.; Cumming, Bruce G.

    2016-01-01

    The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so. PMID:27277801

  13. Balancing Plan-Driven and Agile Methods in Software Engineering Project Courses

    NASA Astrophysics Data System (ADS)

    Boehm, Barry; Port, Dan; Winsor Brown, A.

    2002-09-01

    For the past 6 years, we have been teaching a two-semester software engineering project course. The students organize into 5-person teams and develop largely web-based electronic services projects for real USC campus clients. We have been using and evolving a method called Model-Based (System) Architecting and Software Engineering (MBASE) for use in both the course and in industrial applications. The MBASE Guidelines specify a large number of documents, so we teach risk-driven documentation: if it is risky to document something, and not risky to leave it out (e.g., GUI screen placements), leave it out. Even so, students tend to associate more documentation with higher grades, although our grading eventually discourages this. We are always on the lookout for ways to have students learn best practices without having to produce excessive documentation. Thus, we were very interested in analyzing the various emerging agile methods. We found that agile methods and milestone plan-driven methods are part of a “how much planning is enough?” spectrum. Both agile and plan-driven methods have home grounds of project characteristics where they clearly work best, and where the other will have difficulties. Hybrid agile/plan-driven approaches are feasible, and necessary for projects having a mix of agile and plan-driven home ground characteristics. Information technology trends are moving toward the agile methods' home-ground characteristics of emergent requirements and rapid change, although there is a concurrent increase in concern with dependability. As a result, we are currently experimenting with risk-driven combinations of MBASE and agile methods, such as integrating requirements, test plans, peer reviews, and pair programming into “agile quality management.”

  14. Velocity Mapping Toolbox (VMT): a processing and visualization suite for moving-vessel ADCP measurements

    USGS Publications Warehouse

    Parsons, D.R.; Jackson, P.R.; Czuba, J.A.; Engel, F.L.; Rhoads, B.L.; Oberg, K.A.; Best, J.L.; Mueller, D.S.; Johnson, K.K.; Riley, J.D.

    2013-01-01

    The use of acoustic Doppler current profilers (ADCP) for discharge measurements and three-dimensional flow mapping has increased rapidly in recent years and has been primarily driven by advances in acoustic technology and signal processing. Recent research has developed a variety of methods for processing data obtained from a range of ADCP deployments and this paper builds on this progress by describing new software for processing and visualizing ADCP data collected along transects in rivers or other bodies of water. The new utility, the Velocity Mapping Toolbox (VMT), allows rapid processing (vector rotation, projection, averaging and smoothing), visualization (planform and cross-section vector and contouring), and analysis of a range of ADCP-derived datasets. The paper documents the data processing routines in the toolbox and presents a set of diverse examples that demonstrate its capabilities. The toolbox is applicable to the analysis of ADCP data collected in a wide range of aquatic environments and is made available as open-source code along with this publication.
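
    The paper describes VMT's processing steps rather than code; purely to illustrate the vector rotation and projection step, the following Python sketch (coordinate conventions and variable names are assumptions, not VMT's) projects east/north velocity components onto a mean cross-section line:

        import numpy as np

        def rotate_to_transect(east_vel, north_vel, transect_azimuth_deg):
            # Azimuth of the mean cross-section line, clockwise from north
            # (an illustrative convention).
            theta = np.deg2rad(transect_azimuth_deg)
            along = np.array([np.sin(theta), np.cos(theta)])    # (east, north)
            normal = np.array([np.cos(theta), -np.sin(theta)])
            uv = np.stack([east_vel, north_vel], axis=-1)
            primary = uv @ normal      # through-section (streamwise) component
            secondary = uv @ along     # in-plane (secondary) component
            return primary, secondary

        # A 1 m/s easterly flow crossing a north-south transect line:
        p, s = rotate_to_transect(np.array([1.0]), np.array([0.0]), 0.0)
        print(p, s)   # all of the flow appears as the primary component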

  15. A Software Hub for High Assurance Model-Driven Development and Analysis

    DTIC Science & Technology

    2007-01-23

    [The indexed excerpt for this report consists only of bibliography fragments, citing work on verification of UML models in TLPVS (UML 2004; Lecture Notes in Computer Science, vol. 3785, Springer, 2005) and a paper by Graw and Herrmann; no abstract is recoverable.]

  16. AstroBlend: Visualization package for use with Blender

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.

    2015-12-01

    AstroBlend is a visualization package for use within the three-dimensional animation and modeling software Blender. It reads data in via a text file or can use pre-fabricated isosurface files stored in Wavefront OBJ format. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.

  17. SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, S; Dolly, S; Cai, B

    Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# on the Microsoft .NET Framework, using Windows Presentation Foundation (WPF) for the GUI design and smooth transition animations through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard Model-View-ViewModel (MVVM) design pattern was chosen as the major architecture of ACE for its clean code structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient-data import module integrated with the clinical patient database server, 2) a module for simultaneous display of 2D DICOM images and RT structures, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module that uses supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms for all image- and data-processing tasks; implementations are powered by the Accord.NET scientific computing library for efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification, and has great potential for automated radiotherapy contouring quality verification. Structured with the MVVM pattern, it is highly maintainable and extensible, and supports smooth connections with other clinical software tools.
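
    ACE itself is written in C# against Accord.NET; purely as a language-neutral sketch of the supervised contour-error-detection idea (the features, training labels, and classifier choice below are hypothetical, not ACE's), one could write:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical per-structure features: volume (cc), centroid z (cm),
        # mean HU inside the contour, surface-to-volume ratio.
        X_train = np.array([
            [450.0, 12.0, -50.0, 0.8],   # plausible contour
            [430.0, 11.5, -45.0, 0.7],   # plausible contour
            [30.0, 12.0, -50.0, 2.5],    # truncated contour
            [500.0, -20.0, 40.0, 0.9],   # mislabeled structure
        ])
        y_train = np.array([0, 0, 1, 1])  # 0 = acceptable, 1 = flag for review

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)

        new_contour = np.array([[40.0, 12.2, -48.0, 2.2]])
        print("flag" if clf.predict(new_contour)[0] else "ok")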

  18. Collaborated Architecture Framework for Composition of UML 2.0 in the Zachman Framework

    NASA Astrophysics Data System (ADS)

    Hermawan; Hastarista, Fika

    2016-01-01

    The Zachman Framework (ZF) is the enterprise architecture framework most widely adopted in Enterprise Information System (EIS) development. In this study, a Collaborated Architecture Framework (CAF) was developed to combine ZF with Unified Modeling Language (UML) 2.0 modeling. The CAF provides a composition of the ZF matrix in which each cell consists of a Model-Driven Architecture (MDA) built from various UML models and Software Requirement Specification (SRS) documents. This modeling approach was applied to the development of an Enterprise Resource Planning (ERP) system. Because the ERP covers a large number of applications with complex relations, an Agile Model-Driven Design (AMDD) approach is needed to transform the MDA into application-module components efficiently and accurately. Use of the CAF satisfied the needs of all stakeholders involved across the stages of the Rational Unified Process (RUP), and yielded high satisfaction with the functionality of the ERP software at PT. Iglas (Persero) Gresik.

  19. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance computing (HPC) techniques, but also a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that couples HPC applications with a web-based visualization tool through a middleware framework that supports data communication between the different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in live mode. Test results validate the effectiveness and usability of the integrated software package.

  20. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings, often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or, more broadly, Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models, to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
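
    The tool's algorithms are not listed in the abstract; the core residual-based idea behind physics-model-assisted FDD can nonetheless be sketched in a few lines of Python (the design load, efficiency-curve coefficients, and 15% threshold below are hypothetical placeholders):

        import numpy as np

        DESIGN_LOAD_KW = 1000.0            # assumed chiller design load

        def expected_power(load_kw):
            # Physics-based model: COP as a polynomial in part-load ratio
            # (coefficients stand in for design or manufacturer data).
            plr = np.clip(load_kw / DESIGN_LOAD_KW, 0.1, 1.0)
            cop = np.polyval([-2.0, 3.0, 4.0], plr)
            return load_kw / cop

        measured_kw = np.array([210.0, 205.0, 260.0])  # metered electric power
        load_kw = np.array([1000.0, 980.0, 1000.0])    # measured thermal load
        residual = measured_kw - expected_power(load_kw)
        fault = np.abs(residual) > 0.15 * expected_power(load_kw)
        print(fault)   # flags readings deviating >15% from the physics model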

  1. Visualization Case Study: Eyjafjallajökull Ash (Invited)

    NASA Astrophysics Data System (ADS)

    Simmon, R.

    2010-12-01

    Although data visualization is a powerful tool in Earth science, the resulting imagery is often complex and difficult to interpret for non-experts. Students, journalists, web site visitors, or museum attendees often have difficulty understanding some of the imagery scientists create, particularly false-color imagery and data-driven maps. Many visualizations are designed for data exploration or peer communication, and often follow discipline conventions or are constrained by software defaults. Different techniques are necessary for communication with a broad audience. Data visualization combines ideas from cognitive science, graphic design, and cartography, and applies them to the challenge of presenting data clearly. Visualizers at NASA's Earth Observatory web site (earthobservatory.nasa.gov) use these techniques to craft remote sensing imagery for interested but non-expert readers. Images range from natural-color satellite images and multivariate maps to illustrations of abstract concepts. I will use imagery of the eruption of Iceland's Eyjafjallajökull volcano as a case study, showing specific applications of general design techniques. By using color carefully (including contextual data), precisely aligning disparate data sets, and highlighting important features, we crafted an image that clearly conveys the complex vertical and horizontal distribution of airborne ash.

  2. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  3. Software attribute visualization for high integrity software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
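
    The SAGE constraint language itself is not shown in the report; the general pattern of monitoring an executing program against prespecified requirements constraints can be sketched in Python as follows (the constraints and the monitored function are invented for illustration):

        import functools

        # Hypothetical requirements constraints: name -> predicate over state.
        CONSTRAINTS = {
            "tank_level_non_negative":
                lambda st: st["level"] >= 0.0,
            "valve_closed_when_empty":
                lambda st: st["level"] > 0.0 or not st["valve_open"],
        }

        def monitored(func):
            # Check every constraint against the state after each call.
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                state = func(*args, **kwargs)
                for name, pred in CONSTRAINTS.items():
                    if not pred(state):
                        print(f"VIOLATION: {name} at state {state}")
                return state
            return wrapper

        @monitored
        def drain(state, amount):
            return {"level": state["level"] - amount, "valve_open": True}

        drain({"level": 1.0, "valve_open": True}, 2.0)  # triggers violations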

  4. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as the baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency, and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  5. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  6. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented experienced-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  7. Behavior Models for Software Architecture

    DTIC Science & Technology

    2014-11-01

    [The indexed excerpt is fragmentary. Recoverable content: existing process modeling frameworks (BPEL, BPMN [Grosskopf et al. 2009], IDEF) usually follow the “single flowchart” paradigm; MP separates… The remainder of the excerpt consists of bibliography fragments (Meghan Kiffer Press; Harel, D., 1987, A Visual Formalism for Complex Systems, Science of Computer Programming).]

  8. Software LS-MIDA for efficient mass isotopomer distribution analysis in metabolic modelling.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Huber, Claudia; Hensel, Michael; Schomburg, Dietmar; Münch, Richard; Eisenreich, Wolfgang; Dandekar, Thomas

    2013-07-09

    The knowledge of metabolic pathways and fluxes is important to understand the adaptation of organisms to their biotic and abiotic environment. The specific distribution of stable-isotope-labelled precursors into metabolic products can be taken as a fingerprint of the metabolic events and dynamics through the metabolic networks. Open-source software is required that easily and rapidly calculates global isotope excess and isotopomer distribution from mass spectra of labelled metabolites, derivatives, and their fragments. The open-source software "Least Square Mass Isotopomer Analyzer" (LS-MIDA) is presented, which processes experimental mass spectrometry (MS) data on the basis of metabolite information such as the number of atoms in the compound, the mass-to-charge ratio (m/e or m/z) values of the compounds and fragments under study, and the experimental relative MS intensities reflecting the enrichments of isotopomers in 13C- or 15N-labelled compounds, in comparison to the natural abundances in the unlabelled molecules. The software uses Brauman's least-squares method of linear regression. As a result, global isotope enrichments of the metabolite or fragment under study and the molar abundances of each isotopomer are obtained and displayed. The new software provides an open-source platform that easily and rapidly converts experimental MS patterns of labelled metabolites into isotopomer enrichments, which are the basis for subsequent observation-driven analysis of pathways and fluxes, as well as for model-driven metabolic flux calculations.
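
    The heart of the method is an ordinary linear least-squares fit: the measured peak intensities are modeled as a mixture of theoretical mass-isotopomer patterns. A minimal numpy sketch (the pattern matrix below is a hypothetical placeholder, not real natural-abundance data):

        import numpy as np

        # Columns: expected M+0..M+2 intensity patterns for a species carrying
        # 0, 1, or 2 labels, already convolved with natural abundance.
        A = np.array([
            [0.95, 0.02, 0.00],
            [0.05, 0.93, 0.04],
            [0.00, 0.05, 0.96],
        ])
        b = np.array([0.50, 0.30, 0.20])   # measured relative MS intensities

        # Least-squares estimate of the isotopomer fractions.
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        x = np.clip(x, 0, None)
        x /= x.sum()                              # normalize to molar fractions
        enrichment = x @ np.array([0, 1, 2]) / 2  # global label excess
        print(x, enrichment)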

  9. Towards Test Driven Development for Computational Science with pFUnit

    NASA Technical Reports Server (NTRS)

    Rilee, Michael L.; Clune, Thomas L.

    2014-01-01

    Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change, at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation in finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
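
    pFUnit targets Fortran; the fine-grained testing style the authors advocate, checking one small algorithmic step against a known oracle with a tolerance derived from the method's error bound rather than an arbitrary epsilon, looks like this in Python (an illustration of the principle, not a pFUnit example):

        import math

        def trapezoid(f, a, b, n):
            # Composite trapezoid rule: a small, independently testable step.
            h = (b - a) / n
            return h * (0.5 * f(a)
                        + sum(f(a + i * h) for i in range(1, n))
                        + 0.5 * f(b))

        def test_trapezoid_against_known_integral():
            # Oracle: the integral of sin on [0, pi] is exactly 2.
            approx = trapezoid(math.sin, 0.0, math.pi, 1000)
            # Tolerance from the O(h^2) error bound, with 10% headroom.
            h = math.pi / 1000
            assert abs(approx - 2.0) < 1.1 * math.pi * h ** 2 / 12

        test_trapezoid_against_known_integral()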

  10. Generating DEM from LIDAR data - comparison of available software tools

    NASA Astrophysics Data System (ADS)

    Korzeniowska, K.; Lacka, M.

    2011-12-01

    In recent years many software tools and applications have appeared that offer procedures, scripts, and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods motivated the aim of this study: to assess the algorithms available in various software tools for classifying LIDAR "point cloud" data, through a careful examination of the Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study; the area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset generated using manual methods to eliminate non-ground points. Surfaces were analysed using raster analysis. Minimum, maximum, and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
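
    The comparison statistics described reduce to a few lines of array arithmetic; a Python sketch with synthetic grids standing in for the real reference and test DEMs:

        import numpy as np

        def dem_difference_stats(reference, test):
            # Min/max/mean difference and RMSE between co-registered DEM grids.
            diff = test - reference
            d = diff[np.isfinite(diff)]        # ignore nodata cells
            rmse = np.sqrt(np.mean(d ** 2))
            return d.min(), d.max(), d.mean(), rmse

        rng = np.random.default_rng(1)
        reference_dem = rng.normal(1200.0, 150.0, size=(500, 500))
        test_dem = reference_dem + rng.normal(0.1, 0.3, size=(500, 500))
        print(dem_difference_stats(reference_dem, test_dem))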

  11. Visualizing Interaction Patterns in Online Discussions and Indices of Cognitive Presence

    ERIC Educational Resources Information Center

    Gibbs, William J.

    2006-01-01

    This paper discusses Mapping Temporal Relations of Discussions Software (MTRDS), a Web-based application that visually represents the temporal relations of online discussions. MTRDS was used to observe interaction characteristics of three online discussions. In addition, the research employed the Practical Inquiry Model to identify indices of…

  12. A Bio-Energetic Model for North Atlantic Right Whales: Locomotion, Anatomy and Diving Behavior

    DTIC Science & Technology

    2008-01-01

    [Fragmentary indexed excerpt; recoverable content follows.] The particle tracker was visualized with Avizo (formerly Amira) software (Mercury Computing, Inc.) (Figure 3). The particle tracker feature greatly increased our… [Figure-caption residue: "Figure 3. Example of a network script from Avizo that controls the manipulation and visualization…"]

  13. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

    Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  14. ATGC transcriptomics: a web-based application to integrate, explore and analyze de novo transcriptomic data.

    PubMed

    Gonzalez, Sergio; Clavijo, Bernardo; Rivarola, Máximo; Moreno, Patricio; Fernandez, Paula; Dopazo, Joaquín; Paniego, Norma

    2017-02-22

    In recent years, applications based on massively parallelized RNA sequencing (RNA-seq) have become valuable approaches for studying non-model species, e.g., species without a fully sequenced genome. RNA-seq is a useful tool for detecting novel transcripts and genetic variations and for evaluating differential gene expression by digital measurements. The large and complex datasets resulting from functional genomic experiments represent a challenge in data processing, management, and analysis. This problem is especially significant for small research groups working with non-model species. We developed a web-based application, called ATGC transcriptomics, with a flexible and adaptable interface that allows users to work with new-generation sequencing (NGS) transcriptomic analysis results using an ontology-driven database. This new application simplifies data exploration, visualization, and integration for a better comprehension of the results. ATGC transcriptomics provides non-expert computer users and small research groups with a scalable storage option and simple data integration, including database administration and management. The software is freely available under the terms of the GNU public license at http://atgcinta.sourceforge.net.

  15. An OpenEarth Framework (OEF) for Integrating and Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Moreland, J. L.; Nadeau, D. R.; Baru, C.; Crosby, C. J.

    2009-12-01

    The integration of data is essential to make transformative progress in understanding the complex processes operating at the Earth’s surface and within its interior. While our current ability to collect massive amounts of data, develop structural models, and generate high-resolution dynamics models is well developed, our ability to quantitatively integrate these data and models into holistic interpretations of Earth systems is poorly developed. We lack the basic tools to realize a first-order goal in Earth science of developing integrated 4D models of Earth structure and processes using a complete range of available constraints, at a time when the research agenda of major efforts such as EarthScope demands such a capability. Among the challenges to 3D data integration are data that may be in different coordinate spaces, units, value ranges, file formats, and data structures. While several file format standards exist, they are infrequently or incorrectly used. Metadata is often missing, misleading, or relegated to README text files alongside the data. This leaves much of the work to integrate data bogged down by simple data management tasks. The OpenEarth Framework (OEF) being developed by GEON addresses these data management difficulties. The software incorporates file format parsers, data interpretation heuristics, user interfaces to prompt for missing information, and visualization techniques to merge data into a common visual model. The OEF’s data access libraries parse formal and de facto standard file formats and map their data into a common data model. The software handles file format quirks, storage details, caching, local and remote file access, and web service protocol handling. Heuristics are used to determine coordinate spaces, units, and other key data features. Where multiple data structure, naming, and file organization conventions exist, those heuristics check for each convention’s use to find a high-confidence interpretation of the data. When no convention or embedded data yields a suitable answer, the user is prompted to fill in the blanks. The OEF’s interaction libraries assist in the construction of user interfaces for data management. These libraries support data import, data prompting, data introspection, the management of the contents of a common data model, and the creation of derived data to support visualization. Finally, visualization libraries provide interactive visualization using an extended version of NASA WorldWind. The OEF viewer supports visualization of terrains, point clouds, 3D volumes, imagery, cutting planes, isosurfaces, and more. Data may be color coded, shaded, and displayed above or below the terrain, always registered into a common coordinate space. The OEF architecture is open, and cross-platform software libraries are available separately for use with other software projects, while modules from other projects may be integrated into the OEF to extend its features. The OEF is currently being used to visualize data from EarthScope-related research in the Western US.

  16. Research on Visualization Design Method in the Field of New Media Software Engineering

    NASA Astrophysics Data System (ADS)

    Deqiang, Hu

    2018-03-01

    With science and technology increasingly developed, market competition increasingly fierce, and user demands growing, a new design and application method has emerged in the field of new-media software engineering: the visualization design method. Applying the visualization design method to new-media software engineering can not only improve the operational efficiency of new-media software engineering but, more importantly, enhance the quality of software development through appropriate media of communication and transformation; on this basis, it also continuously promotes the progress and development of new-media software engineering in China. This article therefore concretely analyses the application of the visualization design method in the field of new-media software engineering, starting from an overview of visualization design methods and building on a systematic analysis of the underlying technology.

  17. Simulation and Visualization of Chaos in a Driven Nonlinear Pendulum -- An Aid to Introducing Chaotic Systems in Physics

    NASA Astrophysics Data System (ADS)

    Akpojotor, Godfrey; Ehwerhemuepha, Louis; Amromanoh, Ogheneriobororue

    2013-03-01

    The presence of physical systems whose characteristics change in a seemingly erratic manner gives rise to the study of chaotic systems. The characteristics of these systems are due to their hypersensitivity to changes in initial conditions. In order to understand chaotic systems, some sort of simulation and visualization is pertinent. Consequently, in this work, we have simulated and graphically visualized chaos in a driven nonlinear pendulum as a means of introducing chaotic systems. The results obtained, which highlight the hypersensitivity of the pendulum, are used to discuss the effectiveness of teaching and learning the physics of chaotic systems using Python. This study is one of the many studies under the African Computational Science and Engineering Tour Project (PASET), which is using Python to model, simulate, and visualize concepts, laws, and phenomena in science and engineering to complement the teaching/learning of theory and experiment.
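
    In the same spirit as the simulations described, a few lines of Python exhibit the pendulum's hypersensitivity to initial conditions (the damping, drive amplitude, and drive frequency below are a commonly used chaotic regime from the computational-physics literature, assumed rather than taken from the paper):

        import numpy as np

        def driven_pendulum(theta0, t_max=60.0, dt=0.001,
                            q=0.5, f_drive=1.2, omega_drive=2.0 / 3.0):
            # Euler-Cromer integration of the damped, driven pendulum.
            n = int(t_max / dt)
            theta, omega = theta0, 0.0
            trace = np.empty(n)
            for i in range(n):
                t = i * dt
                omega += (-np.sin(theta) - q * omega
                          + f_drive * np.sin(omega_drive * t)) * dt
                theta += omega * dt
                trace[i] = theta
            return trace

        a = driven_pendulum(0.2)
        b = driven_pendulum(0.2 + 1e-4)   # tiny change in the initial angle
        print(np.abs(a - b)[::10000])     # separation grows rapidly with time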

  18. Identification of visual evoked response parameters sensitive to pilot mental state

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1988-01-01

    Systems analysis techniques were developed and demonstrated for modeling the electroencephalographic (EEG) steady state visual evoked response (ssVER), for use in EEG data compression and as an indicator of mental workload. The study focused on steady state frequency domain stimulation and response analysis, implemented with a sum-of-sines (SOS) stimulus generator and an off-line describing function response analyzer. Three major tasks were conducted: (1) VER related systems identification material was reviewed; (2) Software for experiment control and data analysis was developed and implemented; and (3) ssVER identification and modeling was demonstrated, via a mental loading experiment. It was found that a systems approach to ssVER functional modeling can serve as the basis for eventual development of a mental workload indicator. The review showed how transient visual evoked response (tVER) and ssVER research are related at the functional level, the software development showed how systems techniques can be used for ssVER characterization, and the pilot experiment showed how a simple model can be used to capture the basic dynamic response of the ssVER, under varying loads.
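
    The sum-of-sines idea is easy to demonstrate: stimulate at a few known frequencies and read the describing function off the Fourier coefficients at exactly those frequencies. A Python sketch with synthetic data (the sampling rate, component frequencies, gain, and lag are invented):

        import numpy as np

        fs = 256.0                           # sampling rate (Hz), assumed
        t = np.arange(0, 8.0, 1.0 / fs)
        freqs = np.array([6.0, 10.0, 15.0])  # SOS component frequencies

        stimulus = sum(np.sin(2 * np.pi * f * t) for f in freqs)

        # Synthetic "EEG": attenuated, phase-lagged stimulus plus noise.
        rng = np.random.default_rng(2)
        response = sum(0.3 * np.sin(2 * np.pi * f * t - 0.8) for f in freqs)
        response = response + rng.normal(0, 0.5, t.size)

        # Describing function: response/stimulus Fourier coefficient ratios,
        # evaluated only at the input frequencies (which fall on exact bins).
        S, R = np.fft.rfft(stimulus), np.fft.rfft(response)
        bins = (freqs * t.size / fs).astype(int)
        H = R[bins] / S[bins]
        print(np.abs(H), np.angle(H))        # gain and phase per frequency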

  19. Evaluation of Visualization Software

    NASA Technical Reports Server (NTRS)

    Globus, Al; Uselton, Sam

    1995-01-01

    Visualization software is widely used in scientific and engineering research. But computed visualizations can be very misleading, and the errors are easy to miss. We feel that the software producing the visualizations must be thoroughly evaluated and the evaluation process as well as the results must be made available. Testing and evaluation of visualization software is not a trivial problem. Several methods used in testing other software are helpful, but these methods are (apparently) often not used. When they are used, the description and results are generally not available to the end user. Additional evaluation methods specific to visualization must also be developed. We present several useful approaches to evaluation, ranging from numerical analysis of mathematical portions of algorithms to measurement of human performance while using visualization systems. Along with this brief survey, we present arguments for the importance of evaluations and discussions of appropriate use of some methods.

  20. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming, activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g., state machines), generate abstract test cases, and then convert them into concrete executable test cases (input and expected-output pairs). The generated concrete test cases were applied to an on-board software system. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
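
    The study's toolchain is not public, but the core step, deriving concrete (input, expected output) test cases from a state-machine model, can be sketched with a toy Mealy machine in Python (states, inputs, and outputs are invented):

        from collections import deque

        # Toy Mealy machine: state -> {input: (next_state, expected_output)}.
        FSM = {
            "IDLE":  {"arm": ("ARMED", "ack_arm"), "reset": ("IDLE", "noop")},
            "ARMED": {"fire": ("FIRED", "ignite"), "reset": ("IDLE", "ack_reset")},
            "FIRED": {"reset": ("IDLE", "ack_reset")},
        }

        def transition_cover(start="IDLE"):
            # BFS: shortest input sequence reaching every state...
            paths, queue = {start: []}, deque([start])
            while queue:
                s = queue.popleft()
                for inp, (nxt, out) in FSM[s].items():
                    if nxt not in paths:
                        paths[nxt] = paths[s] + [(inp, out)]
                        queue.append(nxt)
            # ...then extend each reaching path by one step per transition,
            # yielding one executable (input, expected output) sequence each.
            return [paths[s] + [(inp, out)]
                    for s in FSM for inp, (nxt, out) in FSM[s].items()]

        for case in transition_cover():
            print(case)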

  1. Integrating Visualizations into Modeling NEST Simulations

    PubMed Central

    Nowke, Christian; Zielasko, Daniel; Weyers, Benjamin; Peyser, Alexander; Hentschel, Bernd; Kuhlen, Torsten W.

    2015-01-01

    Modeling large-scale spiking neural networks showing realistic biological behavior in their dynamics is a complex and tedious task. Since these networks consist of millions of interconnected neurons, their simulation produces an immense amount of data. In recent years it has become possible to simulate even larger networks. However, solutions to assist researchers in understanding the simulation's complex emergent behavior by means of visualization are still lacking. While developing tools to partially fill this gap, we encountered the challenge of integrating these tools easily into the neuroscientists' daily workflow. To understand what makes this so challenging, we looked into the workflows of our collaborators and analyzed how they use the visualizations to solve their daily problems. We identified two major issues: first, the analysis process can rapidly change focus, which requires switching the visualization tool that assists in the current problem domain; second, because of the heterogeneous data that result from simulations, researchers want to relate different data sets in order to investigate them effectively. Since a monolithic application model, processing and visualizing all data modalities and reflecting all combinations of possible workflows in a holistic way, is most likely impossible to develop and to maintain, a software architecture that offers specialized visualization tools that run simultaneously and can be linked together to reflect the current workflow is a more feasible approach. To this end, we have developed a software architecture that allows neuroscientists to integrate visualization tools more closely into the modeling tasks. In addition, it forms the basis for semantic linking of different visualizations to reflect the current workflow. In this paper, we present this architecture and substantiate the usefulness of our approach by common use cases we encountered in our collaborative work. PMID:26733860

  2. Using VCL as an Aspect-Oriented Approach to Requirements Modelling

    NASA Astrophysics Data System (ADS)

    Amálio, Nuno; Kelsen, Pierre; Ma, Qin; Glodt, Christian

    Software systems are becoming larger and more complex. By tackling the modularisation of crosscutting concerns, aspect orientation draws attention to modularity as a means to address the problems of scalability, complexity and evolution in software systems development. Aspect-oriented modelling (AOM) applies aspect-orientation to the construction of models. Most existing AOM approaches are designed without a formal semantics, and use multi-view partial descriptions of behaviour. This paper presents an AOM approach based on the Visual Contract Language (VCL): a visual language for abstract and precise modelling, designed with a formal semantics, and comprising a novel approach to visual behavioural modelling based on design by contract where behavioural descriptions are total. By applying VCL to a large case study of a car-crash crisis management system, the paper demonstrates how modularity of VCL's constructs, at different levels of granularity, help to tackle complexity. In particular, it shows how VCL's package construct and its associated composition mechanisms are key in supporting separation of concerns, coarse-grained problem decomposition and aspect-orientation. The case study's modelling solution has a clear and well-defined modular structure; the backbone of this structure is a collection of packages encapsulating local solutions to concerns.

  3. Modernized Approach for Generating Reproducible Heterogeneity Using Transmitted-Light for Flow Visualization Experiments

    NASA Astrophysics Data System (ADS)

    Jones, A. A.; Holt, R. M.

    2017-12-01

    Image capture in flow experiments has been used in fluid mechanics research since the early 1970s. Interactions of fluid flow between the vadose zone and the permanent water table are of great interest because this zone is responsible for all recharge waters, pollutant transport, and irrigation efficiency for agriculture. Griffith et al. (2011) developed an approach in which constructed, reproducible, "geologically realistic" sand configurations are deposited in sand-filled experimental chambers for light-transmitted flow visualization experiments. This method creates reproducible, reverse-graded, layered (stratified) thin-slab sand chambers for point-source experiments visualizing multiphase flow through porous media. Reverse-graded stratification of sand chambers mimics many naturally occurring sedimentary deposits. Sand-filled chambers use light as a nonintrusive tool for measuring water saturation in two dimensions (2-D). Homogeneous and heterogeneous sand configurations can be produced to visualize the complex physics of the unsaturated zone. The experimental procedure developed by Griffith et al. (2011) was designed around now outdated and obsolete equipment. We have modernized this approach with a new Parker Daedal linear actuator and programmed projects/code for multiple configurations. We have also updated the Roper CCD software and image processing software to the latest industry standards. Modernization of the transmitted-light source and robotic equipment, redesigned experimental chambers, and newly developed analytical procedures have greatly reduced the time and cost per experiment. We have verified the ability of the new equipment to generate reproducible heterogeneous sand-filled chambers and demonstrated the functionality of the new equipment and procedures by reproducing several gravity-driven fingering experiments conducted by Griffith (2008).

  4. Ontology-Driven Information Integration

    NASA Technical Reports Server (NTRS)

    Tissot, Florence; Menzel, Chris

    2005-01-01

    Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.

  5. Road embankment and slope stabilization.

    DOT National Transportation Integrated Search

    2010-07-31

    This report and the accompanying software are part of efforts to improve the characterization and analysis of pile-stabilized slopes using one or two rows of driven piles. A combination of the limit equilibrium analysis and strain wedge (SW) model...

  6. RVA: Reservoir Visualization and Analysis (ParaView plugin)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keefer, Donald A.; Shaffer, Eric G.; Storsved, Brynne

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64-bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, the Department of Computer Science, and the National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models, and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data-mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality supports a range of reservoir visualization and analysis needs, including sophisticated connectivity analysis, cross sections through simulation results between selected wells, simplified volumetric calculations, global vertical exaggeration adjustments, ingestion of UTChem simulation results, ingestion of Isatis geostatistical framework models, interrogation of joint geologic and reservoir modeling results, joint visualization and analysis of well history files, location-targeted visualization, advanced correlation analysis, visualization of flow paths, and creation of static images and animations highlighting targeted reservoir features.

  7. CoCoNUT: an efficient system for the comparison and analysis of genomes

    PubMed Central

    2008-01-01

    Background Comparative genomics is the analysis and comparison of genomes from different species. This area of research is driven by the large number of sequenced genomes and relies heavily on efficient algorithms and software to perform pairwise and multiple genome comparisons. Results Most of the software tools available are tailored for one specific task. In contrast, we have developed a novel system, CoCoNUT (Computational Comparative geNomics Utility Toolkit), that allows solving several different tasks in a unified framework: (1) finding regions of high similarity among multiple genomic sequences and aligning them, (2) comparing two draft or multi-chromosomal genomes, (3) locating large segmental duplications in large genomic sequences, and (4) mapping cDNA/EST to genomic sequences. Conclusion CoCoNUT is competitive with other software tools with respect to the quality of the results. The use of state-of-the-art algorithms and data structures allows CoCoNUT to solve comparative genomics tasks more efficiently than previous tools. With its improved user interface (including an interactive visualization component), CoCoNUT provides a unified, versatile, and easy-to-use software tool for large-scale studies in comparative genomics. PMID:19014477

  8. Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-11-01

    Model-Driven Engineering (MDE) is the software engineering discipline that treats models as the primary artifacts of software development, maintenance, and evolution, through model transformation. Model-Driven Architecture (MDA) is the approach to software development under the MDE framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGR program to the C++0x language standards.

    New version program summary
    Program title: RHEEDGR-09
    Catalogue identifier: ADUY_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 21 263
    No. of bytes in distributed program, including test data, etc.: 1 266 982
    Distribution format: tar.gz
    Programming language: CodeGear C++ Builder
    Computer: Intel Core Duo-based PC
    Operating system: Windows XP, Vista, 7
    RAM: more than 1 MB
    Classification: 4.3, 7.2, 6.2, 8, 14
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying the growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Solution method: The calculations are based on a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential that is periodic in the dimension perpendicular to the surface.
    Reasons for new version: Responding to user feedback, the graphical version of the RHEED program has been upgraded to the C++0x language standards. The functionality and documentation of the program have also been improved.
    Summary of revisions: Model-Driven Architecture (MDA) is the approach defined by the Object Management Group (OMG) for software development under the Model-Driven Engineering framework [1]. The MDA approach shifts the focus of software development from writing code to building models; by adopting a model-centric approach, MDA aims to automate the generation of system implementation artifacts directly from the model. Three models form the core of the MDA: (i) the Computation Independent Model (CIM), focused on the basic requirements of the system; (ii) the Platform Independent Model (PIM), used by software architects and designers and focused on the operational capabilities of a system outside the context of a specific platform; and (iii) the Platform Specific Model (PSM), used by software developers and programmers, which includes details relating to the system on a specific platform. Basic requirements for the calculation of the RHEED intensity rocking curves in the one-beam condition have been described in Ref. [2]. Fig. 1 shows the PIM for the present version of the program, and Fig. 2 the PSM. The TGraph2D.bpk package has been recompiled to Graph2D0x.bpl and upgraded to the C++0x language standards. Fig. 3 shows the PSM of the Graph2D component, now manifested by the Graph2D0x.bpl package; this diagram is a graphic presentation of the static view, showing a collection of declarative model elements and their relationships. Installation instructions for the Graph2D0x package can be found in the new distribution. The program requires the user to provide the appropriate parameters for the crystal structure under investigation; these are loaded from the parameters.ini file at run-time. Instructions for preparing the .ini files can be found in the new distribution. The program enables one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis, and the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. The graphical user interface (GUI) has been reconstructed. The program has been compiled with English/USA regional and language options.
    Unusual features: The program is distributed in the form of the main projects RHEEDGR_09.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using CodeGear C++ Builder 2009 compilers.
    Running time: The typical running time is machine- and user-parameter dependent.
    References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003, http://www.omg.org/cgi-bin/doc?omg/03-06-01. [2] A. Daniluk, Comput. Phys. Comm. 166 (2005) 123.

  9. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  10. Visualization and Analysis of Climate Simulation Performance Data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses, etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load-imbalance issues. High-resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service, in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, and present new ideas and solutions that greatly aided our understanding. The software employed is based on Avizo Green, ParaView, and SimVis, as well as our own software extensions.

  11. P-MartCancer-Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets.

    PubMed

    Webb-Robertson, Bobbie-Jo M; Bramer, Lisa M; Jensen, Jeffrey L; Kobold, Markus A; Stratton, Kelly G; White, Amanda M; Rodland, Karin D

    2017-11-01

P-MartCancer is an interactive web-based software environment that enables statistical analyses of peptide or protein data, quantitated from mass spectrometry-based global proteomics experiments, without requiring in-depth knowledge of statistical programming. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification, and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium, and the capability to analyze them at the peptide, gene, and protein levels. P-MartCancer is deployed as a web service (https://pmart.labworks.org/cptac.html), alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/). Cancer Res; 77(21); e47-50. ©2017 AACR.

  12. An overview of suite for automated global electronic biosurveillance (SAGES)

    NASA Astrophysics Data System (ADS)

    Lewis, Sheri L.; Feighner, Brian H.; Loschen, Wayne A.; Wojcik, Richard A.; Skora, Joseph F.; Coberly, Jacqueline S.; Blazes, David L.

    2012-06-01

    Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.
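
    The aberration-detection idea behind such surveillance tools can be sketched in a few lines. The following is a toy EWMA-style detector on synthetic daily counts, not SAGES code; the smoothing weight and threshold are illustrative choices:

      # Hedged sketch: flag days whose counts deviate strongly from a smoothed
      # baseline, in the spirit of electronic disease-surveillance alerting.
      import numpy as np

      counts = np.array([12, 9, 11, 10, 13, 12, 11, 10, 25, 27, 12, 11])
      lam, k = 0.3, 3.0              # EWMA weight and alarm threshold (sigmas)
      sigma = counts[:7].std()       # baseline dispersion, held fixed for brevity

      mu = counts[0]
      for day, c in enumerate(counts[1:], start=1):
          if abs(c - mu) > k * sigma:
              print(f"day {day}: count {c} flags an aberration (baseline {mu:.1f})")
          mu = lam * c + (1 - lam) * mu   # update the smoothed baseline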

  13. A Study of Visualization for Mathematics Education

    NASA Technical Reports Server (NTRS)

    Daugherty, Sarah C.

    2008-01-01

Graphical representations such as figures, illustrations, and diagrams play a critical role in mathematics and they are equally important in mathematics education. However, graphical representations in mathematics textbooks are static, i.e., they are used to illustrate only a specific example or a limited set of examples. By using computer software to visualize mathematical principles, there is virtually no limit to the number of specific cases and examples that can be demonstrated. However, we have not seen widespread adoption of visualization software in mathematics education. There are currently a number of software packages that provide visualization of mathematics for research and also software packages specifically developed for mathematics education. We conducted a survey of mathematics visualization software packages, summarized their features and user bases, and analyzed their limitations. In this survey, we focused on evaluating the software packages for their use with mathematical subjects adopted by institutions of secondary education in the United States (middle schools and high schools), including algebra, geometry, trigonometry, and calculus. We found that cost, complexity, and lack of flexibility are the major factors that hinder the widespread use of mathematics visualization software in education.

  14. Software Tools for Emittance Measurement and Matching for 12 GeV CEBAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dennis L.

    2016-05-01

This paper discusses model-driven setup of the Continuous Electron Beam Accelerator Facility (CEBAF) for the 12 GeV era, focusing on qsUtility. qsUtility is a set of software tools created to perform emittance measurements, analyze those measurements, and compute optics corrections based upon the measurements. qsUtility was developed as a toolset to reduce machine configuration time and improve reproducibility by way of an accurate accelerator model, and to provide Operations staff with tools to measure and correct machine optics with little or no assistance from optics experts.
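
    qsUtility's internals are not shown in the abstract, but the generic quadrupole-scan analysis it automates can be sketched. Assuming a thin-lens quadrupole of strength q followed by a drift of length L, the measured beam size obeys sigma^2(q) = A q^2 + B q + C, and the geometric emittance follows as sqrt(AC - B^2/4)/L^2; the numbers below are synthetic:

      # Hedged sketch of a textbook thin-lens quadrupole-scan emittance fit;
      # this is the generic technique, not qsUtility itself.
      import numpy as np

      L = 2.0                                  # drift length [m], hypothetical
      q = np.linspace(-0.5, 0.5, 11)           # scanned quad strengths [1/m]
      sigma2 = 4e-6 * q**2 + 1e-6 * q + 2e-7   # synthetic measured sigma^2 [m^2]

      # Fit sigma^2(q) and recover the emittance; the closed form below follows
      # the thin-lens + drift convention stated above and should be checked
      # against the conventions of a real beamline.
      A, B, C = np.polyfit(q, sigma2, 2)
      emittance = np.sqrt(A * C - B**2 / 4.0) / L**2
      print(f"fitted geometric emittance ~ {emittance:.3e} m rad")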

  15. The Visual System of Zebrafish and its Use to Model Human Ocular Diseases

    PubMed Central

    Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF

    2011-01-01

    Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048

  16. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Brugel, Edward W.; Domik, Gitta O.; Ayres, Thomas R.

    1993-01-01

    The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation and the user interface as major components to the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding on the importance of collaboration between astrophysicists and computer scientists.

  17. Using Computer Visualization Models in High School Chemistry: The Role of Teacher Beliefs.

    ERIC Educational Resources Information Center

    Robblee, Karen M.; Garik, Peter; Abegg, Gerald L.; Faux, Russell; Horwitz, Paul

    This paper discusses the role of high school chemistry teachers' beliefs in implementing computer visualization software to teach atomic and molecular structure from a quantum mechanical perspective. The informants in this study were four high school chemistry teachers with comparable academic and professional backgrounds. These teachers received…

  18. Designing a Visual Factors-Based Screen Display Interface: The New Role of the Graphic Technologist.

    ERIC Educational Resources Information Center

    Faiola, Tony; DeBloois, Michael L.

    1988-01-01

    Discusses the role of the graphic technologist in preparing computer screen displays for interactive videodisc systems, and suggests screen design guidelines. Topics discussed include the grid system; typography; visual factors research; color; course mobility through branching and software menus; and a model of course integration. (22 references)…

  19. A high-speed, large-capacity, 'jukebox' optical disk system

    NASA Technical Reports Server (NTRS)

    Ammon, G. J.; Calabria, J. A.; Thomas, D. T.

    1985-01-01

Two optical disk 'jukebox' mass storage systems, which provide access to any data in a store of 10^13 bits (1250 gigabytes) within six seconds, have been developed. The optical disk jukebox system is divided into two units, including a hardware/software controller and a disk drive. The controller provides flexibility and adaptability, through a ROM-based microcode-driven data processor and a ROM-based software-driven control processor. The cartridge storage module contains 125 optical disks housed in protective cartridges. Attention is given to a conceptual view of the disk drive unit, the NASA optical disk system, the NASA database management system configuration, the NASA optical disk system interface, and an open systems interconnect reference model.

  20. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice.

    PubMed

    Towal, R Blythe; Mormann, Milica; Koch, Christof

    2013-10-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
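
    A toy version of the winning model structure (an additive saliency/value drift, weighted roughly one-third to two-thirds) can be simulated directly; this sketch is an illustration in numpy, not the authors' fitted model:

      # Hedged sketch: first-passage simulation of one drift-diffusion process
      # whose drift additively combines saliency and value, weighted 1:2.
      import numpy as np

      def ddm_first_passage(saliency, value, thresh=1.0, dt=1e-3, noise=1.0, rng=None):
          rng = rng or np.random.default_rng()
          drift = (1 / 3) * saliency + (2 / 3) * value   # additive combination
          x, t = 0.0, 0.0
          while abs(x) < thresh:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return t, x > 0                                # decision time, boundary hit

      rng = np.random.default_rng(1)
      times = [ddm_first_passage(0.8, 1.5, rng=rng)[0] for _ in range(200)]
      print(f"mean decision time: {np.mean(times):.3f} s")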

  1. Simultaneous modeling of visual saliency and value computation improves predictions of economic choice

    PubMed Central

    Towal, R. Blythe; Mormann, Milica; Koch, Christof

    2013-01-01

    Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. PMID:24019496

  2. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  3. Constructing graph models for software system development and analysis

    NASA Astrophysics Data System (ADS)

    Pogrebnoy, Andrey V.

    2017-01-01

We propose a concept for creating instrumentation to support the rationale behind functional and structural decisions during software system (SS) development. We propose to develop the SS simultaneously on two models: a functional model (FM) and a structural model (SM). The FM is the source code of the SS. An adequate representation of the FM in the form of a graph model (GM) is generated automatically and is called the SM. The problem of creating and visualizing the GM is considered from the point of view of applying it as a uniform platform for adequately representing the SS source code. We propose three levels of GM detail: GM1 for visual analysis of the source code and for SS version control, GM2 for resource optimization and analysis of connections between SS components, and GM3 for analysis of the SS functioning in dynamics. The paper includes examples of constructing all levels of GM.
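
    One plausible realization of an automatically derived graph model is a call graph extracted from source text. The sketch below uses Python's ast module on a toy program; it illustrates the idea of a GM2-style connection graph, not the authors' tool:

      # Hedged sketch: derive caller -> callee edges from source code with ast.
      import ast

      source = """
      def load(path): return open(path).read()
      def parse(text): return text.split()
      def main(): parse(load("data.txt"))
      """

      tree = ast.parse(source)
      edges = set()
      for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
          for call in [n for n in ast.walk(fn) if isinstance(n, ast.Call)]:
              if isinstance(call.func, ast.Name):
                  edges.add((fn.name, call.func.id))

      print(sorted(edges))   # [('main', 'load'), ('main', 'parse')]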

  4. ADOPT: A tool for automatic detection of tectonic plates at the surface of convection models

    NASA Astrophysics Data System (ADS)

    Mallard, C.; Jacquet, B.; Coltice, N.

    2017-08-01

Mantle convection models with plate-like behavior produce surface structures comparable to Earth's plate boundaries. However, analyzing those structures is a difficult task, since convection models produce, as on Earth, diffuse deformation and elusive plate boundaries. We therefore present and share a quantitative tool to identify plate boundaries and produce plate polygon layouts from results of numerical models of convection: Automatic Detection Of Plate Tectonics (ADOPT). This digital tool operates within the free open-source visualization software ParaView. It is based on image segmentation techniques to detect objects. The fundamental algorithm used in ADOPT is the watershed transform. We transform the output of convection models into a topographic map, the crest lines being the regions of deformation (plate boundaries) and the catchment basins being the plate interiors. We propose two generic protocols (the field and the distance methods) that we test against an independent visual detection of plate polygons. We show that ADOPT is effective in identifying the smaller plates and in closing plate polygons in areas where boundaries are diffuse or elusive. ADOPT allows the export of plate polygons in the standard OGR-GMT format for visualization, modification, and analysis under generic software such as GMT or GPlates.
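
    The core technique, a watershed transform over a deformation-derived topography, is easy to demonstrate outside ParaView. The sketch below assumes scikit-image and uses a synthetic ridge as the plate boundary:

      # Hedged sketch of the watershed idea behind ADOPT, on synthetic data;
      # ADOPT itself runs inside ParaView.
      import numpy as np
      from skimage.segmentation import watershed

      # Synthetic "topography": a ridge of high deformation at x = 50 separates
      # two low basins (the plate interiors).
      y, x = np.mgrid[0:100, 0:100]
      topo = np.exp(-((x - 50) ** 2) / 50.0)

      # One seed marker per presumed plate interior, then flood the basins.
      markers = np.zeros_like(topo, dtype=int)
      markers[50, 10], markers[50, 90] = 1, 2
      plates = watershed(topo, markers)

      print(np.unique(plates))   # two plate labels: [1 2]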

  5. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

This study presents new software to perform analysis and visualization of fracture network systems in 3D. The software modules for analysis and visualization (BOUNDARY, DISK3D, FNTWK3D, CSECT, and BDM) were developed using Microsoft Visual Basic.NET and the open-source Visualization Toolkit (VTK) library. Two case studies showed that these modules handle, respectively, construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps, and management of borehole data. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability, and hydraulic behavior of fractured rock masses.
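
    A module like DISK3D can be approximated in a few lines of VTK pipeline code. The sketch below renders one fracture as a disk using VTK's Python bindings (the paper's modules are Visual Basic.NET; this is only the underlying VTK idea):

      # Hedged sketch: render a single fracture plane as a disk with VTK.
      import vtk

      disk = vtk.vtkDiskSource()
      disk.SetInnerRadius(0.0)               # solid disk = one fracture plane
      disk.SetOuterRadius(1.0)
      disk.SetCircumferentialResolution(64)

      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputConnection(disk.GetOutputPort())
      actor = vtk.vtkActor()
      actor.SetMapper(mapper)
      actor.RotateX(30)                      # tilt the fracture plane

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)
      window.Render()
      interactor.Start()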

  6. Tau pathology does not affect experience-driven single-neuron and network-wide Arc/Arg3.1 responses.

    PubMed

    Rudinskiy, Nikita; Hawkes, Jonathan M; Wegmann, Susanne; Kuchibhotla, Kishore V; Muzikansky, Alona; Betensky, Rebecca A; Spires-Jones, Tara L; Hyman, Bradley T

    2014-06-10

    Intraneuronal neurofibrillary tangles (NFTs) - a characteristic pathological feature of Alzheimer's and several other neurodegenerative diseases - are considered a major target for drug development. Tangle load correlates well with the severity of cognitive symptoms and mouse models of tauopathy are behaviorally impaired. However, there is little evidence that NFTs directly impact physiological properties of host neurons. Here we used a transgenic mouse model of tauopathy to study how advanced tau pathology in different brain regions affects activity-driven expression of immediate-early gene Arc required for experience-dependent consolidation of long-term memories. We demonstrate in vivo that visual cortex neurons with tangles are as likely to express comparable amounts of Arc in response to structured visual stimulation as their neighbors without tangles. Probability of experience-dependent Arc response was not affected by tau tangles in both visual cortex and hippocampal pyramidal neurons as determined postmortem. Moreover, whole brain analysis showed that network-wide activity-driven Arc expression was not affected by tau pathology in any of the brain regions, including brain areas with the highest tangle load. Our findings suggest that intraneuronal NFTs do not affect signaling cascades leading to experience-dependent gene expression required for long-term synaptic plasticity.

  7. Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software

    NASA Astrophysics Data System (ADS)

    Gowda, P. H.; Moorhead, J.; Brauer, D. K.

    2017-12-01

Evapotranspiration (ET) is a major component of the hydrologic cycle. ET data are used for a variety of water management and research purposes such as irrigation scheduling, water and crop modeling, streamflow, water availability, and many more. Remote sensing products have been widely used to create spatially representative ET data sets which provide important information from field to regional scales. As UAV capabilities increase, remote sensing use is likely to also increase. For that purpose, scientists at the USDA-ARS research laboratory in Bushland, TX developed the Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software. The BEARS software is a Java-based application that allows users to process remote sensing data and generate ET outputs using predefined models, or to enter custom equations and models. The capability to define new equations and build new models expands the applicability of the BEARS software beyond ET mapping to any remote sensing application. The software also includes an image viewing tool that allows users to visualize outputs, as well as draw an area of interest using various shapes. This software is freely available from the USDA-ARS Conservation and Production Research Laboratory website.

  8. CAMBerVis: visualization software to support comparative analysis of multiple bacterial strains.

    PubMed

    Woźniak, Michał; Wong, Limsoon; Tiuryn, Jerzy

    2011-12-01

A number of inconsistencies in genome annotations are documented among bacterial strains. Visualization of the differences may help biologists to make correct decisions in spurious cases. We have developed a visualization tool, CAMBerVis, to support comparative analysis of multiple bacterial strains. The software manages simultaneous visualization of multiple bacterial genomes, enabling visual analysis focused on genome structure annotations. The CAMBerVis software is freely available at the project website: http://bioputer.mimuw.edu.pl/camber. Input datasets for Mycobacterium tuberculosis and Staphylococcus aureus are integrated with the software as examples. Contact: m.wozniak@mimuw.edu.pl. Supplementary data are available at Bioinformatics online.

  9. Tethys: A Platform for Water Resources Modeling and Decision Support Apps

    NASA Astrophysics Data System (ADS)

    Swain, N. R.; Christensen, S. D.; Jones, N.; Nelson, E. J.

    2014-12-01

    Cloud-based applications or apps are a promising medium through which water resources models and data can be conveyed in a user-friendly environment—making them more accessible to decision-makers and stakeholders. In the context of this work, a water resources web app is a web application that exposes limited modeling functionality for a scenario exploration activity in a structured workflow (e.g.: land use change runoff analysis, snowmelt runoff prediction, and flood potential analysis). The technical expertise required to develop water resources web apps can be a barrier to many potential developers of water resources apps. One challenge that developers face is in providing spatial storage, analysis, and visualization for the spatial data that is inherent to water resources models. The software projects that provide this functionality are non-standard to web development and there are a large number of free and open source software (FOSS) projects to choose from. In addition, it is often required to synthesize several software projects to provide all of the needed functionality. Another challenge for the developer will be orchestrating the use of several software components. Consequently, the initial software development investment required to deploy an effective water resources cloud-based application can be substantial. The Tethys Platform has been developed to lower the technical barrier and minimize the initial development investment that prohibits many scientists and engineers from making use of the web app medium. Tethys synthesizes several software projects including PostGIS for spatial storage, 52°North WPS for spatial analysis, GeoServer for spatial publishing, Google Earth™, Google Maps™ and OpenLayers for spatial visualization, and Highcharts for plotting tabular data. The software selection came after a literature review of software projects being used to create existing earth sciences web apps. All of the software is linked via a Python-powered software development kit (SDK). Tethys developers use the SDK to build their apps and incorporate the needed functionality from the software suite. The presentation will include several apps that have been developed using Tethys to demonstrate its capabilities. Based upon work supported by the National Science Foundation under Grant No. 1135483.

  10. Interactive Visualization of Assessment Data: The Software Package Mondrian

    ERIC Educational Resources Information Center

    Unlu, Ali; Sargin, Anatol

    2009-01-01

    Mondrian is state-of-the-art statistical data visualization software featuring modern interactive visualization techniques for a wide range of data types. This article reviews the capabilities, functionality, and interactive properties of this software package. Key features of Mondrian are illustrated with data from the Programme for International…

  11. Novel probabilistic models of spatial genetic ancestry with applications to stratification correction in genome-wide association studies.

    PubMed

    Bhaskar, Anand; Javanmard, Adel; Courtade, Thomas A; Tse, David

    2017-03-15

    Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. Our algorithm Geographic Ancestry Positioning (GAP) relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. We also develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches. Our software is available at https://github.com/anand-bhaskar/gap . abhaskar@stanford.edu or ajavanma@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
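
    The underlying intuition, recovering a two-dimensional continuum from pairwise genetic distances, can be illustrated with classical multidimensional scaling; the sketch below uses scikit-learn on a toy distance matrix and is a simplified stand-in for GAP, not the GAP algorithm:

      # Hedged sketch: embed samples in 2-D from pairwise genetic distances
      # via metric MDS (toy distance matrix; not the GAP method itself).
      import numpy as np
      from sklearn.manifold import MDS

      D = np.array([[0.0, 0.1, 0.8, 0.9],
                    [0.1, 0.0, 0.7, 0.8],
                    [0.8, 0.7, 0.0, 0.2],
                    [0.9, 0.8, 0.2, 0.0]])

      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(D)
      print(coords)   # inferred 2-D "spatial ancestry" coordinates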

  12. Reflection of a Year Long Model-Driven Business and UI Modeling Development Project

    NASA Astrophysics Data System (ADS)

    Sukaviriya, Noi; Mani, Senthil; Sinha, Vibha

Model-driven software development enables users to specify an application at a high level - a level that better matches the problem domain. It also promises users better analysis and automation. Our work embarks on two collaborating domains - business process and human interactions - to build an application. Business modeling expresses business operations and flows, then creates the business flow implementation. Human interaction modeling expresses a UI design, its relationship with business data, logic, and flow, and can generate a working UI. This double modeling approach automates the production of a working system with UI and business logic connected. This paper discusses the human aspects of this modeling approach after a year of building a procurement outsourcing contract application using the approach - the result of which was deployed in December 2008. The paper discusses the happy endings, and some heartache, across multiple areas. We end with insights on how a model-driven approach could do better for humans in the process.

  13. Towards a visual modeling approach to designing microelectromechanical system transducers

    NASA Astrophysics Data System (ADS)

    Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim

    1999-12-01

In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the analog and mixed-signal extension of the VHDL hardware description language).

  14. Multi-Mission Power Analysis Tool (MMPAT) Version 3

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Chang, George W.; Chen, Fannie C.

    2012-01-01

The Multi-Mission Power Analysis Tool (MMPAT) simulates a spacecraft power subsystem including the power source (solar array and/or radioisotope thermoelectric generator), bus-voltage control, secondary battery (lithium-ion or nickel-hydrogen), thermostatic heaters, and power-consuming equipment. It handles multiple mission types including heliocentric orbiters, planetary orbiters, and surface operations. Because it is parametrically driven and offers user-programmable features, the need for software modifications when configuring it for a particular spacecraft is reduced or even eliminated. It provides multiple levels of fidelity, thereby fulfilling the vast majority of a project's power simulation needs throughout the lifecycle. It can operate in a stand-alone mode with a graphical user interface, in batch mode, or as a library linked with other tools. This software can simulate all major aspects of a spacecraft power subsystem. It is parametrically driven to reduce or eliminate the need for a programmer. Added flexibility is provided through user-designed state models and table-driven parameters. MMPAT is designed to be used by a variety of users, such as power subsystem engineers for sizing power subsystem components; mission planners for adjusting mission scenarios using power profiles generated by the model; system engineers for performing system-level trade studies using the results of the model during the early design phases of a spacecraft; and operations personnel for high-fidelity modeling of the essential power aspect of the planning picture.

  15. Test-driven programming

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2013-12-01

This paper presents some possibilities for implementing test-driven development as a programming method. It offers a different point of view on the creation of advanced programming techniques: build the tests before the program source, with all necessary software tools and modules. This nontraditional approach, which eases the programmer's work by building tests first, is a preferable way of developing software. It allows comparatively simple programming (applied with different object-oriented programming languages such as JAVA, XML, PYTHON, etc.). It is a predictable way to develop software tools and helps in creating better software that is also easier to maintain. Test-driven programming is able to replace more complicated conventional paradigms used by many programmers.
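
    A minimal test-first example in Python's unittest makes the workflow concrete: the test is written before the function exists, and the implementation is added only to make it pass (the function and names are invented for illustration):

      # Hedged sketch of the test-driven workflow described above.
      import re
      import unittest

      def slugify(title):                     # implementation written second
          return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

      class TestSlugify(unittest.TestCase):   # test written first
          def test_spaces_and_case(self):
              self.assertEqual(slugify("Test-Driven Programming!"),
                               "test-driven-programming")

      if __name__ == "__main__":
          unittest.main()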

  16. Architecture of a platform for hardware-in-the-loop simulation of flying vehicle control systems

    NASA Astrophysics Data System (ADS)

    Belokon', S. A.; Zolotukhin, Yu. N.; Filippov, M. N.

    2017-07-01

    A hardware-software platform is presented, which is designed for the development and hardware-in-the-loop simulation of flying vehicle control systems. This platform ensures the construction of the mathematical model of the plant, development of algorithms and software for onboard radioelectronic equipment and ground control station, and visualization of the three-dimensional model of the vehicle and external environment of the cockpit in the simulator training mode.

  17. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available as a Microsoft Visual Studio® 2013 solution, and Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  18. Intelligent Model Management in a Forest Ecosystem Management Decision Support System

    Treesearch

    Donald Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mark Twery; H. Michael Rauscher; Peter Knopp; Scott Thomasma; Mayukh Dass; Hajime Uchiyama

    2002-01-01

    Decision making for forest ecosystem management can include the use of a wide variety of modeling tools. These tools include vegetation growth models, wildlife models, silvicultural models, GIS, and visualization tools. NED-2 is a robust, intelligent, goal-driven decision support system that integrates tools in each of these categories. NED-2 uses a blackboard...

  19. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

PRay is a graphical user interface for the interactive display and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjustment of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open-source software written in the scripting language Perl, runs on Unix-like operating systems, including Mac OS X, and provides a version-controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).

  20. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  1. Older drivers and rapid deceleration events: Salisbury Eye Evaluation Driving Study.

    PubMed

    Keay, Lisa; Munoz, Beatriz; Duncan, Donald D; Hahn, Daniel; Baldwin, Kevin; Turano, Kathleen A; Munro, Cynthia A; Bandeen-Roche, Karen; West, Sheila K

    2013-09-01

Drivers who rapidly change speed while driving may be more at risk for a crash. We sought to determine the relationship of demographic, vision, and cognitive variables with episodes of rapid deceleration during five days of normal driving in a cohort of older drivers. In the Salisbury Eye Evaluation Driving Study, 1425 older drivers aged 67-87 were recruited from the Maryland Motor Vehicle Administration's rolls for licensees in Salisbury, Maryland. Participants had several measures of vision tested: visual acuity, contrast sensitivity, visual fields, and the attentional visual field. Participants were also tested for various domains of cognitive function including executive function, attention, psychomotor speed, and visual search. A custom-created driving monitoring system (DMS) was used to capture rapid deceleration events (RDEs), defined as at least 350 milli-g deceleration, during a five-day period of monitoring. The rate of RDEs per mile driven was modeled using a negative binomial regression model with an offset of the logarithm of the number of miles driven. We found that 30% of older drivers had one or more RDEs during the five-day period, and of those, about 1/3 had four or more. The rate of RDEs per mile driven was highest for those drivers driving <59 miles during the 5-day period of monitoring. However, older drivers with RDEs were more likely to have better scores in cognitive tests of psychomotor speed and visual search, and to have faster brake reaction times. Further, greater average speed and maximum speed per driving segment were protective against RDEs. In conclusion, contrary to our hypothesis, older drivers who perform rapid decelerations tend to be more "fit", with better measures of vision and cognition compared to those who do not have events of rapid deceleration. Copyright © 2012 Elsevier Ltd. All rights reserved.
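
    The study's statistical model, a negative binomial regression of event counts with log(miles driven) as an offset, can be sketched with statsmodels on synthetic data (this is not the study's dataset or code):

      # Hedged sketch: negative binomial regression of RDE counts with a
      # log-exposure offset, on synthetic data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 200
      miles = rng.uniform(20, 300, n)              # miles driven in 5 days
      age = rng.uniform(67, 87, n)
      rate = np.exp(-4.0 + 0.01 * (age - 77))      # synthetic RDE rate per mile
      rde = rng.poisson(rate * miles)              # synthetic event counts

      X = sm.add_constant(age)
      model = sm.GLM(rde, X, family=sm.families.NegativeBinomial(),
                     offset=np.log(miles))         # exposure enters as offset
      print(model.fit().summary())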

  2. Older Drivers and Rapid Deceleration Events: Salisbury Eye Evaluation Driving Study

    PubMed Central

    Keay, Lisa; Munoz, Beatriz; Duncan, Donald D; Hahn, Daniel; Baldwin, Kevin; Turano, Kathleen A; Munro, Cynthia A; Bandeen-Roche, Karen; West, Sheila K

    2012-01-01

Drivers who rapidly change speed while driving may be more at risk for a crash. We sought to determine the relationship of demographic, vision, and cognitive variables with episodes of rapid deceleration during five days of normal driving in a cohort of older drivers. In the Salisbury Eye Evaluation Driving Study, 1425 older drivers ages 67 to 87 were recruited from the Maryland Motor Vehicle Administration’s rolls for licensees in Salisbury, Maryland. Participants had several measures of vision tested: visual acuity, contrast sensitivity, visual fields, and the attentional visual field. Participants were also tested for various domains of cognitive function including executive function, attention, psychomotor speed, and visual search. A custom-created Driving Monitor System (DMS) was used to capture rapid deceleration events (RDE), defined as at least 350 milli-g deceleration, during a five-day period of monitoring. The rate of RDE per mile driven was modeled using a negative binomial regression model with an offset of the logarithm of the number of miles driven. We found that 30% of older drivers had one or more RDE during the five-day period, and of those, about 1/3 had four or more. The rate of RDE per mile driven was highest for those drivers driving <59 miles during the 5-day period of monitoring. However, older drivers with RDEs were more likely to have better scores in cognitive tests of psychomotor speed and visual search, and have faster brake reaction time. Further, greater average speed and maximum speed per driving segment was protective against RDE events. In conclusion, contrary to our hypothesis, older drivers who perform rapid decelerations tend to be more “fit”, with better measures of vision and cognition compared to those who do not have events of rapid deceleration. PMID:22742775

  3. TU-C-17A-03: An Integrated Contour Evaluation Software Tool Using Supervised Pattern Recognition for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Tan, J; Kavanaugh, J

Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time-consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using the object-oriented programming language C# and application programming interfaces, e.g., the Visualization Toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models of critical RT structures in real time with 360° visualization. Principal component analysis (PCA) was used so the system could self-update geometry variations of normal structures, with physician-approved RT contours as a training dataset. An in-house supervised PCA-based contour recognition method was used to automatically evaluate contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module in avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians’ clinical needs.
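
    The supervised PCA screening idea translates directly to a reconstruction-error check: fit PCA on features of approved contours and flag contours that reconstruct poorly. The sketch below uses scikit-learn and synthetic features (the paper's tool is C#/Accord.NET):

      # Hedged sketch: PCA-based normality screening of contour feature vectors,
      # trained on "physician-approved" examples (synthetic data throughout).
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      approved = rng.normal(0, 1, (100, 12))    # training: approved-contour features
      pca = PCA(n_components=4).fit(approved)

      def recon_error(x):
          return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

      threshold = np.percentile(recon_error(approved), 95)
      new_contour = rng.normal(0, 3, (1, 12))   # synthetic out-of-family contour
      flag = recon_error(new_contour)[0] > threshold
      print("abnormal contour" if flag else "within normal variation")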

  4. The geospatial modeling interface (GMI) framework for deploying and assessing environmental models

    USDA-ARS?s Scientific Manuscript database

    Geographical information systems (GIS) software packages have been used for close to three decades as analytical tools in environmental management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of ful...

  5. An open-source Java-based Toolbox for environmental model evaluation: The MOUSE Software Application

    USDA-ARS?s Scientific Manuscript database

    A consequence of environmental model complexity is that the task of understanding how environmental models work and identifying their sensitivities/uncertainties, etc. becomes progressively more difficult. Comprehensive numerical and visual evaluation tools have been developed such as the Monte Carl...

  6. Visualization techniques to aid in the analysis of multispectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Brugel, E. W.; Domik, Gitta O.; Ayres, T. R.

    1993-01-01

    The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation and the user interface as major components to the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding on the importance of collaboration between astrophysicists and computer scientists. Twenty-one examples of the use of visualization for astrophysical data are included with this report. Sixteen publications related to efforts performed during or initiated through work on this project are listed at the end of this report.

  7. CLMSVault: A Software Suite for Protein Cross-Linking Mass-Spectrometry Data Analysis and Visualization.

    PubMed

    Courcelles, Mathieu; Coulombe-Huntington, Jasmin; Cossette, Émilie; Gingras, Anne-Claude; Thibault, Pierre; Tyers, Mike

    2017-07-07

    Protein cross-linking mass spectrometry (CL-MS) enables the sensitive detection of protein interactions and the inference of protein complex topology. The detection of chemical cross-links between protein residues can identify intra- and interprotein contact sites or provide physical constraints for molecular modeling of protein structure. Recent innovations in cross-linker design, sample preparation, mass spectrometry, and software tools have significantly improved CL-MS approaches. Although a number of algorithms now exist for the identification of cross-linked peptides from mass spectral data, a dearth of user-friendly analysis tools represent a practical bottleneck to the broad adoption of the approach. To facilitate the analysis of CL-MS data, we developed CLMSVault, a software suite designed to leverage existing CL-MS algorithms and provide intuitive and flexible tools for cross-platform data interpretation. CLMSVault stores and combines complementary information obtained from different cross-linkers and search algorithms. CLMSVault provides filtering, comparison, and visualization tools to support CL-MS analyses and includes a workflow for label-free quantification of cross-linked peptides. An embedded 3D viewer enables the visualization of quantitative data and the mapping of cross-linked sites onto PDB structural models. We demonstrate the application of CLMSVault for the analysis of a noncovalent Cdc34-ubiquitin protein complex cross-linked under different conditions. CLMSVault is open-source software (available at https://gitlab.com/courcelm/clmsvault.git ), and a live demo is available at http://democlmsvault.tyerslab.com/ .

  8. An exchange format for use-cases of hospital information systems.

    PubMed

    Masuda, G; Sakamoto, N; Sakai, R; Yamamoto, R

    2001-01-01

Object-oriented software development is a powerful methodology for the development of large hospital information systems. We think the use-case-driven approach is particularly useful for such development. In the use-case-driven approach, use-cases are documented at the first stage of the software development process and are then used throughout all subsequent steps in a variety of ways. Therefore, it is important to exchange and share the use-cases and make effective use of them through the overall lifecycle of a development process. In this paper, we propose a method of sharing and exchanging use-case models between applications, developers, and projects. We design an XML-based exchange format for use-cases. We then discuss an application of the exchange format to support several software development activities. We preliminarily implemented a support system for object-oriented analysis based on the exchange format. The result shows that using the structural and semantic information in the exchange format enables the support system to assist the object-oriented analysis successfully.
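
    What such an XML exchange document might look like can be sketched with Python's xml.etree.ElementTree; the element and attribute names below are invented for illustration, since the paper defines its own schema:

      # Hedged sketch: emit a use-case in a hypothetical XML exchange format.
      import xml.etree.ElementTree as ET

      uc = ET.Element("useCase", id="UC-01", name="Register patient")
      ET.SubElement(uc, "actor").text = "Receptionist"
      steps = ET.SubElement(uc, "mainFlow")
      for i, s in enumerate(["Enter demographics", "Assign patient ID"], start=1):
          ET.SubElement(steps, "step", number=str(i)).text = s

      print(ET.tostring(uc, encoding="unicode"))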

  9. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
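
    The comparison step can be sketched as a small harness: evaluate the implementation and a stand-in for the formal-model oracle on the same inputs and require agreement within a tolerance (toy functions; the paper's pipeline uses PVSio outputs):

      # Hedged sketch: tolerance-based comparison of implementation outputs
      # against model-computed oracle values on random test cases.
      import numpy as np

      def implementation(x0, v, t):      # fielded code under test (toy kinematics)
          return x0 + v * t

      def model_oracle(x0, v, t):        # stand-in for values exported from PVSio
          return x0 + v * t

      rng = np.random.default_rng(42)
      inputs = rng.uniform(-100, 100, (1000, 3))
      impl = np.array([implementation(*row) for row in inputs])
      oracle = np.array([model_oracle(*row) for row in inputs])

      assert np.allclose(impl, oracle, atol=1e-9), "implementation diverges from model"
      print("all 1000 test cases agree within tolerance")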

  10. Just-in-time Database-Driven Web Applications

    PubMed Central

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  11. Implementation of an ADME enabling selection and visualization tool for drug discovery.

    PubMed

    Stoner, Chad L; Gifford, Eric; Stankovic, Charles; Lepsy, Christopher S; Brodfuehrer, Joanne; Prasad, J V N Vara; Surendran, Narayanan

    2004-05-01

    The pharmaceutical industry has large investments in compound library enrichment, high throughput biological screening, and biopharmaceutical (ADME) screening. As the number of compounds submitted for in vitro ADME screens increases, data analysis, interpretation, and reporting will become rate limiting in providing ADME-structure-activity relationship information to guide the synthetic strategy for chemical series. To meet these challenges, a software tool was developed and implemented that enables scientists to explore in vitro and in silico ADME and chemistry data in a multidimensional framework. The present work integrates physicochemical and ADME data, encompassing results for Caco-2 permeability, human liver microsomal half-life, rat liver microsomal half-life, kinetic solubility, measured log P, rule of 5 descriptors (molecular weight, hydrogen bond acceptors, hydrogen bond donors, calculated log P), polar surface area, chemical stability, and CYP450 3A4 inhibition. To facilitate interpretation of this data, a semicustomized software solution using Spotfire was designed that allows for multidimensional data analysis and visualization. The solution also enables simultaneous viewing and export of chemical structures with the corresponding ADME properties, enabling a more facile analysis of ADME-structure-activity relationship. In vitro and in silico ADME data were generated for 358 compounds from a series of human immunodeficiency virus protease inhibitors, resulting in a data set of 5370 experimental values which were subsequently analyzed and visualized using the customized Spotfire application. Implementation of this analysis and visualization tool has accelerated the selection of molecules for further development based on optimum ADME characteristics, and provided medicinal chemistry with specific, data driven structural recommendations for improvements in the ADME profile. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93: 1131-1141, 2004

  12. Students' different understandings of class diagrams

    NASA Astrophysics Data System (ADS)

    Boustedt, Jonas

    2012-03-01

    The software industry needs well-trained software designers, and one important aspect of software design is the ability to model software designs visually and understand what those visual models represent. However, previous research indicates that software design is a difficult task for many students. This article reports empirical findings from a phenomenographic investigation of how students understand class diagrams, Unified Modeling Language (UML) symbols, and their relations to object-oriented (OO) concepts. The informants were 20 Computer Science students from four different universities in Sweden. The results show qualitatively different ways of understanding and describing UML class diagrams and the "diamond symbols" representing aggregation and composition. The purpose of class diagrams was understood in varied ways, from describing them as documentation to a more advanced view relating them to communication. Descriptions of class diagrams ranged from seeing them as a specification of classes to a more advanced view in which they show hierarchical structures of classes and relations. The diamond symbols were seen simply as "relations"; a more advanced understanding distinguished the white and black diamonds as separate symbols for aggregation and composition. As a consequence of these results, it is recommended that UML be adopted in courses. It is briefly indicated how the phenomenographic results, in combination with variation theory, can be used by teachers to enhance students' possibilities of reaching an advanced understanding of phenomena related to UML class diagrams. Moreover, it is recommended that teachers put more effort into assessing skills in proper usage of the basic symbols and models, and that students be provided with opportunities to practise collaborative design, e.g. using whiteboards.
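
    The aggregation/composition distinction that students found hard to articulate can be made concrete in code. A minimal sketch (class names invented for illustration); the hollow diamond corresponds to aggregation, the filled diamond to composition:

    ```python
    class Engine:
        """A part whose lifetime is bound to the whole that creates it."""

    class Car:
        def __init__(self):
            # Composition (filled diamond): the Car creates and owns
            # its Engine; the part lives and dies with the whole.
            self.engine = Engine()

    class Player:
        """An independent object that may belong to several wholes."""

    class Team:
        def __init__(self, members):
            # Aggregation (hollow diamond): the Team merely references
            # Players created elsewhere; they can outlive the Team.
            self.members = list(members)

    players = [Player(), Player()]
    team = Team(players)
    del team   # the Player objects remain alive via the `players` list
    car = Car()
    del car    # with CPython reference counting, the owned Engine goes too
    ```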

  13. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  14. Domain Modeling and Application Development of an Archetype- and XML-based EHRS. Practical Experiences and Lessons Learnt.

    PubMed

    Kropf, Stefan; Chalopin, Claire; Lindner, Dirk; Denecke, Kerstin

    2017-06-28

    Access to patient data within a hospital or between hospitals is still problematic, since a variety of information systems is in use, applying different vendor-specific terminologies and underlying knowledge models. Moreover, the development of electronic health record systems (EHRSs) is time- and resource-consuming. Thus, there is a substantial need for a development strategy for standardized EHRSs. We apply a reuse-oriented process model and demonstrate its feasibility and realization on a practical medical use case: an EHRS holding all relevant data arising in the context of treatment of tumors of the sella region. In this paper, we describe the development process and our practical experiences. Requirements for the development of the EHRS were collected through interviews with a neurosurgeon and analysis of patient data. For modelling of patient data, we selected openEHR as the standard and exploited the software tools provided by the openEHR foundation. The patient information model forms the core of the development process, which comprises the EHR generation and the implementation of an EHRS architecture. Moreover, a reuse-oriented process model from the business domain was adapted to the development of the EHRS; it provides a suitable abstraction of both the modeling and the development of an EHR-centered EHRS. The information modeling process resulted in 18 archetypes that were aggregated into a template and formed the foundation of the model-driven development. The EHRs and the EHRS were developed using openEHR and W3C standards, tightly supported by well-established XML techniques. The GUI of the final EHRS integrates and visualizes information from various examinations, medical reports, findings and laboratory test results. We conclude that the development of a standardized overarching EHR and EHRS is feasible using openEHR and W3C standards, enabling a high degree of semantic interoperability. The standardized representation visualizes data and can thereby support clinicians' decision processes.

  15. Visualizing Astronomical Data with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2014-01-01

    We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.
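
    As a rough illustration of the catalog-import and camera-movement features described above, the following sketch uses Blender's Python API (bpy); the catalog data and object names are invented, and a real pipeline would first convert sky coordinates to Cartesian positions.

    ```python
    # Run inside Blender's Python console or via `blender --python thisfile.py`.
    import bpy
    import math

    # Hypothetical catalog: (x, y, z) positions derived from RA/Dec/distance.
    catalog = [(math.cos(i * 0.5) * 5, math.sin(i * 0.5) * 5, i * 0.3)
               for i in range(40)]

    # Build a point-cloud mesh from the catalog positions.
    mesh = bpy.data.meshes.new("galaxy_points")
    mesh.from_pydata(catalog, [], [])   # vertices only, no edges/faces
    mesh.update()

    obj = bpy.data.objects.new("GalaxyCatalog", mesh)
    bpy.context.collection.objects.link(obj)

    # Animate the camera orbiting the point cloud over 120 frames.
    cam = bpy.context.scene.camera
    if cam is not None:
        for frame in range(1, 121):
            angle = 2 * math.pi * frame / 120
            cam.location = (15 * math.cos(angle), 15 * math.sin(angle), 6)
            cam.keyframe_insert(data_path="location", frame=frame)
    ```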

  16. A Virtual World of Visualization

    NASA Technical Reports Server (NTRS)

    1998-01-01

    In 1990, Sterling Software, Inc., developed the Flow Analysis Software Toolkit (FAST) for NASA Ames under contract. FAST is a workstation-based modular analysis and visualization tool. It is used to visualize and animate grids and grid-oriented data, typically generated by finite difference, finite element and other analytical methods. FAST is now available through COSMIC, NASA's software storehouse.

  17. KinImmerse: Macromolecular VR for NMR ensembles

    PubMed Central

    Block, Jeremy N; Zielinski, David J; Chen, Vincent B; Davis, Ian W; Vinson, E Claire; Brady, Rachael; Richardson, Jane S; Richardson, David C

    2009-01-01

    Background In molecular applications, virtual reality (VR) and immersive virtual environments have generally been used and valued for the visual and interactive experience – to enhance intuition and communicate excitement – rather than as part of the actual research process. In contrast, this work develops a software infrastructure for research use and illustrates such use on a specific case. Methods The Syzygy open-source toolkit for VR software was used to write the KinImmerse program, which translates the molecular capabilities of the kinemage graphics format into software for display and manipulation in the DiVE (Duke immersive Virtual Environment) or other VR system. KinImmerse is supported by the flexible display construction and editing features in the KiNG kinemage viewer and it implements new forms of user interaction in the DiVE. Results In addition to molecular visualizations and navigation, KinImmerse provides a set of research tools for manipulation, identification, co-centering of multiple models, free-form 3D annotation, and output of results. The molecular research test case analyzes the local neighborhood around an individual atom within an ensemble of nuclear magnetic resonance (NMR) models, enabling immersive visual comparison of the local conformation with the local NMR experimental data, including target curves for residual dipolar couplings (RDCs). Conclusion The promise of KinImmerse for production-level molecular research in the DiVE is shown by the locally co-centered RDC visualization developed there, which gave new insights now being pursued in wider data analysis. PMID:19222844

  18. A robust and flexible Geospatial Modeling Interface (GMI) for deploying and evaluating natural resource models

    USDA-ARS?s Scientific Manuscript database

    Geographical information systems (GIS) software packages have been used for nearly three decades as analytical tools in natural resource management for geospatial data assembly, processing, storage, and visualization of input data and model output. However, with increasing availability and use of fu...

  19. Documentation and virtual reconstruction of historical objects in Peru damaged by an earthquake and climatic events

    NASA Astrophysics Data System (ADS)

    Hanzalová, K.; Pavelka, K.

    2013-07-01

    This paper deals with the possibilities of creating 3-D models and visualization techniques for presenting historical buildings and sites in Peru. The project Nasca/CTU documents historical objects using several techniques. This paper describes the documentation and visualization of two historical churches (the San Jose and San Xavier Churches) and the pre-Hispanic archaeological site La Ciudad Perdida de Huayuri (an abandoned town near Huayuri) in the Nasca region using photogrammetry and remote sensing. Both churches were damaged by an earthquake. Different processes were used to document these objects. First, PhotoModeler software was used for photogrammetric processing of the acquired images. The subsequent modelling of the two churches also differed: Google SketchUp software was used for the San Jose Church, while the 3-D model of the San Xavier Church was created in MicroStation software. For the modelling of the abandoned town near Huayuri, which was destroyed by a climatic event (El Niño), terrestrial photogrammetry, satellite data and GNSS measurements were applied. The general output of the project is a thematic map of this archaeological site; the C14 method was used for dating.

  20. Monitoring Areal Snow Cover Using NASA Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Harshburger, Brian J.; Blandford, Troy; Moore, Brandon

    2011-01-01

    The objective of this project is to develop products and tools to assist in the hydrologic modeling process, including tools to help prepare inputs for hydrologic models and improved methods for the visualization of streamflow forecasts. In addition, this project will facilitate the use of NASA satellite imagery (primarily snow cover imagery) by other federal and state agencies with operational streamflow forecasting responsibilities. A GIS software toolkit for monitoring areal snow cover extent and producing streamflow forecasts is being developed. This toolkit will be packaged as multiple extensions for ArcGIS 9.x and an open-source GIS software package. The toolkit will provide users with a means of ingesting NASA EOS satellite imagery (snow cover analysis), preparing hydrologic model inputs, and visualizing streamflow forecasts. Primary products include a software tool for predicting the presence of snow under clouds in satellite images; a software tool for producing gridded temperature and precipitation forecasts; and a suite of tools for visualizing hydrologic model forecasting results. The toolkit will be an expert system designed for operational users who need to generate accurate streamflow forecasts in a timely manner. The Remote Sensing of Snow Cover Toolbar will ingest snow cover imagery from multiple sources, including the MODIS Operational Snowcover Data, and convert them to gridded datasets that can be readily used. Statistical techniques will then be applied to the gridded snow cover data to predict the presence of snow under cloud cover. The toolbar has the ability to ingest both binary and fractional snow cover data. Binary mapping techniques use a set of thresholds to determine whether or not a pixel contains snow. Fractional mapping techniques provide information regarding the percentage of each pixel that is covered with snow. After the imagery has been ingested, physiographic data is attached to each cell in the snow cover image. This data can be obtained from a digital elevation model (DEM) for the area of interest.
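
    The binary and fractional mapping techniques described above can be sketched with the standard NDSI formulation. This is an illustrative outline, not the toolkit's code: the 0.4 cutoff is the classic MODIS binary threshold, the linear fraction model follows the published Salomonson-Appel form, and the toolkit's actual thresholds and statistics are not given here.

    ```python
    import numpy as np

    def ndsi(green, swir):
        """Normalized Difference Snow Index from green and SWIR reflectance."""
        return (green - swir) / np.clip(green + swir, 1e-6, None)

    def binary_snow_map(green, swir, threshold=0.4):
        """Binary technique: a pixel is snow if NDSI exceeds a threshold."""
        return ndsi(green, swir) > threshold

    def fractional_snow_map(green, swir):
        """Fractional technique: estimate the per-pixel snow fraction
        via a linear NDSI regression (coefficients assumed here)."""
        frac = -0.01 + 1.45 * ndsi(green, swir)
        return np.clip(frac, 0.0, 1.0)

    green = np.random.rand(4, 4)          # synthetic reflectance fields
    swir = np.random.rand(4, 4) * 0.5
    print(binary_snow_map(green, swir))
    print(fractional_snow_map(green, swir))
    ```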

  1. Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex

    PubMed Central

    Vaden, Ryan J.; Visscher, Kristina M.

    2015-01-01

    Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806

  2. New Algorithm and Software (BNOmics) for Inferring and Visualizing Bayesian Networks from Heterogeneous Big Biological and Genetic Data

    PubMed Central

    Gogoshin, Grigoriy; Boerwinkle, Eric

    2017-01-01

    Abstract Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software's scalability and usability on less-than-exotic computer hardware are a priority, as is the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc. PMID:27681505

  3. New Algorithm and Software (BNOmics) for Inferring and Visualizing Bayesian Networks from Heterogeneous Big Biological and Genetic Data.

    PubMed

    Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S

    2017-04-01

    Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software's scalability and usability on less-than-exotic computer hardware are a priority, as is the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
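
    BNOmics' search algorithms are considerably more sophisticated, but the score-based structure search they improve upon can be sketched briefly. A minimal BIC hill-climber over discrete data; all function names are invented for illustration:

    ```python
    import numpy as np
    from itertools import product

    def bic_family_score(data, child, parents, arity):
        """BIC contribution of one node given a candidate parent set
        (data: integer-coded discrete observations, rows = samples)."""
        n = data.shape[0]
        loglik = 0.0
        parent_states = list(product(*[range(arity[p]) for p in parents])) or [()]
        for state in parent_states:
            mask = np.ones(n, dtype=bool)
            for p, s in zip(parents, state):
                mask &= data[:, p] == s
            counts = np.bincount(data[mask, child], minlength=arity[child])
            total = counts.sum()
            if total > 0:
                nz = counts[counts > 0]
                loglik += float((nz * np.log(nz / total)).sum())
        n_params = (arity[child] - 1) * len(parent_states)
        return loglik - 0.5 * np.log(n) * n_params

    def creates_cycle(parents, u, v):
        """Would adding edge u -> v close a directed cycle?"""
        stack, seen = [v], {v}
        while stack:
            node = stack.pop()
            if node == u:
                return True
            for child, ps in parents.items():
                if node in ps and child not in seen:
                    seen.add(child)
                    stack.append(child)
        return False

    def hill_climb(data, arity, max_parents=2):
        """Greedily add the single best edge until no addition improves BIC."""
        d = data.shape[1]
        parents = {v: [] for v in range(d)}
        scores = {v: bic_family_score(data, v, [], arity) for v in range(d)}
        while True:
            best_gain, best_edge = 0.0, None
            for u in range(d):
                for v in range(d):
                    if (u == v or u in parents[v]
                            or len(parents[v]) >= max_parents
                            or creates_cycle(parents, u, v)):
                        continue
                    gain = bic_family_score(data, v, parents[v] + [u], arity) - scores[v]
                    if gain > best_gain:
                        best_gain, best_edge = gain, (u, v)
            if best_edge is None:
                return parents
            u, v = best_edge
            parents[v].append(u)
            scores[v] += best_gain

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 500)
    y = np.where(rng.random(500) < 0.9, x, rng.integers(0, 2, 500))
    data = np.column_stack([x, y, rng.integers(0, 2, 500)])
    print(hill_climb(data, arity=[2, 2, 2]))   # should recover the x-y link
    ```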

  4. A Computational Systems Biology Software Platform for Multiscale Modeling and Simulation: Integrating Whole-Body Physiology, Disease Biology, and Molecular Reaction Networks

    PubMed Central

    Eissing, Thomas; Kuepfer, Lars; Becker, Corina; Block, Michael; Coboeken, Katrin; Gaub, Thomas; Goerlitz, Linus; Jaeger, Juergen; Loosen, Roland; Ludewig, Bernd; Meyer, Michaela; Niederalt, Christoph; Sevestre, Michael; Siegmund, Hans-Ulrich; Solodenko, Juri; Thelen, Kirstin; Telle, Ulrich; Weiss, Wolfgang; Wendl, Thomas; Willmann, Stefan; Lippert, Joerg

    2011-01-01

    Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi®, capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. Thereby, the platform allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug–drug, or drug–metabolite interactions can be addressed using this mechanistic, insight-driven multiscale modeling approach. PMID:21483730
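
    The pharmacokinetic/pharmacodynamic coupling described above (metabolite concentration inhibiting proliferation through the signaling cascade) can be caricatured in a few lines. A toy Euler integration with made-up parameter values, not the platform's calibrated physiology:

    ```python
    # Illustrative parameters (assumptions, not the paper's values)
    K_GROW, K_DEATH, IC50 = 0.03, 0.01, 0.5   # 1/h, 1/h, µmol/L
    K_ELIM, DOSE_CONC = 0.10, 2.0             # 1/h, µmol/L added per dose

    def simulate(t_end=480.0, dt=0.1, dose_interval=24.0):
        """Euler integration of a two-state tumor/drug toy model:
        dN/dt = k_grow * (1 - C/(C+IC50)) * N - k_death * N
        dC/dt = -k_elim * C, with C bumped up at each dosing time."""
        n, c = 1.0, 0.0            # relative tumor size, metabolite conc.
        history = []
        for i in range(int(t_end / dt)):
            t = i * dt
            if t % dose_interval < dt / 2:
                c += DOSE_CONC     # instantaneous dosing (a simplification)
            inhibition = c / (c + IC50)   # saturable Raf-kinase-like block
            n += dt * (K_GROW * (1.0 - inhibition) * n - K_DEATH * n)
            c += dt * (-K_ELIM * c)
            history.append((t, n, c))
        return history

    final_t, final_n, final_c = simulate()[-1]
    print(f"relative tumor size after {final_t:.0f} h: {final_n:.2f}")
    ```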

  5. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary widely across platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends its location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.
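
    The core loop (telemetry in, updated pose out) reduces to forward kinematics over the articulated model. A deliberately simplified planar sketch; the packet layout and link dimensions are invented, and the real system animates full 3D models from avionics telemetry:

    ```python
    import math

    # Hypothetical 3-joint planar arm; link lengths in metres.
    LINK_LENGTHS = [0.6, 0.4, 0.2]

    def forward_kinematics(joint_angles):
        """Return the 2D position of each joint for telemetered angles."""
        x = y = theta = 0.0
        points = [(x, y)]
        for length, angle in zip(LINK_LENGTHS, joint_angles):
            theta += angle                 # angles accumulate along the chain
            x += length * math.cos(theta)
            y += length * math.sin(theta)
            points.append((x, y))
        return points

    # Each telemetry packet updates the pose; the renderer redraws from it.
    telemetry_packet = {"joints": [0.3, -0.5, 0.8]}
    for p in forward_kinematics(telemetry_packet["joints"]):
        print(f"({p[0]:.3f}, {p[1]:.3f})")
    ```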

  6. iContraception(®): a software tool to assist professionals in choosing contraceptive methods according to WHO medical eligibility criteria.

    PubMed

    Lopez, Ramón Guisado; Polo, Isabel Ramirez; Berral, Jose Eduardo Arjona; Fernandez, Julia Guisado; Castelo-Branco, Camil

    2015-04-01

    To design software to assist health care providers with contraceptive counselling. The Model-View-Controller software architecture pattern was used. Decision logic was incorporated to automatically compute the safety category of each contraceptive option. Decisions are made according to the specific characteristics or known medical conditions of each potential contraception user. The software is an app designed for the iOS and Android platforms and is available in four languages. iContraception(®) facilitates presentation of visual data on medical eligibility criteria for contraceptive treatments. The use of this software was evaluated by a sample of 54 health care providers. The general satisfaction with the use of the app was over 8 on a 0-10 visual analogue scale in 96.3% of cases. iContraception provides easy access to medical eligibility criteria of contraceptive options and may help with contraceptive counselling. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
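
    The decision logic described above amounts to a lookup over the WHO medical eligibility criteria (MEC) categories 1-4, combined across a user's conditions. A minimal sketch; the condition/method pairs below are placeholders, not the app's clinical content, and the worst-case combination rule is an assumption about how such logic is typically composed:

    ```python
    # WHO MEC categories: 1 = no restriction ... 4 = unacceptable health risk.
    MEC = {
        ("migraine_with_aura", "combined_oral_contraceptive"): 4,
        ("migraine_with_aura", "copper_iud"): 1,
        ("smoker_over_35", "combined_oral_contraceptive"): 3,
        ("smoker_over_35", "progestogen_only_pill"): 1,
    }

    def safety_category(conditions, method):
        """Take the worst (highest) category across all reported
        conditions; unknown pairs default to category 1."""
        return max((MEC.get((c, method), 1) for c in conditions), default=1)

    user_conditions = ["migraine_with_aura", "smoker_over_35"]
    for m in ["combined_oral_contraceptive", "copper_iud", "progestogen_only_pill"]:
        print(m, "->", safety_category(user_conditions, m))
    ```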

  7. Federating Cyber and Physical Models for Event-Driven Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephan, Eric G.; Pawlowski, Ronald A.; Sridhar, Siddharth

    The purpose of this paper is to describe a novel method for improving the interoperability of electric power system monitoring and control software applications. The method employs the concept of federation: the use of existing models that represent aspects of a system in specific domains (such as the physical and cyber security domains) and the building of interfaces that link the domain models together.

  8. Information-computational platform for collaborative multidisciplinary investigations of regional climatic changes and their impacts

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Lykosov, Vasily; Krupchatnikov, Vladimir; Okladnikov, Igor; Titov, Alexander; Shulgina, Tamara

    2013-04-01

    Analysis of the growing volume of climate-change-related data from sensors and model outputs requires collaborative multidisciplinary efforts by researchers. To do this in a timely and reliable way, one needs a modern information-computational infrastructure supporting integrated studies in the field of environmental sciences. The recently developed experimental software and hardware platform Climate (http://climate.scert.ru/) provides the required environment for regional climate change investigations. The platform combines a modern web 2.0 approach, GIS functionality and capabilities to run climate and meteorological models, process large geophysical datasets and support the relevant analysis. It also supports joint software development by distributed research groups and the organization of thematic education for students and post-graduate students. In particular, the platform software includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Runs of the WRF and «Planet Simulator» models integrated into the platform, preprocessing of modeling results and visualization are also provided. All functions of the platform are accessible through a web portal using a common graphical web browser, in the form of an interactive graphical user interface that provides, in particular, selection of the geographical region of interest (pan and zoom), data-layer manipulation (order, enable/disable, feature extraction) and visualization of results. The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering trends in climatic and ecosystem changes within different multidisciplinary research efforts. Even an unskilled user without specific knowledge can perform reliable computational processing and visualization of large meteorological, climatic and satellite monitoring datasets through the unified graphical web interface. Partial support of RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2 and Projects 69, 131, 140 and the APN CBA2012-16NSY project is acknowledged.

  9. Assessing Visual-Spatial Creativity in Youth on the Autism Spectrum

    ERIC Educational Resources Information Center

    Diener, Marissa L.; Wright, Cheryl A.; Smith, Katherine N.; Wright, Scott D.

    2014-01-01

    The goal of this study was to develop a measure of creativity that builds on the strengths of youth with autism spectrum disorders (ASD). The assessment of creativity focused on the visual-spatial abilities of these youth using 3D modeling software. One of the objectives of the research was to develop a measure of creativity in an authentic…

  10. Stimulus- and goal-driven control of eye movements: action videogame players are faster but not better.

    PubMed

    Heimler, Benedetta; Pavani, Francesco; Donk, Mieke; van Zoest, Wieske

    2014-11-01

    Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.

  11. Scientists' sense making when hypothesizing about disease mechanisms from expression data and their needs for visualization support.

    PubMed

    Mirel, Barbara; Görg, Carsten

    2014-04-26

    A common class of biomedical analysis is to explore expression data from high throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression-driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sense making) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sense making models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sense making involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From findings, we abstracted a preliminary sense making model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sense making model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists' analytical workflows and their implications for tool design.

  12. Scientists’ sense making when hypothesizing about disease mechanisms from expression data and their needs for visualization support

    PubMed Central

    2014-01-01

    A common class of biomedical analysis is to explore expression data from high throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression-driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sense making) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sense making models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sense making involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From findings, we abstracted a preliminary sense making model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sense making model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists’ analytical workflows and their implications for tool design. PMID:24766796

  13. Extraordinary Oscillations of an Ordinary Forced Pendulum

    ERIC Educational Resources Information Center

    Butikov, Eugene I.

    2008-01-01

    Several well-known and newly discovered counterintuitive regular and chaotic modes of the sinusoidally driven rigid planar pendulum are discussed and illustrated by computer simulations. The software supporting the investigation offers many interesting predefined examples that demonstrate various peculiarities of this famous physical model.…
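
    A standard form of this model, assuming viscous damping and sinusoidal torque forcing (the article's exact parameterization may differ), is

    ```latex
    \ddot{\theta} + 2\gamma\,\dot{\theta} + \omega_0^{2}\sin\theta
        = \phi_0\,\omega_0^{2}\cos\omega t
    ```

    where \theta is the deflection angle, \gamma the damping constant, \omega_0 the natural frequency of small oscillations, and \phi_0 and \omega the drive amplitude and frequency. The counterintuitive regimes appear once the motion leaves the small-angle limit, where \sin\theta can no longer be linearized.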

  14. COMPUTER SIMULATIONS OF LUNG AIRWAY STRUCTURES USING DATA-DRIVEN SURFACE MODELING TECHNIQUES

    EPA Science Inventory

    ABSTRACT

    Knowledge of human lung morphology is a subject critical to many areas of medicine. The visualization of lung structures naturally lends itself to computer graphics modeling due to the large number of airways involved and the complexities of the branching systems...

  15. Scenario driven data modelling: a method for integrating diverse sources of data and data streams

    PubMed Central

    2011-01-01

    Background Biology is rapidly becoming a data-intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allows both automated integration and data driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. Approaches like SDDM will be critical to the future of data-intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
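
    The RDF multi-relational graph at the center of SDDM is easy to sketch with a standard library. An illustrative fragment using Python's rdflib; the namespace, terms and query are invented, since the actual NDM-1 graph used the project's own vocabularies and RDFizers:

    ```python
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/sddm/")   # hypothetical vocabulary

    g = Graph()
    g.bind("ex", EX)

    # Link a resistance gene to an organism and to a news report: two very
    # different source types joined in one multi-relational directed graph.
    g.add((EX["NDM-1"], RDF.type, EX.ResistanceGene))
    g.add((EX["NDM-1"], EX.foundIn, EX["Klebsiella_pneumoniae"]))
    g.add((EX["news-2010-08-11"], RDF.type, EX.MediaReport))
    g.add((EX["news-2010-08-11"], EX.mentions, EX["NDM-1"]))
    g.add((EX["news-2010-08-11"], EX.headline,
           Literal("New resistance gene reported in travellers")))

    # A graph traversal meeting a hypothetical end-user requirement:
    # "which reports mention genes found in K. pneumoniae?"
    q = """
    SELECT ?report WHERE {
      ?gene ex:foundIn ex:Klebsiella_pneumoniae .
      ?report ex:mentions ?gene .
    }
    """
    for row in g.query(q, initNs={"ex": EX}):
        print(row.report)
    ```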

  16. An Automated Method to Identify Mesoscale Convective Complexes in the Regional Climate Model Evaluation System

    NASA Astrophysics Data System (ADS)

    Whitehall, K. D.; Jenkins, G. S.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Goodale, C. E.; Hart, A. F.; Ramirez, P.; Whittell, J.; Zimdars, P. A.

    2012-12-01

    Mesoscale convective complexes (MCCs) are large (2-3 × 10^5 km^2) nocturnal convectively-driven weather systems that are generally associated with high-precipitation events of short duration (less than 12 hrs) in various locations throughout the tropics and midlatitudes (Maddox 1980). These systems are particularly important for climate in the West Sahel region, where the precipitation associated with them is a principal component of the rainfall season (Laing and Fritsch 1993). These systems occur on weather timescales and are historically identified from weather data analysis via manual and, more recently, automated processes (Miller and Fritsch 1991, Nesbett 2006, Balmey and Reason 2012). The Regional Climate Model Evaluation System (RCMES) is an open-source tool designed for easy evaluation of climate and Earth system data through access to standardized datasets and intrinsic tools that perform common analysis and visualization tasks (Hart et al. 2011). The RCMES toolkit also provides the flexibility of user-defined subroutines for further metrics, visualization and even dataset manipulation. The purpose of this study is to present a methodology for identifying MCCs in observation datasets using the RCMES framework. TRMM 3-hourly datasets will be used to demonstrate the methodology for the 2005 boreal summer. This method promotes the use of open-source software for scientific data systems, addressing a concern shared by multiple stakeholders in the earth sciences. A historical MCC dataset provides a platform for further studies of the variability of MCC frequency on various timescales, which is important to many users, including climate scientists, meteorologists, water resource managers, and agriculturalists. The methodology of using RCMES for searching and clipping datasets will open up a new realm of studies: users of the system will no longer be restricted to using datasets as they reside in their own local systems; instead, they will be afforded rapid, effective, and transparent access to, and processing and visualization of, the wealth of remote sensing datasets and climate model outputs available.
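
    Per time slice, automated identification of this kind reduces to labeling contiguous cold-cloud regions and testing size and shape criteria. A rough sketch with thresholds adapted from Maddox (1980); treat every number as an assumption, note that the study itself works from 3-hourly TRMM fields, and note that the duration criterion (tracking systems across successive slices) is omitted here:

    ```python
    import numpy as np
    from scipy import ndimage

    COLD_CLOUD_K = 241.0      # cloud-shield brightness temperature (~ -32 degC)
    MIN_AREA_KM2 = 1.0e5
    MIN_AXIS_RATIO = 0.7      # minor/major axis ratio of the shield

    def mcc_candidates(tb, pixel_area_km2):
        """Label contiguous cold shields, then apply size/shape criteria."""
        labels, n = ndimage.label(tb <= COLD_CLOUD_K)
        candidates = []
        for lbl in range(1, n + 1):
            ys, xs = np.nonzero(labels == lbl)
            area = ys.size * pixel_area_km2
            if area < MIN_AREA_KM2:
                continue
            # crude axis-ratio proxy from the bounding extent of the shield
            dy = ys.max() - ys.min() + 1
            dx = xs.max() - xs.min() + 1
            if min(dy, dx) / max(dy, dx) >= MIN_AXIS_RATIO:
                candidates.append((lbl, area))
        return candidates

    tb = 230.0 + 30.0 * np.random.rand(200, 200)   # synthetic temperature field
    print(mcc_candidates(tb, pixel_area_km2=16.0))
    ```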

  17. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non-stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
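
    A conventional conditional-intensity form for a GLM of this kind (notation assumed here, not taken from the paper) is

    ```latex
    \lambda(t) = \exp\!\Big( b + \mathbf{k}\cdot\mathbf{x}(t)
        + \sum_{j>0} h_j\, y(t-j) \Big)
    ```

    where \mathbf{x}(t) is the recent stimulus, \mathbf{k} the stimulus filter, h_j the spike-history filter weighting the cell's own past spiking y, and b a baseline; spikes are then drawn from a point process with rate \lambda(t). The spike-history term is what lets such a model reproduce timing precision finer than the stimulus timescale.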

  18. MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.

    PubMed

    He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper

    2018-07-26

    Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, it is widely held that one of the biggest challenges is the lack of easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems, Solution, Visualization and Intelligence, is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis. Simplified data preprocessing and high-throughput MSI data exchange and serialization jointly guarantee quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple-ion visualization, multi-channel superposition, image normalization, visual resolution enhancement and image filtering. Regions-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, aided by an overlaid optical image guide, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis of cancer. All the features are integrated in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
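
    The basic ion-image reconstruction underlying this kind of interactive visualization can be sketched in a few lines. The names, the dense-cube layout and the normalization choice are illustrative only, not MassImager's internal design:

    ```python
    import numpy as np

    def ion_image(mz_axis, cube, target_mz, tol=0.01):
        """Reconstruct a single-ion image from an MSI data cube.

        cube: intensities with shape (rows, cols, n_mz_bins), a simplified
        stand-in for a real (often sparse, per-pixel) MSI dataset."""
        window = np.abs(mz_axis - target_mz) <= tol
        img = cube[:, :, window].sum(axis=2)
        # TIC normalization: divide each pixel by its total ion current.
        tic = cube.sum(axis=2)
        return img / np.clip(tic, 1e-12, None)

    mz_axis = np.linspace(100.0, 1000.0, 5000)
    cube = np.random.rand(32, 32, mz_axis.size)    # synthetic dataset
    img = ion_image(mz_axis, cube, target_mz=496.34, tol=0.05)
    print(img.shape, img.max())
    ```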

  19. Selective visual processing across competition episodes: a theory of task-driven visual attention and working memory

    PubMed Central

    Schneider, Werner X.

    2013-01-01

    The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Based on a specific biased competition model, the ‘theory of visual attention’ (TVA) and its neural interpretation (NTVA), TRAM introduces the following assumption. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded—modulated by the current task—in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via a short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g. after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g. short-term consolidation) is not finished, whereas a new episode is called, a protective maintenance process allows its completion. After a VWM object change, its protective maintenance process is followed by an encapsulation of the VWM object causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings of the attentional blink will be offered. Finally, a new suggestion will be made as to how VWM items might interact with visual search processes. PMID:24018722

  20. diffHic: a Bioconductor package to detect differential genomic interactions in Hi-C data.

    PubMed

    Lun, Aaron T L; Smyth, Gordon K

    2015-08-19

    Chromatin conformation capture with high-throughput sequencing (Hi-C) is a technique that measures the in vivo intensity of interactions between all pairs of loci in the genome. Most conventional analyses of Hi-C data focus on the detection of statistically significant interactions. However, an alternative strategy involves identifying significant changes in the interaction intensity (i.e., differential interactions) between two or more biological conditions. This is more statistically rigorous and may provide more biologically relevant results. Here, we present the diffHic software package for the detection of differential interactions from Hi-C data. diffHic provides methods for read pair alignment and processing, counting into bin pairs, filtering out low-abundance events and normalization of trended or CNV-driven biases. It uses the statistical framework of the edgeR package to model biological variability and to test for significant differences between conditions. Several options for the visualization of results are also included. The use of diffHic is demonstrated with real Hi-C data sets. Performance against existing methods is also evaluated with simulated data. On real data, diffHic is able to successfully detect interactions with significant differences in intensity between biological conditions. It also compares favourably to existing software tools on simulated data sets. These results suggest that diffHic is a viable approach for differential analyses of Hi-C data.
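
    diffHic itself is an R/Bioconductor package; purely to illustrate the "counting into bin pairs" step in a language-neutral way, here is a toy single-chromosome version (all names invented):

    ```python
    import numpy as np

    def count_bin_pairs(read_pairs, chrom_len, bin_size):
        """Count Hi-C read pairs into a symmetric bin-pair matrix.

        read_pairs: iterable of (pos1, pos2) on one chromosome; diffHic
        performs this genome-wide before its edgeR-based testing."""
        n_bins = int(np.ceil(chrom_len / bin_size))
        counts = np.zeros((n_bins, n_bins), dtype=np.int64)
        for p1, p2 in read_pairs:
            i, j = sorted((p1 // bin_size, p2 // bin_size))
            counts[i, j] += 1          # store the upper triangle only
        return counts

    pairs = [(1_200, 980_000), (3_500, 975_000), (500_000, 510_000)]
    print(count_bin_pairs(pairs, chrom_len=1_000_000, bin_size=100_000))
    ```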

  1. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software can simultaneously display LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open-source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform providing data analysis, hybrid visualization and complex interactions at the same time. The software is available on demand for free at france@exelisvis.com.

  2. The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.

    2005-12-01

    The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Institutions for Research in Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships, and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This provides a robust and secure means of maintaining the association between the data sets and their metadata. To provide an easy-to-use system for constructing SHA computations, a browser-based workflow assembly web portal has been developed. Users can compose complex SHA calculations, specifying SCEC/CME data sets as inputs and calling SCEC/CME computational programs to process the data and the output. Knowledge-based software tools have been implemented that utilize ontological descriptions of SHA software and data to validate workflows created with this pathway assembly tool. Data visualization software developed by the collaboration supports analysis and validation of data sets. Several programs have been developed to visualize SCEC/CME data, including GMT-based map making software for PSHA codes, 4D wavefield propagation visualization software based on OpenGL, and 3D Geowall-based visualization of earthquakes, faults, and seismic wave propagation. The SCEC/CME Project also helps to sponsor the SCEC UseIT Intern program. The UseIT Intern Program provides research opportunities in both Geosciences and Information Technology to undergraduate students in a variety of fields. The UseIT group has developed a 3D data visualization tool, called SCEC-VDO, as a part of this undergraduate research program.

  3. Neurovascular Network Explorer 1.0: a database of 2-photon single-vessel diameter measurements with MATLAB(®) graphical user interface.

    PubMed

    Sridhar, Vishnu B; Tian, Peifang; Dale, Anders M; Devor, Anna; Saisan, Payam A

    2014-01-01

    We present a database client software package, Neurovascular Network Explorer 1.0 (NNE 1.0), that uses a MATLAB(®)-based Graphical User Interface (GUI) for interaction with a database of 2-photon single-vessel diameter measurements from our previous publication (Tian et al., 2010). These data are of particular interest for modeling the hemodynamic response. NNE 1.0 is downloaded by the user and then runs either as a MATLAB script or as a standalone program on a Windows platform. The GUI allows browsing the database according to parameters specified by the user, simple manipulation and visualization of the retrieved records (such as averaging and peak-normalization), and export of the results. Further, we provide the NNE 1.0 source code. With this source code, users can build a database of their own experimental results, given the appropriate data structure and naming conventions, and thus share their data in a user-friendly format with other investigators. NNE 1.0 provides an example of a seamless and low-cost solution for the sharing of experimental data by a regular-sized neuroscience laboratory and may serve as a general template, facilitating dissemination of biological results and accelerating data-driven modeling approaches.

  4. Predictive assimilation framework to support contaminated site understanding and remediation

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Bianchi, M.; Hubbard, S. S.

    2014-12-01

    Subsurface system behavior at contaminated sites is driven and controlled by the interplay of physical, chemical, and biological processes occurring at multiple temporal and spatial scales. Effective remediation and monitoring planning requires an understanding of this complexity that is current, predictive (with some level of confidence) and actionable. We present and demonstrate a predictive assimilation framework (PAF). This framework automatically ingests, quality-controls and stores near-real-time environmental data and processes these data using different inversion and modeling codes to provide information on the current state and evolution of the subsurface system. PAF is implemented as a cloud-based software application that has five components: (1) data acquisition, (2) data management, (3) data assimilation and processing, (4) visualization and result delivery, and (5) orchestration. Access to and interaction with PAF occurs through a standard browser. PAF is designed to be modular so that it can ingest and process different data streams depending on the site. We will present an implementation of PAF which uses data from a highly instrumented site (the DOE Rifle Subsurface Biogeochemistry Field Observatory in Rifle, Colorado) for which PAF automatically ingests hydrological data and forward models groundwater flow in the saturated zone.

  5. Large Field Visualization with Demand-Driven Calculation

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Henze, Chris

    1999-01-01

    We present a system designed for the interactive definition and visualization of fields derived from large data sets: the Demand-Driven Visualizer (DDV). The system allows the user to write arbitrary expressions to define new fields, and then apply a variety of visualization techniques to the result. Expressions can include differential operators and numerous other built-in functions, all of which are evaluated at specific field locations completely on demand. The payoff of following a demand-driven design philosophy throughout becomes particularly evident when working with large time-series data, where the costs of the eager-evaluation alternatives can be prohibitive.
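
    The demand-driven design is the interesting part: field expressions are composed as a tree, and nothing is evaluated until specific sample locations are requested. A minimal sketch of that idea (the classes and sample fields are invented; DDV's own expression language is far richer):

    ```python
    import numpy as np

    class Field:
        """A node in a lazy expression tree; nothing is computed until
        evaluate() is called for specific sample points (on demand)."""
        def __init__(self, fn):
            self.fn = fn
        def evaluate(self, pts):
            return self.fn(pts)
        def __add__(self, other):
            return Field(lambda pts: self.evaluate(pts) + other.evaluate(pts))
        def __mul__(self, other):
            return Field(lambda pts: self.evaluate(pts) * other.evaluate(pts))

    def gradient_magnitude(field, eps=1e-4):
        """A derived field built from central differences, itself lazy."""
        def fn(pts):
            g = np.zeros(len(pts))
            for axis in range(pts.shape[1]):
                d = np.zeros(pts.shape[1]); d[axis] = eps
                g += ((field.evaluate(pts + d) - field.evaluate(pts - d))
                      / (2 * eps)) ** 2
            return np.sqrt(g)
        return Field(fn)

    # Base fields: hypothetical pressure and temperature samplers.
    pressure = Field(lambda pts: np.exp(-np.sum(pts**2, axis=1)))
    temperature = Field(lambda pts: pts[:, 0] + 2.0)

    # A user-defined derived field, evaluated only at the probed locations.
    derived = gradient_magnitude(pressure) + temperature
    probe = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
    print(derived.evaluate(probe))
    ```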

  6. Model-Driven Architecture for Agent-Based Systems

    NASA Technical Reports Server (NTRS)

    Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.

    2004-01-01

    The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The software artifact generation is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
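
    The fan-out from one model element to several artifacts can be sketched as below. The specification format and generated skeletons are hypothetical; they only illustrate the metamodel-to-artifact mapping the abstract describes.

        # One component specification produces code, documentation, and a test
        # stub, mirroring the Cougaar MDA's model-to-artifact generation.
        def generate_artifacts(component):
            name = component["name"]
            code = f"public class {name}Plugin {{ /* generated skeleton */ }}"
            docs = f"# {name}\n\nResponsibilities: {component['doc']}"
            test = f"public class {name}PluginTest {{ /* generated test stub */ }}"
            return {"code": code, "docs": docs, "test": test}

        spec = {"name": "Logistics", "doc": "plans supply movements"}  # invented example
        for kind, text in generate_artifacts(spec).items():
            print(f"--- {kind} ---\n{text}")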

  7. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow-visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow-visualization program developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extraction of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.
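
    Time-dependent particle tracking, the core operation behind such tools, amounts to integrating dx/dt = v(x, t) through a velocity field that changes between time steps. The sketch below uses an analytic stand-in field and midpoint integration; a production tool like UFAT instead interpolates velocities from multi-block grid data.

        import math

        def velocity(x, y, t):
            # Hypothetical unsteady 2D field (a slowly drifting vortex).
            return (-(y - 0.1 * t), x)

        def advect(p, t0, t1, steps=100):
            x, y = p
            dt = (t1 - t0) / steps
            t = t0
            for _ in range(steps):
                # Midpoint (RK2) integration of dx/dt = v(x, t).
                u1, v1 = velocity(x, y, t)
                u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1, t + 0.5 * dt)
                x, y, t = x + dt * u2, y + dt * v2, t + dt
            return (x, y)

        print(advect((1.0, 0.0), 0.0, 2.0 * math.pi))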

  8. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.

  9. The EpiCanvas infectious disease weather map: an interactive visual exploration of temporal and spatial correlations

    PubMed Central

    Livnat, Yarden; Galli, Nathan; Samore, Matthew H; Gundlapalli, Adi V

    2012-01-01

    Advances in surveillance science have supported public health agencies in tracking and responding to disease outbreaks. Increasingly, epidemiologists have been tasked with interpreting multiple streams of heterogeneous data arising from varied surveillance systems. As a result, public health personnel have experienced an overload of plots and charts, as information visualization techniques have not kept pace with the rapid expansion in data availability. This study sought to advance the science of public health surveillance data visualization by conceptualizing a visual paradigm that provides an ‘epidemiological canvas’ for detection, monitoring, exploration and discovery of regional infectious disease activity, and by developing a software prototype of an ‘infectious disease weather map’. Design objectives were elucidated and the conceptual model was developed using cognitive task analysis with public health epidemiologists. The software prototype was pilot tested using retrospective data from a large, regional pediatric hospital, and gastrointestinal and respiratory disease outbreaks were re-created as a proof of concept. PMID:22358039

  10. Assessing morphology and function of the semicircular duct system: introducing new in-situ visualization and software toolbox

    PubMed Central

    David, R.; Stoessel, A.; Berthoz, A.; Spoor, F.; Bennequin, D.

    2016-01-01

    The semicircular duct system is part of the sensory organ of balance and essential for navigation and spatial awareness in vertebrates. Its function in detecting head rotations has been modelled with increasing sophistication, but the biomechanics of actual semicircular duct systems has rarely been analyzed, foremost because the fragile membranous structures in the inner ear are hard to visualize undistorted and in full. Here we present a new, easy-to-apply and non-invasive method for three-dimensional in-situ visualization and quantification of the semicircular duct system, using X-ray micro tomography and tissue staining with phosphotungstic acid. Moreover, we introduce Ariadne, a software toolbox which provides comprehensive and improved morphological and functional analysis of any visualized duct system. We demonstrate the potential of these methods by presenting results for the duct system of humans, the squirrel monkey and the rhesus macaque, making comparisons with past results from neurophysiological, oculometric and biomechanical studies. Ariadne is freely available at http://www.earbank.org. PMID:27604473

  11. Scientific Visualization Using the Flow Analysis Software Toolkit (FAST)

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon V.; Kelaita, Paul G.; Mccabe, R. Kevin; Merritt, Fergus J.; Plessel, Todd C.; Sandstrom, Timothy A.; West, John T.

    1993-01-01

    Over the past few years the Flow Analysis Software Toolkit (FAST) has matured into a useful tool for visualizing and analyzing scientific data on high-performance graphics workstations. Originally designed for visualizing the results of fluid dynamics research, FAST has demonstrated its flexibility by being used in several other areas of scientific research. These research areas include earth and space sciences, acid rain and ozone modelling, and automotive design, just to name a few. This paper describes the current status of FAST, including the basic concepts, architecture, existing functionality and features, and some of the known applications for which FAST is being used. A few of the applications, by both NASA and non-NASA agencies, are outlined in more detail. Described in these outlines are the goals of each visualization project, the techniques or 'tricks' used to produce the desired results, and custom modifications to FAST, if any, done to further enhance the analysis. Some of the future directions for FAST are also described.

  12. Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3

    NASA Astrophysics Data System (ADS)

    Endsley, K. A.; Billmire, M. G.

    2016-01-01

    Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new web-based software tool, the Carbon Data Explorer, that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science Level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.

  13. Object-oriented software design in semiautomatic building extraction

    NASA Astrophysics Data System (ADS)

    Guelch, Eberhard; Mueller, Hardo

    1997-08-01

    Developing a system for semiautomatic building acquisition is a complex process that requires constant integration and updating of software modules and user interfaces. To facilitate these processes we apply an object-oriented design not only for the data but also for the software involved. We use the Unified Modeling Language (UML) to describe the object-oriented modeling of the system at different levels of detail. We can distinguish between use cases from the user's point of view, which represent a sequence of actions yielding an observable result, and use cases for the programmers, who can use the system as a class library to integrate the acquisition modules into their own software. The structure of the system is based on the model-view-controller (MVC) design pattern. An example from the integration of automated texture extraction for the visualization of results demonstrates the feasibility of this approach.

  14. MOST-visualization: software for producing automated textbook-style maps of genome-scale metabolic networks.

    PubMed

    Kelley, James J; Maor, Shay; Kim, Min Kyung; Lane, Anatoliy; Lun, Desmond S

    2017-08-15

    Visualization of metabolites, reactions and pathways in genome-scale metabolic networks (GEMs) can assist in understanding cellular metabolism. Three attributes are desirable in software used for visualizing GEMs: (i) automation, since GEMs can be quite large; (ii) production of understandable maps that provide ease in identification of pathways, reactions and metabolites; and (iii) visualization of the entire network to show how pathways are interconnected. No software currently exists for visualizing GEMs that satisfies all three characteristics, but MOST-Visualization, an extension of the software package MOST (Metabolic Optimization and Simulation Tool), satisfies (i), and by using a pre-drawn overview map of metabolism based on the Roche map satisfies (ii) and comes close to satisfying (iii). MOST is distributed for free under the GNU General Public License. The software and full documentation are available at http://most.ccib.rutgers.edu/.

  15. Technical Data Exchange Software Tools Adapted to Distributed Microsatellite Design

    NASA Astrophysics Data System (ADS)

    Pache, Charly

    2002-01-01

    One critical issue concerning distributed design of satellites is the collaborative work it requires. In particular, the exchange of data between the groups responsible for each subsystem can be complex and very time-consuming. The goal of this paper is to present a collaborative design tool, the SSETI Design Model (SDM), specifically developed for enabling distributed satellite design. SDM is actually used in the ongoing Student Space Exploration & Technology Initiative (SSETI) (www.sseti.net). SSETI is led by the European Space Agency (ESA) outreach office (http://www.estec.esa.nl/outreach), involving student groups from all over Europe in the design, construction and launch of a microsatellite. The first part of this paper presents the current version of the SDM tool, a collection of linked Microsoft Excel worksheets, one for each subsystem. An overview of the project framework/structure is given, explaining the different actors, the flows between them, the different types of data, and the links (formulas) between data sets. Unified Modeling Language (UML) diagrams give an overview of the different parts. Then SDM's functionalities, developed in VBA scripts (Visual Basic for Applications), are introduced, as well as the interactive features, user interfaces and administration tools. The second part discusses the capabilities and limitations of the current SDM version. Taking into account these capabilities and limitations, the third part outlines the next version of SDM, a web-oriented, database-driven evolution of the current version. This new approach will enable real-time data exchange and processing between the different actors of the mission. Comprehensive UML diagrams will guide the audience through the entire modeling process of such a system. Trade-off simulation capabilities, security, reliability, and hardware and software issues will also be thoroughly discussed.

  16. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    PubMed

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.

  17. A Software Developer’s Guide to Informal Evaluation of Visual Analytics Environments Using VAST Challenge Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.

    The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.

  18. Visualization of instationary flows by particle traces

    NASA Astrophysics Data System (ADS)

    Raasch, S.

    This abstract describes a study in which atmospheric flow model output is presented as computer movies. The structure and evolution of the flow is visualized by starting weightless particles at the locations of the model grid points at distinct, equally spaced times. These particles are then advected passively by the flow. To avoid useless accumulation of particles, they can be given a limited lifetime. Scalar quantities can be shown in addition, using color-shaded contours as background information. A movie with several examples of atmospheric flows, such as convection in the atmospheric boundary layer, slope winds, land-sea breeze, and Kelvin-Helmholtz waves, is presented. The simulations are performed by two-dimensional and three-dimensional nonhydrostatic finite-difference models. Graphics are produced using the UNIRAS software, and the graphic output is in the form of CGM metafiles. The single frames are stored on an ABEKAS real-time video disc and then transferred to a BETACAM-SP tape recorder. The graphic software is suitable for producing two-dimensional pictures, so, for example, only cross-sections of three-dimensional simulations can be shown. To produce a movie of typically 90 seconds duration, the graphic software and the particle model need about 10 hours of CPU time on a CDC CYBER 990, and the CGM metafile has a size of about 1.4 GByte.

  19. A software platform for continuum modeling of ion channels based on unstructured mesh

    NASA Astrophysics Data System (ADS)

    Tu, B.; Bai, S. Y.; Chen, M. X.; Xie, Y.; Zhang, L. B.; Lu, B. Z.

    2014-01-01

    Most traditional continuum molecular modeling has adopted finite difference or finite volume methods based on a structured mesh (grid). Unstructured meshes were only occasionally used, but an increasing number of applications is emerging in molecular simulation. To facilitate the continuum modeling of biomolecular systems based on unstructured meshes, we are developing a software platform with tools that are particularly beneficial to those approaches. This work describes the software system specifically for the simulation of a typical, complex molecular procedure: ion transport through a three-dimensional channel system that consists of a protein and a membrane. The platform contains three parts: a meshing tool chain for ion channel systems, a parallel finite element solver for the Poisson-Nernst-Planck equations describing the electrodiffusion process of ion transport, and a visualization program for continuum molecular modeling. The meshing tool chain in the platform, which consists of a set of mesh generation tools, is able to generate high-quality surface and volume meshes for ion channel systems. The parallel finite element solver in our platform is based on the parallel adaptive finite element package PHG, which was developed by one of the authors [1]. As a featured component of the platform, a new visualization program, VCMM, has specifically been developed for continuum molecular modeling with an emphasis on providing useful facilities for unstructured mesh-based methods and for their output analysis and visualization. VCMM provides a graphic user interface and consists of three modules: a molecular module, a meshing module and a numerical module. A demonstration of the platform is provided with a study of two real proteins, the connexin 26 and hemolysin ion channels.
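
    For reference, the Poisson-Nernst-Planck system that such solvers discretize is, in its standard textbook form (the abstract does not give the exact formulation used, so take this as the generic version):

        \frac{\partial c_i}{\partial t}
          = \nabla \cdot \left[ D_i \left( \nabla c_i
            + \frac{z_i e}{k_B T}\, c_i\, \nabla \phi \right) \right],
        \qquad
        -\nabla \cdot \left( \epsilon \nabla \phi \right)
          = \rho_{\mathrm{fixed}} + \sum_i z_i e\, c_i

    Here c_i, D_i and z_i are the concentration, diffusivity and valence of ion species i, phi is the electrostatic potential, epsilon the permittivity, and rho_fixed the fixed charge density of the protein.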

  20. Hierarchical programming for data storage and visualization

    USGS Publications Warehouse

    Donovan, John M.; Smith, Peter E.; ,

    2001-01-01

    Graphics software is an essential tool for interpreting, analyzing, and presenting data from multidimensional hydrodynamic models used in estuarine and coastal ocean studies. The post-processing of time-varying three-dimensional model output presents unique requirements for data visualization because of the large volume of data that can be generated and the multitude of time scales that must be examined. Such data can relate to estuarine or coastal ocean environments and come from numerical models or field instruments. One useful software tool for the display, editing, visualization, and printing of graphical data is the Gr application, written by the first author for use in the U.S. Geological Survey San Francisco Bay Program. The Gr application has been made available to the public via the Internet since the year 2000. The Gr application is written in the Java (Sun Microsystems, Nov. 29, 2001) programming language and uses the Extensible Markup Language (XML) standard for hierarchical data storage. Gr presents a hierarchy of objects to the user that can be edited using a common interface. Java's object-oriented capabilities allow Gr to treat data, graphics, and tools equally and to save them all to a single XML file.
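
    The single-file hierarchical storage idea can be illustrated with a short sketch. Gr itself is written in Java; the following Python stand-in, with invented element names, shows only the concept of serializing heterogeneous objects into one XML document.

        import xml.etree.ElementTree as ET

        # Heterogeneous objects (data and graphics) share one hierarchy.
        session = ET.Element("session")
        data = ET.SubElement(session, "timeseries", name="stage", units="m")
        data.text = "1.2 1.4 1.3"
        plot = ET.SubElement(session, "plot", kind="line", source="stage")

        ET.ElementTree(session).write("session.xml")      # single-file save
        print(ET.tostring(session, encoding="unicode"))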

  1. Field-aligned currents and large scale magnetospheric electric fields

    NASA Technical Reports Server (NTRS)

    Dangelo, N.

    1980-01-01

    D'Angelo's model of polar cap electric fields (1977) was used to visualize how high-latitude field-aligned currents are driven by the solar wind generator. The region 1 and region 2 currents of Iijima and Potemra (1976) and the cusp field-aligned currents of Wilhjelm et al. (1978) and McDiarmid et al. (1978) are apparently driven by different generators, although in both cases the solar wind is their ultimate source.

  2. Assessing Functional Vision Using Microcomputers.

    ERIC Educational Resources Information Center

    Spencer, Simon; Ross, Malcolm

    1989-01-01

    The paper describes a software system which uses microcomputers to aid in the assessment of functional vision in visually impaired students. The software also aims to be visually stimulating and to develop hand-eye coordination, visual memory, and cognitive abilities. (DB)

  3. Development and evaluation of SOA-based AAL services in real-life environments: a case study and lessons learned.

    PubMed

    Stav, Erlend; Walderhaug, Ståle; Mikalsen, Marius; Hanke, Sten; Benc, Ivan

    2013-11-01

    The proper use of ICT services can support seniors in living independently longer. While such services are starting to emerge, current proprietary solutions are often expensive, cover only isolated parts of seniors' needs, and lack support for sharing information between services and between users. For developers, the challenge is that it is complex and time-consuming to develop high-quality, interoperable services, and new techniques are needed to simplify development and reduce development costs. This paper provides a complete view of the experiences gained in the MPOWER project with respect to using model-driven development (MDD) techniques for Service Oriented Architecture (SOA) system development in the Ambient Assisted Living (AAL) domain. To address this challenge, the approach of the European research project MPOWER (2006-2009) was to investigate and record the user needs, define a set of reusable software services based on these needs, and then implement pilot systems using these services. Further, a model-driven toolchain covering key development phases was developed to support software developers through this process. Evaluations were conducted both on the technical artefacts (methodology and tools) and on end-user experience from using the pilot systems at trial sites. The outcome of the work on user needs is a knowledge base recorded as a Unified Modeling Language (UML) model. This comprehensive model describes actors, use cases, and features derived from these. The model further includes the design of a set of software services, including full trace information back to the features and use cases motivating their design. Based on the model, the services were implemented for use in SOA systems, and they are publicly available as open source software. The services were successfully used in the realization of two pilot applications. There is therefore a direct and traceable link from the user needs of the elderly, through the service design knowledge base, to the service and pilot implementations. The evaluation of the SOA approach on the developers in the project revealed that SOA is useful with respect to job performance and quality. Furthermore, they think SOA is easy to use and supports development of AAL applications. An important finding is that the developers clearly report that they intend to use SOA in the future, but not for all types of projects. With respect to using model-driven development in web service design and implementation, the developers reported that it was useful. However, it is important that the code generated from the models is correct if the full potential of MDD is to be achieved. The pilots and their evaluation at the trial sites showed that the services of the platform are sufficient to create suitable systems for end users in the domain. A SOA platform with a set of reusable domain services is a suitable foundation for more rapid development and tailoring of assisted living systems covering recurring needs among elderly users. It is feasible to realize a toolchain for model-driven development of SOA applications in the AAL domain, and such a toolchain can be accepted and found useful by software developers.

  4. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. The software supports adding programs at the graphics processing unit (GPU) level for improved performance, and it improves upon the messaging interface it exposes for use as a visualization server.

  5. The enhanced Software Life Cycle Support Environment (ProSLCSE): Automation for enterprise and process modeling

    NASA Technical Reports Server (NTRS)

    Milligan, James R.; Dutton, James E.

    1993-01-01

    In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.

  6. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enters the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques, are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.

  7. Cognitive programs: software for attention's executive

    PubMed Central

    Tsotsos, John K.; Kruijne, Wouter

    2014-01-01

    What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure the expected outcomes are achieved. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of the Visual Task Executive (vTE), the Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430

  8. GABBs: Cyberinfrastructure for Self-Service Geospatial Data Exploration, Computation, and Sharing

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Zhao, L.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2016-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices. This is especially true in the scientific community, where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise, among other challenges. In addressing these needs, the Geospatial data Analysis Building Blocks (GABBs) project aims at building geospatial modeling, data analysis and visualization capabilities in an open-source web platform, HUBzero. Funded by NSF's Data Infrastructure Building Blocks initiative, GABBs is creating a geospatial data architecture that integrates spatial data management, mapping and visualization, and interfaces in the HUBzero platform for scientific collaborations. The geo-rendering-enabled Rappture toolkit, a generic Python mapping library, geospatial data exploration and publication tools, and an integrated online geospatial data management solution are among the software building blocks from the project. The GABBs software will be available through Amazon's AWS Marketplace VM images and as open source. Hosting services are also available to the user community. The outcome of the project will enable researchers and educators to self-manage their scientific data, rapidly create GIS-enabled tools, share geospatial data and tools on the web, and build dynamic workflows connecting data and tools, all without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the GABBs architecture, toolkits and libraries, and showcase scientific use cases that utilize GABBs capabilities, as well as the challenges and solutions for GABBs to interoperate with other cyberinfrastructure platforms.

  9. PKSolver: An add-in program for pharmacokinetic and pharmacodynamic data analysis in Microsoft Excel.

    PubMed

    Zhang, Yong; Huo, Meirong; Zhou, Jianping; Xie, Shaofei

    2010-09-01

    This study presents PKSolver, a freely available menu-driven add-in program for Microsoft Excel written in Visual Basic for Applications (VBA), for solving basic problems in pharmacokinetic (PK) and pharmacodynamic (PD) data analysis. The program provides a range of modules for PK and PD analysis including noncompartmental analysis (NCA), compartmental analysis (CA), and pharmacodynamic modeling. Two special built-in modules, multiple absorption sites (MAS) and enterohepatic circulation (EHC), were developed for fitting the double-peak concentration-time profile based on the classical one-compartment model. In addition, twenty frequently used pharmacokinetic functions were encoded as macros and can be accessed directly in an Excel spreadsheet. To evaluate the program, a detailed comparison of modeling PK data using PKSolver and the professional PK/PD software packages WinNonlin and Scientist was performed. The results showed that the parameters estimated with PKSolver were satisfactory. In conclusion, PKSolver simplifies the PK and PD data analysis process, and its output can be generated in Microsoft Word in the form of an integrated report. The program provides pharmacokinetic researchers with a fast and easy-to-use tool for routine and basic PK and PD data analysis with a user-friendly interface.
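
    The classical one-compartment model with first-order absorption that underlies the MAS and EHC modules reduces, for a single oral dose, to the Bateman equation. The sketch below is a generic illustration with invented parameter values, not PKSolver code (which is VBA).

        import math

        def concentration(t, dose, F, V, ka, ke):
            # Bateman equation: first-order absorption (ka) and elimination (ke),
            # bioavailability F, volume of distribution V; assumes ka != ke.
            return (F * dose * ka) / (V * (ka - ke)) * (
                math.exp(-ke * t) - math.exp(-ka * t))

        for t in (0.5, 1, 2, 4, 8):   # hours; all values illustrative
            print(t, round(concentration(t, dose=500, F=0.9, V=30, ka=1.2, ke=0.2), 2))

    Double-peak profiles of the kind the MAS module fits can be approximated by summing two such terms with staggered lag times.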

  10. "Usability of data integration and visualization software for multidisciplinary pediatric intensive care: a human factors approach to assessing technology".

    PubMed

    Lin, Ying Ling; Guerguerian, Anne-Marie; Tomasi, Jessica; Laussen, Peter; Trbovich, Patricia

    2017-08-14

    Intensive care clinicians use several sources of data in order to inform decision-making. We set out to evaluate a new interactive data integration platform called T3™ made available for pediatric intensive care. Three primary functions are supported: tracking of physiologic signals, displaying trajectory, and triggering decisions by highlighting data or estimating risk of patient instability. We designed a human factors study to identify interface usability issues, to measure ease of use, and to describe interface features that may enable or hinder clinical tasks. Twenty-two participants, consisting of bedside intensive care physicians, nurses, and respiratory therapists, tested the T3™ interface in a simulation laboratory setting. Twenty tasks were performed with a true-to-setting, fully functional prototype populated with physiological and therapeutic intervention patient data. The primary data visualization was a time series; secondary visualizations were: 1) shading of out-of-target values, 2) mini-trends with exaggerated maxima and minima (sparklines), and 3) a bar graph of a 16-parameter indicator. Task completion was video recorded and assessed using a use-error rating scale. Usability issues were classified in the context of the task and the type of clinician. A severity rating scale was used to rate the potential clinical impact of usability issues. The time series supported tracking a single parameter but only partially supported determining patient trajectory from multiple parameters. Visual pattern overload was observed with multiple parameter data streams. Automated data processing using shading and sparklines was often ignored, but the 16-parameter data-reduction algorithm, displayed as a persistent bar graph, was visually intuitive. However, by selecting or automatically processing data, the triggering aids distorted the raw data that clinicians use regularly. Consequently, clinicians could not rely on the new data representations because they did not know how they were established or derived. Usability issues, observed through contextual use, provided directions for tangible design improvements of data integration software that may lessen use errors and promote safe use. Data-driven decision making can benefit from iterative interface redesign involving clinician-users in simulated environments. This study is a first step in understanding how software can support clinicians' decision making with integrated continuous monitoring data. Importantly, testing of similar platforms by all the different disciplines who may become clinician users is a fundamental step necessary to understand the impact of decision aids on clinical outcomes.

  11. Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information

    NASA Astrophysics Data System (ADS)

    Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam

    2016-10-01

    In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric to infer causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then propose a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is a first step towards developing novel non-surgical treatments for epilepsy.
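
    For context, the directed information from X^N to Y^N is usually defined, following Massey, as the causally conditioned sum

        I(X^N \to Y^N) \;=\; \sum_{n=1}^{N} I\left(X^{n};\, Y_{n} \mid Y^{n-1}\right)

    where X^n denotes (X_1, ..., X_n). Conditioning only on past outputs Y^{n-1} is what makes the quantity directional, unlike ordinary mutual information. This is the standard definition of the quantity named in the abstract, not an expression taken from the paper itself.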

  12. Improvements to the APBS biomolecular solvation software suite: Improvements to the APBS Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth; Engel, Dave; Star, Keith

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had impact in the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advances in computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package, including: a Poisson-Boltzmann analytical and a semi-analytical solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph-theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.

  13. Cenozoic Antarctic DiatomWare/BugCam: An aid for research and teaching

    USGS Publications Warehouse

    Wise, S.W.; Olney, M.; Covington, J.M.; Egerton, V.M.; Jiang, S.; Ramdeen, D.K.; ,; Schrader, H.; Sims, P.A.; Wood, A.S.; Davis, A.; Davenport, D.R.; Doepler, N.; Falcon, W.; Lopez, C.; Pressley, T.; Swedberg, O.L.; Harwood, D.M.

    2007-01-01

    Cenozoic Antarctic DiatomWare/BugCam© is an interactive, icon-driven digital-image database/software package that displays over 500 illustrated Cenozoic Antarctic diatom taxa along with original descriptions (including over 100 generic and 20 family-group descriptions). This digital catalog is designed primarily for use by micropaleontologists working in the field (at sea or on the Antarctic continent) where hard-copy literature resources are limited. This new package will also be useful for classroom/lab teaching as well as for any paleontologists making or refining taxonomic identifications at the microscope. The database (Cenozoic Antarctic DiatomWare) is displayed via a custom software program (BugCam) written in Visual Basic for use on PCs running Windows 95 or later operating systems. BugCam is a flexible image display program that utilizes an intuitive thumbnail “tree” structure for navigation through the database. The data are stored in Microsoft Excel spreadsheets; hence, no separate relational database program is necessary to run the package.

  14. Multiple-Flat-Panel System Displays Multidimensional Data

    NASA Technical Reports Server (NTRS)

    Gundo, Daniel; Levit, Creon; Henze, Christopher; Sandstrom, Timothy; Ellsworth, David; Green, Bryan; Joly, Arthur

    2006-01-01

    The NASA Ames hyperwall is a display system designed to facilitate the visualization of sets of multivariate and multidimensional data like those generated in complex engineering and scientific computations. The hyperwall includes a 7×7 matrix of computer-driven flat-panel video display units, each presenting an image of 1,280 × 1,024 pixels. The term hyperwall reflects the fact that this system is a more capable successor to prior computer-driven multiple-flat-panel display systems known by names that include the generic term powerwall and the trade names PowerWall and Powerwall. Each of the 49 flat-panel displays is driven by a rack-mounted, dual-central-processing-unit, workstation-class personal computer equipped with a high-performance graphical-display circuit card and with a hard-disk drive having a storage capacity of 100 GB. Each such computer is a slave node in a master/slave computing/data-communication system (see Figure 1). The computer that acts as the master node is similar to the slave-node computers, except that it runs the master portion of the system software and is equipped with a keyboard and mouse for control by a human operator. The system utilizes commercially available master/slave software along with custom software that enables the human controller to interact simultaneously with any number of selected slave nodes. In a powerwall, a single rendering task is spread across multiple processors and then the multiple outputs are tiled into one seamless super-display. It must be noted that the hyperwall concept subsumes the powerwall concept in that a single scene could be rendered as a mosaic image on the hyperwall. However, the hyperwall offers a wider set of capabilities to serve a different purpose: The hyperwall concept is one of (1) simultaneously displaying multiple different but related images, and (2) providing means for composing and controlling such sets of images. In place of elaborate software or hardware crossbar switches, the hyperwall concept substitutes reliance on the human visual system for integration, synthesis, and discrimination of patterns in complex and high-dimensional data spaces represented by the multiple displayed images. The variety of multidimensional data sets that can be displayed on the hyperwall is practically unlimited. For example, Figure 2 shows a hyperwall display of surface pressures and streamlines from a computational simulation of airflow about an aerospacecraft at various Mach numbers and angles of attack. In this display, Mach numbers increase from left to right and angles of attack increase from bottom to top. That is, all images in the same column represent simulations at the same Mach number, while all images in the same row represent simulations at the same angle of attack. The same viewing transformations and the same mapping from surface pressure to colors were used in generating all the images.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Steven Karl; Determan, John C.

    Dynamic System Simulation (DSS) models of fissile solution systems have been developed and verified against a variety of historical configurations. DSS techniques have been applied specifically to subcritical accelerator-driven systems using fissile solution fuels of uranium. Initial DSS models were developed in DESIRE, a specialized simulation scripting language. In order to tailor the DSS models to specifically meet the needs of system designers, they were converted to a Visual Studio implementation, and one of these was subsequently converted to National Instruments' LabVIEW for human factors engineering and operator training. Specific operational characteristics of subcritical accelerator-driven systems have been examined using a DSS model tailored to this particular class of fissile-solution-fueled systems.

  16. FY 2002 Report on Software Visualization Techniques for IV and V

    NASA Technical Reports Server (NTRS)

    Fotta, Michael E.

    2002-01-01

    One of the major challenges software engineers often face in performing IV&V is developing an understanding of a system created by a development team they have not been part of. As budgets shrink and software increases in complexity, this challenge will become even greater as these software engineers face increased time and resource constraints. This research will determine which current aspects of providing this understanding (e.g., code inspections, use of control graphs, use of adjacency matrices, requirements traceability) are critical to performing IV&V and amenable to visualization techniques. We will then develop state-of-the-art software visualization techniques to facilitate the use of these aspects to understand software and perform IV&V.

  17. Modeling some two-dimensional relativistic phenomena using an educational interactive graphics software

    NASA Astrophysics Data System (ADS)

    Sastry, G. P.; Ravuri, Tushar R.

    1990-11-01

    This paper describes several relativistic phenomena in two spatial dimensions that can be modeled using the collision program of Spacetime Software. These include the familiar aberration, the Doppler effect, the headlight effect, and the invariance of the speed of light in vacuum, in addition to the rather unfamiliar effects like the dragging of light in a moving medium, reflection at moving mirrors, Wigner rotation of noncommuting boosts, and relativistic rotation of shrinking and expanding rods. All these phenomena are exhibited by tracings of composite computer printouts of the collision movie. It is concluded that an interactive educational graphics software with pleasing visuals can have considerable investigative power packed within it.
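
    Two of the listed effects, aberration and the Doppler effect, follow from the standard Lorentz transformation of a light ray; these are textbook formulas, not expressions taken from the paper. For a frame S' moving at speed beta*c along the x-axis of S, a ray at angle theta to x in S appears in S' at angle theta' and frequency nu':

        \cos\theta' = \frac{\cos\theta - \beta}{1 - \beta\cos\theta},
        \qquad
        \nu' = \gamma\,\nu\,(1 - \beta\cos\theta),
        \qquad
        \gamma = \frac{1}{\sqrt{1 - \beta^{2}}}

    The headlight effect is the forward crowding of ray directions that the aberration formula implies as beta approaches 1.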

  18. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses occur only in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
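
    The proposed mechanism, firing only in brief windows where excitation exceeds a delayed suppressive input, can be caricatured in a few lines. The drive function, delay, weight, and threshold below are invented; they illustrate the interplay, not the paper's fitted model.

        import math

        def excitation(t):
            return max(0.0, math.sin(6.0 * t))       # stand-in stimulus-driven input

        def response(t, delay=0.12, weight=0.9):
            supp = weight * excitation(t - delay)    # delayed suppressive input
            return max(0.0, excitation(t) - supp)    # firing allowed only in brief windows

        ts = [i * 0.01 for i in range(200)]
        windows = [t for t in ts if response(t) > 0.3]
        print(f"firing permitted in {len(windows)} of {len(ts)} time bins")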

  19. A Hyperbolic Ontology Visualization Tool for Model Application Programming Interface Documentation

    NASA Technical Reports Server (NTRS)

    Hyman, Cody

    2011-01-01

    Spacecraft modeling, a critically important portion in validating planned spacecraft activities, is currently carried out using a time consuming method of mission to mission model implementations and integration. A current project in early development, Integrated Spacecraft Analysis (ISCA), aims to remedy this hindrance by providing reusable architectures and reducing time spent integrating models with planning and sequencing tools. The principle objective of this internship was to develop a user interface for an experimental ontology-based structure visualization of navigation and attitude control system modeling software. To satisfy this, a number of tree and graph visualization tools were researched and a Java based hyperbolic graph viewer was selected for experimental adaptation. Early results show promise in the ability to organize and display large amounts of spacecraft model documentation efficiently and effectively through a web browser. This viewer serves as a conceptual implementation for future development but trials with both ISCA developers and end users should be performed to truly evaluate the effectiveness of continued development of such visualizations.

  20. Anatomy of an anesthesia information management system.

    PubMed

    Shah, Nirav J; Tremper, Kevin K; Kheterpal, Sachin

    2011-09-01

    Anesthesia information management systems (AIMS) have become more prevalent as more sophisticated hardware and software have increased usability and reliability. National mandates and incentives have driven adoption as well. AIMS can be developed in one of several software models (Web-based, client/server, or incorporated into a medical device). Irrespective of the development model, the best AIMS have a feature set that allows for comprehensive management of workflow for an anesthesiologist. Key features include preoperative, intraoperative, and postoperative documentation; quality assurance; billing; compliance and operational reporting; patient and operating room tracking; and integration with hospital electronic medical records.

  1. 3D graphics hardware accelerator programming methods for real-time visualization systems

    NASA Astrophysics Data System (ADS)

    Souetov, Andrew E.

    2001-02-01

    The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of such type of software compels programmers to use different types of CASE systems in design and development process. The subject under discussion is integration of such systems in a development process, their effective use, and the combination of these new methods with the necessity to produce optimal codes. A method of simulation integration and modeling tools in real-time software development cycle is described.

  2. Modern Scientific Visualization is more than Just Pretty Pictures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, E Wes; Rubel, Oliver; Wu, Kesheng

    2008-12-05

    While the primary product of scientific visualization is images and movies, its primary objective is really scientific insight. Too often, the focus of visualization research is on the product, not the mission. This paper presents two case studies, both of which appear in previous publications, that focus on using visualization technology to produce insight. The first applies "Query-Driven Visualization" concepts to laser wakefield simulation data to help identify and analyze the process of beam formation. The second uses topological analysis to provide a quantitative basis for (i) understanding the mixing process in hydrodynamic simulations, and (ii) performing comparative analysis of data from two different types of simulations that model hydrodynamic instability.

  3. A case for spiking neural network simulation based on configurable multiple-FPGA systems.

    PubMed

    Yang, Shufan; Wu, Qiang; Li, Renfa

    2011-09-01

    Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks, however, cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such systems, offers the possibility of generating independent spikes precisely and outputting spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the inherent parallelism of the hardware. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can put together sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.

  5. McIDAS-V: Advanced Visualization for 3D Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2010-12-01

    McIDAS-V is a Java-based, open-source, freely available software package for analysis and visualization of geophysical data. Its advanced capabilities provide highly interactive 4-D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms, and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis, and visualization are being applied to hyper-spectral sounding retrievals, e.g., AIRS and IASI, of moisture and cloud density to interrogate and analyze their 3D structure, as well as to validate them against instruments such as CALIPSO, CloudSat, and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. Community-defined CF conventions for gridded data are understood by the software, so such data can be immediately imported into the application. This presentation will show examples of how McIDAS-V is used in 3-dimensional data analysis, display, and evaluation.

  6. Plume capture by a migrating ridge: Analog geodynamic experiments

    NASA Astrophysics Data System (ADS)

    Mendez, J. S.; Hall, P.

    2010-12-01

    Paleomagnetic data from the Hawaii-Emperor Seamount Chain (HESC) suggest that the Hawaiian hotspot moved rapidly (~40 mm/yr) between 81 and 47 Ma but has remained relatively stationary since that time. This implies that the iconic bend in the HESC may in fact reflect the transition from a period of rapid hotspot motion to a stationary state, rather than a change in motion of the Pacific plate. Tarduno et al. (2009) have suggested that this period of rapid hotspot motion might be the surface expression of a plume conduit returning to a largely vertical orientation after having been “captured” and tilted by a migrating mid-ocean ridge. We report on a series of analog fluid dynamic experiments designed to characterize the interaction between a migrating spreading center and a thermally buoyant mantle plume. Experiments were conducted in a clear acrylic tank (100 cm x 70 cm x 50 cm) filled with commercial-grade high-fructose corn syrup. Plate-driven flow is modeled by dragging two sheets of Mylar film (driven by independent DC motors) in opposite directions over the surface of the fluid. Ridge migration is achieved by moving the point at which the Mylar sheets diverge using a separate motor drive. Buoyant plume flow is modeled using corn syrup introduced into the bottom of the tank from an external, heated, pressurized reservoir. Small (~2 mm diameter), neutrally buoyant Delrin spheres are mixed into the reservoir of plume material to aid in visualization. Plate velocities and the ridge migration rate are controlled, and plume temperature monitored, using LabView software. Experiments are recorded on digital video, which is then analyzed using digital image analysis software to track the position and shape of the plume conduit throughout the course of the experiment. The intersection of the plume conduit with the surface of the fluid is taken as an analog for the locus of hotspot volcanism and tracked as a function of time to obtain a hotspot migration rate. Experiments are scaled to the Earth's mantle through a combination of a Peclet number and a plume buoyancy number. A range of spreading rates, ridge migration rates, and plume excess temperatures representative of the Earth are considered.
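
    The last step of the workflow described above, converting a time series of tracked conduit positions into a hotspot migration rate, amounts to a linear fit of position against time. A minimal Python sketch of that step, assuming a hypothetical tracking-output file and an invented scale factor (in practice the scaling comes from the Peclet and buoyancy numbers mentioned in the abstract):

    ```python
    import numpy as np

    # Hypothetical tracking output: one row per video frame, with columns
    # time [s], x and y position of the conduit-surface intersection [cm].
    track = np.loadtxt("conduit_track.csv", delimiter=",", skiprows=1)
    t, x, y = track[:, 0], track[:, 1], track[:, 2]

    # Least-squares slope of each coordinate vs. time gives the
    # laboratory-frame migration velocity components [cm/s].
    vx = np.polyfit(t, x, 1)[0]
    vy = np.polyfit(t, y, 1)[0]
    lab_rate = np.hypot(vx, vy)

    # A single assumed factor converts the laboratory rate to a
    # mantle-equivalent rate [mm/yr]; placeholder value only.
    SCALE_CM_PER_S_TO_MM_PER_YR = 2.0e6
    print("hotspot migration rate ~", lab_rate * SCALE_CM_PER_S_TO_MM_PER_YR, "mm/yr")
    ```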

  7. Status of Ongoing Work in Software TRAs/TRLs

    DTIC Science & Technology

    2010-04-29

    to changes/updates being driven by corporate market dynamics • Changes not under the control or influence of the PMO! • On programs with long... observed and reported research articles, peer-reviewed white papers, point papers, early conceptual models in academic, experimental basic research

  8. Teaching and learning the Hodgkin-Huxley model based on software developed in NEURON's programming language hoc.

    PubMed

    Hernández, Oscar E; Zurek, Eduardo E

    2013-05-15

    We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon.
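
    Several of the quantities SENB reports are standard textbook formulas. A short Python sketch (with illustrative squid-axon values, not taken from the paper) shows how the sodium and potassium equilibrium potentials and the passive cable constants can be computed:

    ```python
    import math

    R, F, T = 8.314, 96485.0, 279.45  # gas constant, Faraday, ~6.3 C in kelvin

    def nernst(z, conc_out, conc_in):
        """Equilibrium potential in mV for an ion of valence z (Nernst equation)."""
        return 1000.0 * (R * T / (z * F)) * math.log(conc_out / conc_in)

    # Classic squid-axon concentrations (mM); values are illustrative.
    E_Na = nernst(+1, 440.0, 50.0)   # ~ +52 mV
    E_K  = nernst(+1, 20.0, 400.0)   # ~ -72 mV

    # Passive cable constants for a cylindrical axon.
    d   = 500e-4   # diameter [cm] (500 um)
    R_m = 2000.0   # specific membrane resistance [ohm*cm^2]
    R_i = 35.4     # axial resistivity [ohm*cm]
    C_m = 1.0e-6   # specific membrane capacitance [F/cm^2]

    space_const = math.sqrt((d / 4.0) * (R_m / R_i))  # lambda [cm]
    time_const  = R_m * C_m                           # tau [s]
    print(E_Na, E_K, space_const, time_const)
    ```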

  9. Organization of area hV5/MT+ in subjects with homonymous visual field defects.

    PubMed

    Papanikolaou, Amalia; Keliris, Georgios A; Papageorgiou, T Dorina; Schiefer, Ulrich; Logothetis, Nikos K; Smirnakis, Stelios M

    2018-04-06

    Damage to the primary visual cortex (V1) leads to a visual field loss (scotoma) in the retinotopically corresponding part of the visual field. Nonetheless, a small amount of residual visual sensitivity persists within the blind field. This residual capacity has been linked to activity observed in the middle temporal area complex (V5/MT+). However, it remains unknown whether the organization of hV5/MT+ changes following early visual cortical lesions. We studied the organization of area hV5/MT+ of five patients with dense homonymous defects in a quadrant of the visual field as a result of partial V1+ or optic radiation lesions. To do so, we developed a new method, which models the boundaries of population receptive fields directly from the BOLD signal of each voxel in the visual cortex. We found responses in hV5/MT+ arising inside the scotoma for all patients and identified two possible sources of activation: 1) responses might originate from partially lesioned parts of area V1 corresponding to the scotoma, and 2) responses can also originate independently of area V1 input, suggesting the existence of functional V1-bypassing pathways. Apparently, visually driven activity observed in hV5/MT+ is not sufficient to mediate conscious vision. More surprisingly, visually driven activity in corresponding regions of V1 and early extrastriate areas, including hV5/MT+, did not guarantee visual perception in the group of patients with post-geniculate lesions that we examined. This suggests that the fine coordination of visual activity patterns across visual areas may be an important determinant of whether visual perception persists following visual cortical lesions. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. HydroDesktop: An Open Source GIS-Based Platform for Hydrologic Data Discovery, Visualization, and Analysis

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Kadlec, J.; Cao, Y.; Grover, D.; Horsburgh, J. S.; Whiteaker, T.; Goodall, J. L.; Valentine, D. W.

    2010-12-01

    A growing number of hydrologic information servers are being deployed by government agencies, university networks, and individual researchers using the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS). The CUAHSI HIS Project has developed a standard software stack, called HydroServer, for publishing hydrologic observations data. It includes the Observations Data Model (ODM) database and Water Data Service web services, which together enable publication of data on the Internet in a standard format called Water Markup Language (WaterML). Metadata describing the available datasets hosted on these servers are compiled within a central metadata catalog, called HIS Central, at the San Diego Supercomputer Center and are searchable through a set of predefined web-service-based queries. Together, these servers and the central catalog service comprise a federated HIS of a scale and comprehensiveness never previously available. This presentation will briefly review/introduce the CUAHSI HIS system, with special focus on a new HIS software tool called "HydroDesktop" and the open source software development web portal, www.HydroDesktop.org, which supports community development and maintenance of the software. HydroDesktop is a client-side, desktop software application that acts as a search and discovery tool for exploring the distributed network of HydroServers, downloading specific data series, visualizing and summarizing data series, and exporting these to formats needed for analysis by external software. HydroDesktop is based on the open source DotSpatial GIS developer toolkit, which provides it with map-based data interaction and visualization, and a plug-in interface that can be used by third-party developers and researchers to easily extend the software using Microsoft .NET programming languages. HydroDesktop plug-ins that are presently available or currently under development within the project and by third-party collaborators include functions for data search and discovery, extensive graphing, data editing and export, HydroServer exploration, integration with the OpenMI workflow and modeling system, and an interface for data analysis through the R statistical package.
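
    As a sketch of the kind of client-side data access HydroDesktop automates, the following Python fragment requests a WaterML time series from a hypothetical HydroServer endpoint and extracts the observation values. The URL, query parameters, and site/variable codes are illustrative assumptions, not an actual CUAHSI endpoint:

    ```python
    import requests
    import xml.etree.ElementTree as ET

    # Hypothetical Water Data Service endpoint and query parameters.
    url = "http://example.org/cuahsi_1_1.asmx/GetValuesObject"
    params = {
        "location": "NWIS:10109000",   # site code (illustrative)
        "variable": "NWIS:00060",      # variable code (illustrative)
        "startDate": "2010-01-01",
        "endDate": "2010-01-31",
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()

    # WaterML is XML; namespaces vary by service version, so match on the
    # local tag name rather than a fixed namespace.
    root = ET.fromstring(resp.content)
    values = [
        (el.get("dateTime"), float(el.text))
        for el in root.iter()
        if el.tag == "value" or el.tag.endswith("}value")
    ]
    print(len(values), "observations; first:", values[:1])
    ```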

  11. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. © 2011.
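
    For reference, the simplest member of this class of models, the two-dimensional shallow water equations over bathymetry b(x, y), takes the conservative form used by shock-capturing finite volume methods (standard notation, not copied from the paper):

    ```latex
    \begin{aligned}
      h_t + (hu)_x + (hv)_y &= 0,\\
      (hu)_t + \left(hu^2 + \tfrac{1}{2}gh^2\right)_x + (huv)_y &= -\,g h\, b_x,\\
      (hv)_t + (huv)_x + \left(hv^2 + \tfrac{1}{2}gh^2\right)_y &= -\,g h\, b_y,
    \end{aligned}
    ```

    Here $h$ is the water depth, $(u,v)$ the depth-averaged velocity, $g$ the gravitational acceleration, and $b$ the bottom elevation; dry states correspond to $h = 0$.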

  12. Authentic Astronomical Discovery in Planetariums: Data-Driven Immersive Lectures

    NASA Astrophysics Data System (ADS)

    Wyatt, Ryan Jason

    2018-01-01

    Planetariums are akin to “branch offices” for astronomy in major cities and other locations around the globe. With immersive, fulldome video technology, modern digital planetariums offer the opportunity to integrate authentic astronomical data into both pre-recorded shows and live lectures. At the California Academy of Sciences Morrison Planetarium, we host the monthly Benjamin Dean Astronomy Lecture Series, which features researchers describing their cutting-edge work to well-informed lay audiences. The Academy’s visualization studio and engineering teams work with researchers to visualize their data in both pre-rendered and real-time formats, and these visualizations are integrated into a variety of programs—including lectures! The assets are then made available to any other planetariums with similar software to support their programming. A lecturer can thus give the same immersive presentation to audiences in a variety of planetariums. The Academy has also collaborated with Chicago’s Adler Planetarium to bring the Kavli Fulldome Lecture Series to San Francisco, and the two theaters have also linked together in live “domecasts” to share real-time content with audiences in both cities. These lecture series and other similar projects suggest a bright future for astronomers to bring their research to the public in an immersive and visually compelling format.

  13. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd

    2017-01-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. Moreover, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
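
    A minimal NumPy/SciPy sketch of the four CPM steps listed above (feature selection by edge-behavior correlation, summarization by summing selected edges, linear model building, and leave-one-out prediction); the data shapes, threshold, and synthetic data are illustrative, not the authors' reference implementation:

    ```python
    import numpy as np
    from scipy import stats

    def cpm_loocv(edges, behavior, p_thresh=0.01):
        """edges: (n_subjects, n_edges) connectivity; behavior: (n_subjects,)."""
        n = len(behavior)
        preds = np.empty(n)
        for test in range(n):
            train = np.arange(n) != test
            X, y = edges[train], behavior[train]
            # 1) feature selection: edges correlated with behavior in training set
            rs, ps = zip(*(stats.pearsonr(X[:, j], y) for j in range(X.shape[1])))
            keep = (np.array(ps) < p_thresh) & (np.array(rs) > 0)  # positive network
            # 2) feature summarization: sum of the selected edge strengths
            strength = X[:, keep].sum(axis=1)
            # 3) model building: one-parameter linear fit
            slope, intercept = np.polyfit(strength, y, 1)
            # 4) predict the held-out subject from the same edges
            preds[test] = slope * edges[test, keep].sum() + intercept
        return preds

    rng = np.random.default_rng(0)
    edges = rng.normal(size=(30, 200))
    behavior = edges[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=30)
    preds = cpm_loocv(edges, behavior)
    print("prediction r =", stats.pearsonr(preds, behavior)[0])
    ```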

  14. BrainLiner: A Neuroinformatics Platform for Sharing Time-Aligned Brain-Behavior Data

    PubMed Central

    Takemiya, Makoto; Majima, Kei; Tsukamoto, Mitsuaki; Kamitani, Yukiyasu

    2016-01-01

    Data-driven neuroscience aims to find statistical relationships between brain activity and task behavior from large-scale datasets. To facilitate high-throughput data processing and modeling, we created BrainLiner as a web platform for sharing time-aligned, brain-behavior data. Using an HDF5-based data format, BrainLiner treats brain activity and data related to behavior with the same salience, aligning both behavioral and brain activity data on a common time axis. This facilitates learning the relationship between behavior and brain activity. Using a common data file format also simplifies data processing and analyses. Properties describing data are unambiguously defined using a schema, allowing machine-readable definition of data. The BrainLiner platform allows users to upload and download data, as well as to explore and search for data from the web platform. A WebGL-based data explorer can visualize highly detailed neurophysiological data from within the web browser, and a data-driven search feature allows users to search for similar time windows of data. This increases transparency, and allows for visual inspection of neural coding. BrainLiner thus provides an essential set of tools for data sharing and data-driven modeling. PMID:26858636
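
    The core idea, behavioral and neural signals stored side by side against one time axis, is easy to mimic with plain HDF5. A sketch using h5py; the group names, attributes, and signals here are invented for illustration and are not the actual BrainLiner schema:

    ```python
    import h5py
    import numpy as np

    fs = 1000.0                       # shared sampling rate [Hz]
    t = np.arange(0, 10.0, 1.0 / fs)  # common time axis, 10 s

    with h5py.File("session.h5", "w") as f:
        f.attrs["sampling_rate_hz"] = fs
        # Neural data: 64 channels, aligned to t.
        neural = f.create_dataset("brain/ecog", data=np.random.randn(t.size, 64))
        neural.attrs["unit"] = "uV"
        # Behavioral data on the SAME time axis: e.g., a 2-D cursor position.
        behav = f.create_dataset("behavior/cursor", data=np.random.randn(t.size, 2))
        behav.attrs["unit"] = "cm"
        f.create_dataset("time", data=t)

    # Because both streams share one clock, a time window selects aligned
    # samples from each with identical index arithmetic.
    with h5py.File("session.h5", "r") as f:
        i0, i1 = int(2.0 * fs), int(3.0 * fs)   # the 2-3 s window
        x = f["brain/ecog"][i0:i1]
        y = f["behavior/cursor"][i0:i1]
        print(x.shape, y.shape)
    ```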

  15. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

    Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework in achieving classification objectives.
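
    A user-defined classification function of the kind described, a per-point decision combining color, intensity, and normal orientation, might look like the following NumPy sketch. The attribute layout, material labels, and thresholds are assumptions for illustration, not the paper's rules:

    ```python
    import numpy as np

    def classify_points(rgb, intensity, normals):
        """Return an integer material label per point.
        rgb: (n,3) in [0,1]; intensity: (n,); normals: (n,3) unit vectors."""
        labels = np.zeros(len(rgb), dtype=np.int8)       # 0 = unclassified
        up = np.abs(normals @ np.array([0.0, 0.0, 1.0])) # |vertical component|

        brightness = rgb.mean(axis=1)
        reddish = (rgb[:, 0] > rgb[:, 1] + 0.1) & (rgb[:, 0] > rgb[:, 2] + 0.1)

        # Later assignments take precedence over earlier ones.
        labels[reddish & (up < 0.3)] = 1                     # brick: red-ish wall
        labels[(brightness > 0.6) & (intensity > 0.5)] = 2   # plaster: bright, reflective
        labels[(brightness < 0.3) & (up > 0.7)] = 3          # slate: dark, horizontal
        return labels

    n = 1000
    rng = np.random.default_rng(1)
    normals = rng.normal(size=(n, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    print(np.bincount(classify_points(rng.random((n, 3)), rng.random(n), normals)))
    ```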

  16. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for the intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily transport their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets' movement follows the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly onto the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for an extension to a full 3D tool. Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts that also enables direct exploration and analysis of large 3D flow fields, the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
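
    The Lagrangian pathlet idea is straightforward to prototype: each pathlet is a short trail of positions advected through the (possibly time-dependent) velocity field, with out-of-domain pathlets re-seeded. A minimal 2D sketch with Euler stepping; the velocity field and the crude re-seeding rule are made-up stand-ins for STRING's FPM-based seeding logic:

    ```python
    import numpy as np

    def velocity(p, t):
        """Example unsteady 2D field: a vortex whose core drifts in time."""
        cx, cy = 0.2 * np.sin(0.5 * t), 0.0
        dx, dy = p[:, 0] - cx, p[:, 1] - cy
        return np.column_stack([-dy, dx])

    rng = np.random.default_rng(2)
    n_pathlets, trail_len, dt = 200, 12, 0.02
    pos = rng.uniform(-1, 1, size=(n_pathlets, 2))
    trails = np.repeat(pos[:, None, :], trail_len, axis=1)  # (n, trail, 2)

    t = 0.0
    for step in range(500):
        pos = pos + dt * velocity(pos, t)   # Euler step along the pathline
        trails = np.roll(trails, 1, axis=1)
        trails[:, 0] = pos                  # newest point at the head of the trail
        # Re-seed pathlets that leave the domain (stand-in for intelligent
        # time-dependent seeding and removal).
        out = np.any(np.abs(pos) > 1.2, axis=1)
        pos[out] = rng.uniform(-1, 1, size=(out.sum(), 2))
        trails[out] = pos[out][:, None, :]
        t += dt

    print("final head positions:", trails[:3, 0])
    ```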

  17. An agent architecture for an integrated forest ecosystem management decision support system

    Treesearch

    Donald Nute; Walter D. Potter; Mayukh Dass; Astrid Glende; Frederick Maier; Hajime Uchiyama; Jin Wang; Mark Twery; Peter Knopp; Scott Thomasma; H. Michael Rauscher

    2003-01-01

    A wide variety of software tools are available to support decisions in the management of forest ecosystems. These tools include databases, growth and yield models, wildlife models, silvicultural expert systems, financial models, geographic information systems, and visualization tools. Typically, each of these tools has its own complex interface and data format. To...

  18. Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Sims, Michael; Kunz, Clayton; Lees, David; Bowman, Judd

    2005-01-01

    Modern NASA planetary exploration missions employ complex systems of hardware and software managed by large teams of engineers and scientists in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover (MER) mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.

  19. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  1. Wind Turbine Blade CAD Models Used as Scaffolding Technique to Teach Design Engineers

    ERIC Educational Resources Information Center

    Irwin, John

    2013-01-01

    The Siemens PLM CAD software NX is commonly used for designing mechanical systems, and in complex systems such as the emerging area of wind power, the ability to have a model controlled by design parameters is a certain advantage. Formula driven expressions based on the amount of available wind in an area can drive the amount of effective surface…

  2. Models and Frameworks: A Synergistic Association for Developing Component-Based Applications

    PubMed Central

    Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability, both of designs and of component-based implementations. This paper, which is based on the model-driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by the automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications at a higher level of abstraction than objects with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use an existing framework and an ad hoc framework are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858

  3. Models and frameworks: a synergistic association for developing component-based applications.

    PubMed

    Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara

    2014-01-01

    The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability, both of designs and of component-based implementations. This paper, which is based on the model-driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by the automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications at a higher level of abstraction than objects with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use an existing framework and an ad hoc framework are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.

  4. Iterating between Tools to Create and Edit Visualizations.

    PubMed

    Bigelow, Alex; Drucker, Steven; Fisher, Danyel; Meyer, Miriah

    2017-01-01

    A common workflow for visualization designers begins with a generative tool, like D3 or Processing, to create the initial visualization; and proceeds to a drawing tool, like Adobe Illustrator or Inkscape, for editing and cleaning. Unfortunately, this is typically a one-way process: once a visualization is exported from the generative tool into a drawing tool, it is difficult to make further, data-driven changes. In this paper, we propose a bridge model to allow designers to bring their work back from the drawing tool to re-edit in the generative tool. Our key insight is to recast this iteration challenge as a merge problem - similar to when two people are editing a document and changes between them need to be reconciled. We also present a specific instantiation of this model, a tool called Hanpuku, which bridges between D3 scripts and Illustrator. We show several examples of visualizations that are iteratively created using Hanpuku in order to illustrate the flexibility of the approach. We further describe several hypothetical tools that bridge between other visualization tools to emphasize the generality of the model.

  5. Atrioventricular junction (AVJ) motion tracking: a software tool with ITK/VTK/Qt.

    PubMed

    Pengdong Xiao; Shuang Leng; Xiaodan Zhao; Hua Zou; Ru San Tan; Wong, Philip; Liang Zhong

    2016-08-01

    Quantitative measurement of atrioventricular junction (AVJ) motion is an important index of ventricular function over the cardiac cycle, including systole and diastole. In this paper, a software tool that can conduct AVJ motion tracking from cardiovascular magnetic resonance (CMR) images is presented, built using the Insight Segmentation and Registration Toolkit (ITK), the Visualization Toolkit (VTK), and Qt. The software tool is written in C++ using the Visual Studio Community 2013 integrated development environment (IDE), which contains both an editor and a Microsoft compiler. The software package has been successfully implemented. From the software engineering practice, it is concluded that ITK, VTK, and Qt are very handy software systems for implementing automatic image analysis functions for CMR images, such as the quantitative measurement of motion by visual tracking.

  6. Parmodel: a web server for automated comparative modeling of proteins.

    PubMed

    Uchôa, Hugo Brandão; Jorge, Guilherme Eberhart; Freitas Da Silveira, Nelson José; Camera, João Carlos; Canduri, Fernanda; De Azevedo, Walter Filgueira

    2004-12-24

    Parmodel is a web server for automated comparative modeling and evaluation of protein structures. The aim of this tool is to help inexperienced users to perform modeling, assessment, visualization, and optimization of protein models, as well as to help crystallographers evaluate structures solved experimentally. It is subdivided into four modules: Parmodel Modeling, Parmodel Assessment, Parmodel Visualization, and Parmodel Optimization. The main module is Parmodel Modeling, which allows the building of several models for the same protein in a reduced time through the distribution of modeling processes on a Beowulf cluster. Parmodel automates and integrates the main software used in comparative modeling, such as MODELLER, Whatcheck, Procheck, Raster3D, Molscript, and Gromacs. This web server is freely accessible at .
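
    Behind a server like Parmodel, each worker node typically runs a short MODELLER script. A sketch of what a single comparative-modeling job commonly looks like in MODELLER's Python API; the alignment file, template code, and sequence name are placeholders, and the exact API varies between MODELLER versions:

    ```python
    from modeller import environ
    from modeller.automodel import automodel

    env = environ()
    env.io.atom_files_directory = ['.']           # where template PDB files live

    a = automodel(env,
                  alnfile='target_template.ali',  # target/template alignment (placeholder)
                  knowns='template_pdb',          # template structure code (placeholder)
                  sequence='target_seq')          # target sequence name (placeholder)
    a.starting_model = 1
    a.ending_model = 10                           # build 10 candidate models
    a.make()

    # Rank the candidates by MODELLER's objective function; the best-scoring
    # models would then feed the assessment/visualization modules.
    ok = [m for m in a.outputs if m['failure'] is None]
    ok.sort(key=lambda m: m['molpdf'])
    print('best model:', ok[0]['name'])
    ```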

  7. A versatile system for processing geostationary satellite data with run-time visualization capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landsfeld, M.; Gautier, C.; Figel, T.

    1995-01-01

    To better predict global climate change, scientists are developing climate models that require interdisciplinary and collaborative efforts in their building. The authors are currently involved in several such projects but will briefly discuss activities in support of two complementary projects: the Atmospheric Radiation Measurement (ARM) program of the Department of Energy and Sequoia 2000, a joint venture of the University of California, the private sector, and government. The authors' contribution to the ARM program is to investigate the role of clouds in the top-of-the-atmosphere and surface radiance fields through the data analysis of surface and satellite observations and complex modeling of the interaction of radiation with clouds. One of the first ARM research activities involves the computation of the broadband shortwave surface irradiance from satellite observations. Geostationary satellite images centered over the first ARM observation site are received hourly over the Internet and processed in real time to compute hourly and daily composite shortwave irradiance fields. The images and the results are transferred via a high-speed network to the Sequoia 2000 storage facility in Berkeley, where they are archived. These satellite-derived results are compared with the surface observations to evaluate the accuracy of the satellite estimate and the spatial representation of the surface observations. In developing the software involved in calculating the surface shortwave irradiance, the authors have produced an environment whereby they can easily modify and monitor the data processing as required. Through the principles of modular programming, they have developed software that is easily modified as new algorithms for computation are developed or input data availability changes. In addition, the software was designed so that it could be run from an interactive, icon-driven graphical interface, TCL-TK, developed by Sequoia 2000 participants.

  8. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.

    PubMed

    Häkkinen, Suvi; Rinne, Teemu

    2018-06-01

    A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination (i.e. no directed auditory attention), pitch discrimination, and two versions of pitch n-back memory tasks. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.

  9. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  10. Model-Driven Development of Safety Architectures

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Pai, Ganesh; Whiteside, Iain

    2017-01-01

    We describe the use of model-driven development for safety assurance of a pioneering NASA flight operation involving a fleet of small unmanned aircraft systems (sUAS) flying beyond visual line of sight. The central idea is to develop a safety architecture that provides the basis for risk assessment and visualization within a safety case, the formal justification of acceptable safety required by the aviation regulatory authority. A safety architecture is composed from a collection of bow tie diagrams (BTDs), a practical approach to manage safety risk by linking the identified hazards to the appropriate mitigation measures. The safety justification for a given unmanned aircraft system (UAS) operation can have many related BTDs. In practice, however, each BTD is independently developed, which poses challenges with respect to incremental development, maintaining consistency across different safety artifacts when changes occur, and in extracting and presenting stakeholder specific information relevant for decision making. We show how a safety architecture reconciles the various BTDs of a system, and, collectively, provide an overarching picture of system safety, by considering them as views of a unified model. We also show how it enables model-driven development of BTDs, replete with validations, transformations, and a range of views. Our approach, which we have implemented in our toolset, AdvoCATE, is illustrated with a running example drawn from a real UAS safety case. The models and some of the innovations described here were instrumental in successfully obtaining regulatory flight approval.

  11. MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness

    DTIC Science & Technology

    2005-11-01

    methods. But historically, as arts evolve, these how-to methods become systematized and codified (e.g., the development and refinement of color theory)... schema (as necessary); 3. Draw inferences from new knowledge to support the decision-making process... Visual language theory suggests that humans process... informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of

  12. Multiresolution Algorithms for Processing Giga-Models: Real-time Visualization, Reasoning, and Interaction

    DTIC Science & Technology

    2012-04-23

    Interactive Virtual Hair Salon, Presence (May 2007): 237. Theodore Kim, Jason Sewall, Avneesh Sud, Ming Lin, Fast... Motion in Games, Utrecht, Netherlands, Nov. 2009. Keynote Speaker, IADIS International Conference on Computer Graphics and Visualization, Portugal, June 2009. Keynote Speaker, ACM Symposium on Virtual Reality Software and Technology, Bordeaux, France, October 2008. Invited Speaker, Motion in Games, Utrecht

  13. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with the application of modeling and simulation tools to the optimization of business processes, especially the optimization of signal flow in a security company. Simul8 software was selected as the modeling tool; it supports process modeling based on discrete-event simulation and enables the creation of visual models of production and distribution processes.
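
    The discrete-event paradigm underlying tools like Simul8 can be illustrated in a few lines: events sit in a time-ordered queue and are processed in order, each possibly scheduling further work. A minimal single-server queue in Python; the arrival and service rates are arbitrary demonstration values, not drawn from the paper's case study:

    ```python
    import heapq
    import random

    random.seed(42)
    events = []          # (time, kind) priority queue ordered by event time
    busy_until = 0.0
    waits = []

    # Schedule Poisson arrivals over an 8-hour shift (times in minutes).
    t = 0.0
    while t < 480.0:
        t += random.expovariate(1 / 5.0)        # mean 5 min between arrivals
        heapq.heappush(events, (t, "arrival"))

    # Process events strictly in time order.
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            start = max(now, busy_until)            # wait if the server is busy
            service = random.expovariate(1 / 4.0)   # mean 4 min service
            busy_until = start + service
            waits.append(start - now)

    print(f"{len(waits)} customers, mean wait {sum(waits) / len(waits):.2f} min")
    ```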

  14. Physiologically based pharmacokinetic modeling using microsoft excel and visual basic for applications.

    PubMed

    Marino, Dale J

    2005-01-01

    Physiologically based pharmacokinetic (PBPK) models are mathematical descriptions depicting the relationship between external exposure and internal dose. These models have found great utility for interspecies extrapolation. However, specialized computer software packages, which are not widely distributed, have typically been used for model development and utilization. A few physiological models have been reported using more widely available software packages (e.g., Microsoft Excel), but these tend to include less complex processes and dose metrics. To ascertain the capability of Microsoft Excel and Visual Basic for Applications (VBA) for PBPK modeling, models for styrene, vinyl chloride, and methylene chloride were coded in Advanced Continuous Simulation Language (ACSL), Excel, and VBA, and simulation results were compared. For styrene, differences between ACSL and Excel or VBA compartment concentrations and rates of change were less than +/-7.5E-10 using the same numerical integration technique and time step. Differences using the VBA fixed-step or ACSL Gear's methods were generally <1.00E-03, although larger differences involving very small values were noted after exposure transitions. For vinyl chloride and methylene chloride, Excel and VBA PBPK model dose metrics differed by no more than -0.013% or -0.23%, respectively, from ACSL results. These differences are likely attributable to different step sizes rather than different numerical integration techniques. These results indicate that Microsoft Excel and VBA can be useful tools for utilizing PBPK models, and given the availability of these software programs, it is hoped that this effort will help facilitate the use and investigation of PBPK modeling.
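
    The heart of any spreadsheet or VBA implementation of such models is fixed-step numerical integration of compartment mass-balance ODEs. A deliberately simple one-compartment sketch in Python (parameter values invented) shows the mechanism, and why step size rather than language drives small cross-platform differences like those reported above:

    ```python
    import math

    def one_compartment(dose_mg, ke_per_h, vd_l, dt_h, t_end_h):
        """Fixed-step Euler integration of dC/dt = -ke * C after an IV bolus."""
        c = dose_mg / vd_l              # initial concentration [mg/L]
        t, series = 0.0, [(0.0, c)]
        while t < t_end_h:
            c += dt_h * (-ke_per_h * c)  # Euler update of the mass balance
            t += dt_h
            series.append((t, c))
        return series

    # Same model, two step sizes: the small discrepancy mirrors the
    # step-size-driven differences reported between ACSL and Excel/VBA.
    coarse = one_compartment(100.0, 0.2, 40.0, dt_h=0.1, t_end_h=24.0)
    fine   = one_compartment(100.0, 0.2, 40.0, dt_h=0.001, t_end_h=24.0)
    print("C(24 h) coarse:", coarse[-1][1])
    print("C(24 h) fine:  ", fine[-1][1])
    print("exact:         ", (100.0 / 40.0) * math.exp(-0.2 * 24.0))
    ```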

  15. Modeling and visual simulation of Microalgae photobioreactor

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Hou, Dapeng; Hu, Dawei

    Microalgae are nutritious autotrophic organisms with high photosynthetic efficiency, widely distributed on land and in the sea. They can be used extensively in medicine, food, aerospace, biotechnology, environmental protection, and other fields. Photobioreactors are the principal equipment used to cultivate microalgae at large scale and high density. In this paper, based on a mathematical model of microalgae grown under different light intensities, a three-dimensional visualization model was built and implemented in 3ds Max, Virtools, and other three-dimensional software. Microalgae are photosynthetic organisms that efficiently produce oxygen and absorb carbon dioxide. The goal of the visual simulation is to display these changes and their impact on oxygen and carbon dioxide intuitively. In this paper, different temperatures and light intensities were selected to control the photobioreactor, and the dynamic change of microalgal biomass, oxygen, and carbon dioxide was observed, with the aim of providing visualization support for microalgal and photobioreactor research.
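
    The abstract does not give the growth model's exact form, but light-dependent microalgal growth is commonly written as a Monod-type dependence of the specific growth rate on irradiance. A sketch of that common formulation (all parameter values are invented for illustration):

    ```python
    def simulate_growth(x0, mu_max, k_i, light, hours, dt=0.1):
        """Monod-type light-limited growth: dX/dt = mu_max * I / (K_I + I) * X.
        x0: initial biomass [g/L]; light: irradiance I [umol/m^2/s]."""
        mu = mu_max * light / (k_i + light)   # specific growth rate [1/h]
        x, t, out = x0, 0.0, []
        while t <= hours:
            out.append((t, x))
            x += dt * mu * x                  # Euler step of the growth ODE
            t += dt
        return out

    # Compare biomass trajectories at two light intensities, analogous to the
    # light/temperature scenarios driving the visual simulation.
    for intensity in (50.0, 200.0):
        final = simulate_growth(0.1, mu_max=0.06, k_i=100.0,
                                light=intensity, hours=96)[-1]
        print(f"I = {intensity:5.0f}: biomass after {final[0]:.0f} h = {final[1]:.3f} g/L")
    ```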

  16. UWB Tracking Software Development

    NASA Technical Reports Server (NTRS)

    Gross, Julia; Arndt, Dickey; Ngo, Phong; Phan, Chau; Dusl, John; Ni, Jianjun; Rafford, Melinda

    2006-01-01

    An Ultra-Wideband (UWB) two-cluster Angle of Arrival (AOA) tracking prototype system is currently being developed and tested at NASA Johnson Space Center for space exploration applications. This talk discusses the software development efforts for this UWB two-cluster AOA tracking system. The role the software plays in this system is to take waveform data from two UWB radio receivers as an input, feed this input into an AOA tracking algorithm, and generate the target position as an output. The architecture of the software (Input/Output Interface and Algorithm Core) will be introduced in this talk. The development of this software has three phases. In Phase I, the software is mostly Matlab driven and calls C++ socket functions to provide the communication links to the radios. This is beneficial in the early stage when it is necessary to frequently test changes in the algorithm. Phase II of the development is to have the software mostly C++ driven and call a Matlab function for the AOA tracking algorithm. This is beneficial in order to send the tracking results to other systems and also to improve the tracking update rate of the system. The third phase is part of future work and is to have the software completely C++ driven with a graphical user interface. This software design enables the fine-resolution tracking of the UWB two-cluster AOA tracking system.
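
    The geometry at the core of a two-cluster AOA tracker is simple triangulation: each receiver cluster measures a bearing to the target, and the target lies at the intersection of the two bearing lines. A 2D sketch of that step (cluster positions and angles are made-up numbers, and this is not the NASA algorithm itself):

    ```python
    import math
    import numpy as np

    def locate(p1, theta1, p2, theta2):
        """Intersect two bearing rays. p_i: cluster position (x, y);
        theta_i: measured angle of arrival [rad] from the +x axis."""
        d1 = np.array([math.cos(theta1), math.sin(theta1)])
        d2 = np.array([math.cos(theta2), math.sin(theta2)])
        # Solve p1 + s*d1 = p2 + t*d2 for the ray parameters s, t.
        A = np.column_stack([d1, -d2])
        s, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
        return np.asarray(p1, float) + s * d1

    # Two receiver clusters 10 m apart; the target is truly at (6, 8).
    target = locate((0.0, 0.0), math.atan2(8, 6),
                    (10.0, 0.0), math.atan2(8, -4))
    print("estimated target position:", target)   # recovers (6, 8)
    ```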

  17. Bigdata Driven Cloud Security: A Survey

    NASA Astrophysics Data System (ADS)

    Raja, K.; Hanifa, Sabibullah Mohamed

    2017-08-01

    Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front end, which includes the users' computers and the software required to access the cloud network, and a back end, which consists of the various computers, servers, and database systems that create the cloud. Through SaaS (Software-as-a-Service: end users utilize outsourced software), PaaS (Platform-as-a-Service: the platform is provided), IaaS (Infrastructure-as-a-Service: the physical environment is outsourced), and DaaS (Database-as-a-Service: data can be housed within a cloud), the traditional cloud ecosystem that delivers these services has become a powerful and popular architecture. Security threats, however, remain the most vital barrier for the cloud computing environment. The main barrier to the adoption of CC in health care relates to data security. When placing and transmitting data over public networks, cyber attacks in any form are anticipated in CC. Hence, cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC, since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.

  18. Experimental investigation of nozzle/plume aerodynamics at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Bogdanoff, David W.; Cambier, Jean-Luc

    1993-01-01

    Work continued on the improvement of the 16-Inch Shock Tunnel. This comprised studies of ways of improving driver gas ignition, an improved driver gas mixing system, an axial translation system for the driver tube, improved diaphragm materials (carbon steel vs. stainless steel), a copper liner for the part of the driven tube near the nozzle, the use of a buffer gas between the driver and driven gases, the use of N2O in the driven tube, the use of a converging driven tube, operation of the facility as a non-reflected shock tunnel and expansion tube, operation with heated hydrogen or helium driver gas, the use of detonations in the driver, and the construction of an enlarged test section. Maintenance and developmental work on the scramjet combustor continued. New software which greatly speeds up data analysis has been written and brought on line. In particular, software which provides very rapid generation of model surface heat flux profiles has been brought on line. A considerable amount of theoretical work was performed in connection with upgrading the 16-Inch Shock Tunnel facility. A one-dimensional Godunov code for very high velocities and any equation of state is intended to add viscous effects in studying the operation of the shock tunnel and also of two-stage light gas guns.

  19. Neural Pathways Conveying Novisual Information to the Visual Cortex

    PubMed Central

    2013-01-01

    The visual cortex has been traditionally considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to non-visual stimuli, especially in individuals with congenital visual deprivation, indicating the supramodal nature of functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of non-visual signals in the visual cortex, we first show the supramodal nature of the visual cortex. We then review how non-visual signals reach the visual cortex. Moreover, we discuss whether these non-visual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of non-visual signals is also discussed. PMID:23840972

  20. The Cloud-Based Integrated Data Viewer (IDV)

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2015-04-01

    Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While there is a suite of tools and methodologies used in traditional software engineering environments to mitigate this issue, they are typically ignored by developers lacking a background in software engineering. The result is a large body of software which is simultaneously critical and difficult to maintain. Visualization software is particularly vulnerable to this problem, given its inherent dependency on particular graphics hardware and software APIs. The advent of cloud computing has provided a solution to this problem that was not previously practical on a large scale: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little to no re-engineering required. Through application streaming we are able to bring the same visualization to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved. We will also discuss the differences between local software and software-as-a-service.

  1. Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods.

    PubMed

    Martínez, María Jimena; Ponzoni, Ignacio; Díaz, Mónica F; Vazquez, Gustavo E; Soto, Axel J

    2015-01-01

    The design of QSAR/QSPR models is a challenging problem, where the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, whereas chemical knowledge is left out of the analysis. For this reason, the interpretability and generality of the QSAR/QSPR models obtained by these feature selection methods are drastically affected. Therefore, an approach for integrating domain expert knowledge in the selection process is needed to increase confidence in the final set of descriptors. In this paper we propose a software tool, named Visual and Interactive DEscriptor ANalysis (VIDEAN), that combines statistical methods with interactive visualizations for choosing a set of descriptors for predicting a target property. Domain expertise can be added to the feature selection process by means of an interactive visual exploration of data, aided by statistical tools and metrics based on information theory. Coordinated visual representations are presented for capturing different relationships and interactions among descriptors, target properties, and candidate subsets of descriptors. The competencies of the proposed software were assessed through different scenarios. These scenarios reveal how an expert can use this tool to choose one subset of descriptors from a group of candidate subsets, or how to modify existing descriptor subsets and even incorporate new descriptors according to his or her own knowledge of the target property. The reported experiences showed the suitability of our software for selecting sets of descriptors with low cardinality, high interpretability, low redundancy, and high statistical performance in a visual exploratory way. Therefore, it is possible to conclude that the resulting tool allows the integration of a chemist's expertise in the descriptor selection process with low cognitive effort, in contrast with the alternative of an ad hoc manual analysis of the selected descriptors. Graphical abstract: VIDEAN allows the visual analysis of candidate subsets of descriptors for QSAR/QSPR. In the two panels on the top, users can interactively explore numerical correlations as well as co-occurrences in the candidate subsets through two interactive graphs.
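
    One information-theoretic aid such a tool can offer is the mutual information between each candidate descriptor and the target property, which captures nonlinear association that plain correlation misses. A histogram-based NumPy sketch (bin count and synthetic data are illustrative, not VIDEAN's actual metric set):

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of mutual information (in bits) between 1-D variables."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                       # avoid log(0) on empty cells
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(3)
    x = rng.normal(size=5000)
    y_linear = 0.8 * x + rng.normal(scale=0.6, size=5000)
    y_nonlin = x**2 + rng.normal(scale=0.6, size=5000)   # near-zero correlation

    print("corr / MI (linear)   :",
          np.corrcoef(x, y_linear)[0, 1], mutual_information(x, y_linear))
    print("corr / MI (nonlinear):",
          np.corrcoef(x, y_nonlin)[0, 1], mutual_information(x, y_nonlin))
    ```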

  2. Advanced software development workstation. Comparison of two object-oriented development methodologies

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    This report is an attempt to clarify some of the concerns raised about the OMT method, specifically that OMT is weaker than the Booch method in a few key areas. This interim report specifically addresses the following issues: (1) is OMT object-oriented or only data-driven?; (2) can OMT be used as a front-end to implementation in C++?; (3) the inheritance concept in OMT is in contradiction with the 'pure and real' inheritance concept found in object-oriented (OO) design; (4) low support for software life-cycle issues, for project and risk management; (5) uselessness of functional modeling for the ROSE project; and (6) problems with event-driven and simulation systems. The conclusion of this report is that both Booch's method and Rumbaugh's method are good OO methods, each with strengths and weaknesses in different areas of the development process.

  3. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interfaces is proposed. The system can be driven independently by video, text, or speech, and can thus interact with humans through diverse interfaces. A combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of the 3-D facial animation. An online appearance model is used to track 3-D facial motion from video within a particle-filtering framework, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.

  4. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search-based approach for visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know the controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  5. Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.

    PubMed

    Veeraraghavan, Harini; Miller, James V

    2014-04-01

    In this work, we present a faceted-search-based approach for visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know the controlled terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset.
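
    The drill-down idea is easy to see in miniature. Below is a hypothetical sketch, not the authors' implementation: a free-text query is mapped to a controlled term through a synonym table and then expanded down a toy part-of hierarchy, producing the facets a 3D atlas viewer would render. The mini-ontology is an invented stand-in for FMA content.

        # Toy ontology: controlled terms and a part-of hierarchy (hypothetical).
        PART_OF = {
            "Brain": ["Cerebrum", "Cerebellum", "Brainstem"],
            "Cerebrum": ["Frontal lobe", "Parietal lobe", "Occipital lobe"],
        }
        SYNONYMS = {"forebrain": "Cerebrum", "brain": "Brain"}

        def to_controlled_term(query):
            """Translate a free-text query into the controlled terminology."""
            return SYNONYMS.get(query.strip().lower(), query.strip().title())

        def drill_down(term, depth=1):
            """Yield (level, structure) facets beneath a controlled term."""
            for child in PART_OF.get(term, []):
                yield depth, child
                yield from drill_down(child, depth + 1)

        term = to_controlled_term("forebrain")   # -> "Cerebrum"
        for level, part in drill_down(term):
            print("  " * level + part)           # facets a viewer could render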

  6. An open-source software platform for data management, visualisation, model building and model sharing in water, energy and other resource modelling domains.

    NASA Astrophysics Data System (ADS)

    Knox, S.; Meier, P.; Mohammed, K.; Korteling, B.; Matrosov, E. S.; Hurford, A.; Huskova, I.; Harou, J. J.; Rosenberg, D. E.; Thilmant, A.; Medellin-Azuara, J.; Wicks, J.

    2015-12-01

    Capacity expansion of resource networks is essential for adapting to economic and population growth and to pressures such as climate change. Engineered infrastructure systems such as water, energy, or transport networks require sophisticated and bespoke models to refine management and investment strategies. Successful modeling of such complex systems relies on good data management and advanced methods for visualizing and sharing data. Engineered infrastructure systems are often represented as networks of nodes and links with operating rules describing their interactions. Infrastructure system management and planning can then be abstracted to simulating or optimizing new operations and extensions of the network. By separating the data storage of abstract networks from their manipulation and modeling, we have created a system that facilitates infrastructure modeling across various domains. We introduce Hydra Platform, free open-source software designed for analysts and modelers to store, manage and share network topology and data. Hydra Platform is a Python library with a web service layer to which remote applications, called Apps, connect. Apps serve various functions including network or results visualization, data export (e.g., into a proprietary format) or model execution. This client-server architecture allows users to manipulate and share centrally stored data. XML templates allow a standardised description of the data structure required for storing network data such that it is compatible with specific models. Hydra Platform represents networks in an abstract way and is therefore not bound to a single modeling domain; it is the Apps that provide domain-specific functionality. Using Apps, researchers from different domains can incorporate different models within the same network, enabling cross-disciplinary modeling while minimizing errors and streamlining data sharing. Separating the Python library from the web layer allows developers to extend the software natively or build web-based Apps in other languages for remote functionality. Partner CH2M is developing a commercial user interface for Hydra Platform; however, custom interfaces and visualization tools can also be built. Hydra Platform is available on GitHub, while Apps will be shared in a central repository.
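
    A minimal sketch of the abstract node-link representation at the heart of this design (class and attribute names here are hypothetical, not Hydra Platform's actual API):

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            attributes: dict = field(default_factory=dict)

        @dataclass
        class Link:
            start: str
            end: str
            attributes: dict = field(default_factory=dict)

        @dataclass
        class Network:
            nodes: list
            links: list

            def neighbours(self, name):
                return [link.end for link in self.links if link.start == name]

        # The same structure can carry a water or an energy system; only the
        # attribute templates differ, which is what the XML templates enforce.
        net = Network(
            nodes=[Node("reservoir", {"capacity": 90.0}), Node("city", {"demand": 5.0})],
            links=[Link("reservoir", "city", {"max_flow": 7.5})],
        )
        print(net.neighbours("reservoir"))   # ['city']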

  7. Working memory-driven attention improves spatial resolution: Support for perceptual enhancement.

    PubMed

    Pan, Yi; Luo, Qianying; Cheng, Min

    2016-08-01

    Previous research has indicated that attention can be biased toward those stimuli matching the contents of working memory and thereby facilitate visual processing at the location of the memory-matching stimuli. However, whether this working memory-driven attentional modulation takes place on early perceptual processes remains unclear. Our present results showed that working memory-driven attention improved identification of a brief Landolt target presented alone in the visual field. Because the suprathreshold target appeared without any external noise added (i.e., no distractors or masks), the results suggest that working memory-driven attention enhances the target signal at early perceptual stages of visual processing. Furthermore, given that performance in the Landolt target identification task indexes spatial resolution, this attentional facilitation indicates that working memory-driven attention can boost early perceptual processing via enhancement of spatial resolution at the attended location.

  8. Center for Advanced Modeling and Simulation Intern

    ScienceCinema

    Gertman, Vanessa

    2017-12-13

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  9. Center for Advanced Modeling and Simulation Intern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gertman, Vanessa

    Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.

  10. Model-Driven Development of Interactive Multimedia Applications with MML

    NASA Astrophysics Data System (ADS)

    Pleuss, Andreas; Hussmann, Heinrich

    There is an increasing demand for high-quality interactive applications which combine complex application logic with a sophisticated user interface, making use of individual media objects like graphics, animations, 3D graphics, audio or video. Their development is still challenging as it requires the integration of software design, user interface design, and media design.

  11. Combining Domain-driven Design and Mashups for Service Development

    NASA Astrophysics Data System (ADS)

    Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni

    This chapter presents the Romulus project approach to Service Development using Java-based web technologies. Romulus aims at improving productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates the application generation from the domain model. This metaframework follows an object centric approach, and complements Domain Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario of the Mobile Phone Service Portability case study for the development of a new service.

  12. Generating Accurate 3D Models of Architectural Heritage Structures Using Low-Cost Camera and Open Source Algorithms

    NASA Astrophysics Data System (ADS)

    Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.

    2017-05-01

    These studies were conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed: the OSM Bundler, the VisualSFM software, and the web application ARC3D. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even with open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
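
    The filter-and-scale post-processing step can be illustrated in a few lines. This is a hedged sketch using Open3D rather than the tools named above; the file names and the scale factor are placeholders.

        import open3d as o3d

        pcd = o3d.io.read_point_cloud("facade_dense.ply")   # dense SfM output

        # Statistical outlier removal: drop points far from their neighbours.
        pcd, kept = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

        # Image-based reconstructions are scale-free; a distance measured on
        # the monument fixes the metric scale (0.37 is a made-up factor).
        pcd.scale(0.37, center=pcd.get_center())

        o3d.io.write_point_cloud("facade_clean.ply", pcd)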

  13. A Verification-Driven Approach to Traceability and Documentation for Auto-Generated Mathematical Software

    NASA Technical Reports Server (NTRS)

    Denney, Ewen W.; Fischer, Bernd

    2009-01-01

    Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.

  14. Enigma Version 12

    NASA Technical Reports Server (NTRS)

    Shores, David; Goza, Sharon P.; McKeegan, Cheyenne; Easley, Rick; Way, Janet; Everett, Shonn; Guerra, Mark; Kraesig, Ray; Leu, William

    2013-01-01

    Enigma Version 12 software combines model building, animation, and engineering visualization into one concise software package. Enigma employs a versatile user interface to allow average users access to even the most complex pieces of the application. Using Enigma eliminates the need to buy and learn several software packages to create an engineering visualization. Models can be created and/or modified within Enigma down to the polygon level. Textures and materials can be applied for additional realism. Within Enigma, these models can be combined to create systems of models that have a hierarchical relationship to one another, such as a robotic arm. These systems can then be animated within the program or controlled by an external application programming interface (API). In addition, Enigma provides the ability to use plug-ins, which allow the user to create custom code for a specific application and access the Enigma model and system data while still using the Enigma drawing functionality. CAD files can be imported into Enigma and combined to create systems of computer graphics models that can be manipulated with constraints. An API is available so that an engineer can write a simulation and drive the computer graphics models with no knowledge of computer graphics. An animation editor allows an engineer to set up sequences of animations generated by simulations or by conceptual trajectories in order to record these to high-quality media for presentation. Commercially, because it is so generic, Enigma can be used for almost any project that requires engineering visualization, model building, or animation; models in Enigma can be exported to many other formats for use in other applications as well. Educationally, Enigma is being used to allow university students to visualize robotic algorithms in a simulation mode before using them with actual hardware.

    Planetary Protection Bioburden Analysis Program (NASA's Jet Propulsion Laboratory, Pasadena, California): This program is a Microsoft Access program that performs statistical analysis of the colony counts from assays performed on the Mars Science Laboratory (MSL) spacecraft to determine the bioburden density, 3-sigma biodensity, and the total bioburdens required for the MSL prelaunch reports. It also contains numerous tools that report the data in various ways to simplify the required reports. The program performs all the calculations directly in the MS Access program; prior to this development, the data was exported to large Excel files that had to be cut and pasted to provide the desired results. The program contains a main menu and a number of submenus, and analyses can be performed using either all the assays or only the accountable assays that will be used in the final analysis.

    Micrometeoroid and Orbital Debris (MMOD) Shield Ballistic Limit Analysis Program (Lyndon B. Johnson Space Center, Houston, Texas): This software implements penetration limit equations for common micrometeoroid and orbital debris (MMOD) shield configurations, windows, and thermal protection systems. Allowable MMOD risk is formulated in terms of the probability of penetration (PNP) of the spacecraft pressure hull. For calculating the risk, spacecraft geometry models, mission profiles, debris environment models, and penetration limit equations for installed shielding configurations are required. Risk assessment software such as NASA's BUMPERII is used to calculate mission PNP; however, it is unsuitable for use in shield design and preliminary analysis studies. The software defines a single equation for the design and performance evaluation of common MMOD shielding configurations, windows, and thermal protection systems, along with a description of their validity range and guidelines for their application. Recommendations are based on preliminary reviews of fundamental assumptions and accuracy in predicting experimental impact test results. The software is programmed in Visual Basic for Applications for installation as a simple add-in for Microsoft Excel. The user is directed to a graphical user interface (GUI) that requires user inputs and provides solutions directly in Microsoft Excel workbooks. This work was done by Shannon Ryan of the USRA Lunar and Planetary Institute for Johnson Space Center.

  15. ZOOM Lite: next-generation sequencing data mapping and visualization software

    PubMed Central

    Zhang, Zefeng; Lin, Hao; Ma, Bin

    2010-01-01

    High-throughput next-generation sequencing technologies pose increasing demands on the efficiency, accuracy and usability of data analysis software. In this article, we present ZOOM Lite, software for efficient read mapping and result visualization. With a kernel capable of mapping tens of millions of Illumina or AB SOLiD sequencing reads efficiently and accurately, and an intuitive graphical user interface, ZOOM Lite integrates read mapping and result visualization into an easy-to-use pipeline on a desktop PC. The software handles both single-end and paired-end reads, and can output either the unique mapping result or the top N mapping results for each read. Additionally, the software accepts a variety of input file formats and outputs to several commonly used result formats. The software is freely available at http://bioinfor.com/zoom/lite/. PMID:20530531
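
    As a toy illustration of the seed-and-lookup indexing that underlies short-read mappers of this kind (not ZOOM Lite's actual kernel, which also handles spaced seeds, mismatches and paired ends):

        from collections import defaultdict

        def build_index(reference, k=4):
            """Hash every k-mer of the reference to its start positions."""
            index = defaultdict(list)
            for i in range(len(reference) - k + 1):
                index[reference[i:i + k]].append(i)
            return index

        ref = "ACGTACGTTAGC"
        idx = build_index(ref)

        read = "ACGTTAGC"
        candidates = idx.get(read[:4], [])                 # seed lookup: [0, 4]
        hits = [p for p in candidates if ref[p:p + len(read)] == read]
        print(hits)                                        # [4]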

  16. SPICE Module for the Satellite Orbit Analysis Program (SOAP)

    NASA Technical Reports Server (NTRS)

    Coggi, John; Carnright, Robert; Hildebrand, Claude

    2008-01-01

    A SPICE module for the Satellite Orbit Analysis Program (SOAP) precisely represents complex motion and maneuvers in an interactive, 3D animated environment with support for user-defined quantitative outputs. (SPICE stands for Spacecraft, Planet, Instrument, Camera-matrix, and Events.) This module enables the SOAP software to exploit NASA mission ephemerides represented in the JPL Navigation and Ancillary Information Facility (NAIF) SPICE formats. Ephemeris types supported include position, velocity, and orientation for spacecraft and planetary bodies including the Sun, planets, natural satellites, comets, and asteroids. Entire missions can now be imported into SOAP for 3D visualization, playback, and analysis. The SOAP analysis and display features can now leverage detailed mission files to offer the analyst both a numerically correct and aesthetically pleasing combination of results that can be varied to study many hypothetical scenarios. The software provides a modeling and simulation environment that can encompass a broad variety of problems using orbital prediction, including ground coverage analysis, communications analysis, power and thermal analysis, and 3D visualization that provides the user with insight into complex geometric relations. The SOAP SPICE module allows distributed science and engineering teams to share common mission models of known pedigree, which greatly reduces duplication of effort and the potential for error. The use of the software spans all phases of the space system lifecycle, from the study of future concepts to operations and anomaly analysis. It allows SOAP to correctly position and orient all of the principal bodies of the Solar System within a single simulation session, along with multiple spacecraft trajectories and the orientation of mission payloads. In addition to the 3D visualization, the user can define numeric variables and x-y plots to quantitatively assess metrics of interest.
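
    The kind of SPICE lookup the module builds on can be shown with NAIF's Python wrapper, spiceypy. The meta-kernel name and the epoch below are placeholders; real use requires the mission kernels to be downloaded first.

        import spiceypy as spice

        spice.furnsh("mission_meta.tm")              # load ephemeris kernels
        et = spice.str2et("2008-06-01T12:00:00")     # UTC -> ephemeris time

        # Position of Mars relative to Earth in the J2000 frame, corrected
        # for light time and stellar aberration.
        pos, light_time = spice.spkpos("MARS", et, "J2000", "LT+S", "EARTH")
        print(pos)                                   # km, (x, y, z)
        spice.kclear()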

  17. NASA Data Acquisition System Software Development for Rocket Propulsion Test Facilities

    NASA Technical Reports Server (NTRS)

    Herbert, Phillip W., Sr.; Elliot, Alex C.; Graves, Andrew R.

    2015-01-01

    Current NASA propulsion test facilities include Stennis Space Center in Mississippi, Marshall Space Flight Center in Alabama, Plum Brook Station in Ohio, and White Sands Test Facility in New Mexico. Within and across these centers, a diverse set of data acquisition systems exist with different hardware and software platforms. The NASA Data Acquisition System (NDAS) is a software suite designed to operate and control many critical aspects of rocket engine testing. The software suite combines real-time data visualization, data recording to a variety of formats, short-term and long-term acquisition system calibration capabilities, test stand configuration control, and a variety of data post-processing capabilities. Additionally, data stream conversion functions exist to translate test facility data streams to and from downstream systems, including engine customer systems. The primary design goals for NDAS are flexibility, extensibility, and modularity. Providing a common user interface for a variety of hardware platforms helps drive consistency and error reduction during testing. In addition, with an understanding that test facilities have different requirements and setups, the software is designed to be modular. One engine program may require real-time displays and data recording; others may require more complex data stream conversion, measurement filtering, or test stand configuration management. The NDAS suite allows test facilities to choose which components to use based on their specific needs. The NDAS code is primarily written in LabVIEW, a graphical, data-flow-driven language. Although LabVIEW is a general-purpose programming language, large-scale software development in it is relatively rare compared to more commonly used languages. The NDAS software suite also makes extensive use of a new, advanced development framework called the Actor Framework, which provides a level of code reuse and extensibility that has previously been difficult to achieve using LabVIEW.

  18. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    PubMed

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press.

  19. The evolution of meaning: spatio-temporal dynamics of visual object recognition.

    PubMed

    Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K

    2011-08-01

    Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.

  20. Web-GIS platform for monitoring and forecasting of regional climate and ecological changes

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Krupchatnikov, V. N.; Lykosov, V. N.; Okladnikov, I.; Titov, A. G.; Shulgina, T. M.

    2012-12-01

    The growing volume of environmental data from sensors and model outputs makes the development of a software infrastructure, based on modern information and telecommunication technologies, for the support of integrated scientific research in the Earth sciences an urgent and important task (Gordov et al., 2012; van der Wel, 2005). The original heterogeneity of datasets obtained from different sources and institutions not only hampers the interchange of data and analysis results but also complicates their intercomparison, decreasing the reliability of analysis results. However, modern geophysical data processing techniques allow different technological solutions to be combined when organizing such information resources. It has become generally accepted that an information-computational infrastructure should rely on the combined use of web and GIS technologies for creating applied information-computational web systems (Titov et al., 2009; Gordov et al., 2010; Gordov, Okladnikov and Titov, 2011). Using these approaches for the development of internet-accessible thematic information-computational systems, and arranging data and knowledge interchange between them, is a very promising way to create a distributed information-computational environment supporting multidisciplinary regional and global research in the Earth sciences, including analysis of climate changes and their impact on the spatial-temporal distribution and state of vegetation. We present an experimental software and hardware platform supporting the operation of a web-oriented production and research center for regional climate change investigations, which combines a modern Web 2.0 approach, GIS functionality, and capabilities for running climate and meteorological models, processing large geophysical datasets, visualization, joint software development by distributed research groups, scientific analysis, and the education of students and post-graduate students. The platform software developed (Shulgina et al., 2012; Okladnikov et al., 2012) includes dedicated modules for numerical processing of regional and global modeling results for subsequent analysis and visualization. Data preprocessing, runs of the WRF and Planet Simulator models integrated into the platform, and visualization of their results are also provided. All functions of the center are accessible to users through a web portal using a common graphical web browser, in the form of an interactive graphical user interface which provides, in particular, visualization of processing results, selection of a geographical region of interest (pan and zoom), and manipulation of data layers (order, enable/disable, feature extraction). The platform provides users with capabilities for heterogeneous geophysical data analysis, including high-resolution data, and for discovering tendencies in climatic and ecosystem changes in the framework of different multidisciplinary research efforts (Shulgina et al., 2011). Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified graphical web interface.

  1. Improvements to the APBS biomolecular solvation software suite.

    PubMed

    Jurrus, Elizabeth; Engel, Dave; Star, Keith; Monson, Kyle; Brandi, Juan; Felberg, Lisa E; Brookes, David H; Wilson, Leighton; Chen, Jiahui; Liles, Karina; Chun, Minju; Li, Peter; Gohara, David W; Dolinsky, Todd; Konecny, Robert; Koes, David R; Nielsen, Jens Erik; Head-Gordon, Teresa; Geng, Weihua; Krasny, Robert; Wei, Guo-Wei; Holst, Michael J; McCammon, J Andrew; Baker, Nathan A

    2018-01-01

    The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages, and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses the three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advances in computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this article, we discuss the models and capabilities that have recently been implemented within the APBS software package, including analytical and semi-analytical Poisson-Boltzmann solvers, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics. © 2017 The Protein Society.

  2. Software-Based Visual Loan Calculator For Banking Industry

    NASA Astrophysics Data System (ADS)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O.; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

    A visual loan calculator for the banking industry is very necessary in a modern-day banking system, which uses many design techniques for security reasons. This paper thus presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .Net (VB.Net). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.Net operating tools, and then to develop a working program which calculates the interest on any loan obtained. The VB.Net program was written and implemented, and the software proved satisfactory.
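
    The core computation of such a calculator is the standard amortized-payment formula M = P*r*(1+r)^n / ((1+r)^n - 1). The paper's tool is written in VB.Net; the sketch below is an analogous version in Python with hypothetical loan terms.

        def monthly_payment(principal, annual_rate, years):
            """Fixed monthly payment on an amortized loan."""
            r = annual_rate / 12.0           # monthly interest rate
            n = years * 12                   # number of payments
            if r == 0:
                return principal / n
            return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

        p = monthly_payment(10_000, 0.12, 5)         # hypothetical terms
        print(f"monthly payment: {p:.2f}")           # ~222.44
        print(f"total interest:  {p * 60 - 10_000:.2f}")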

  3. A Framework for the Design of Effective Graphics for Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.

    1992-01-01

    This proposal presents a visualization framework, based on a data model, that supports the production of effective graphics for scientific visualization. Visual representations are effective only if they augment comprehension of the increasing amounts of data being generated by modern computer simulations. These representations are created by taking into account the goals and capabilities of the scientist, the type of data to be displayed, and software and hardware considerations. This framework is embodied in an assistant-based visualization system to guide the scientist in the visualization process. This will improve the quality of the visualizations and decrease the time the scientist is required to spend in generating the visualizations. I intend to prove that such a framework will create a more productive environment for the analysis and interpretation of large, complex data sets.

  4. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirements for realism and real-time performance of dynamic infrared images in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for modeling infrared targets and backgrounds are offered; the flow chart of the development process for the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool and the sensor module are explained; and the real-time behavior of the software is addressed.

  5. Visual and computer software-aided estimates of Dupuytren's contractures: correlation with clinical goniometric measurements.

    PubMed

    Smith, R P; Dias, J J; Ullah, A; Bhowal, B

    2009-05-01

    Corrective surgery for Dupuytren's disease represents a significant proportion of a hand surgeon's workload. The decision to go ahead with surgery and the success of surgery requires measuring the degree of contracture of the diseased finger(s). This is performed in clinic with a goniometer, pre- and postoperatively. Monitoring the recurrence of the contracture can inform on surgical outcome, research and audit. We compared visual and computer software-aided estimation of Dupuytren's contractures to clinical goniometric measurements in 60 patients with Dupuytren's disease. Patients' hands were digitally photographed. There were 76 contracted finger joints--70 proximal interphalangeal joints and six distal interphalangeal joints. The degrees of contracture of these images were visually assessed by six orthopaedic staff of differing seniority and re-assessed with computer software. Across assessors, the Pearson correlation between the goniometric measurements and the visual estimations was 0.83 and this significantly improved to 0.88 with computer software. Reliability with intra-class correlations achieved 0.78 and 0.92 for the visual and computer-aided estimations, respectively, and with test-retest analysis, 0.92 for visual estimation and 0.95 for computer-aided measurements. Visual estimations of Dupuytren's contractures correlate well with actual clinical goniometric measurements and improve further if measured with computer software. Digital images permit monitoring of contracture after surgery and may facilitate research into disease progression and auditing of surgical technique.

  6. Image processing, geometric modeling and data management for development of a virtual bone surgery system.

    PubMed

    Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge

    2008-01-01

    This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
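
    The isosurfacing step can be sketched with VTK's Python bindings. This is a minimal stand-alone example, not the authors' drilling simulator; the DICOM directory and the iso-value of 300 (roughly cortical bone in Hounsfield units) are assumptions.

        import vtk

        reader = vtk.vtkDICOMImageReader()
        reader.SetDirectoryName("ct_scan/")      # hypothetical CT series

        surface = vtk.vtkMarchingCubes()         # volume -> polygonal skin
        surface.SetInputConnection(reader.GetOutputPort())
        surface.SetValue(0, 300)                 # iso-value separating bone

        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(surface.GetOutputPort())
        mapper.ScalarVisibilityOff()

        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)
        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()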

  7. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
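
    One ingredient of such a framework, fitting two differently parameterized topic models on the same corpus and automatically matching their topics, can be sketched as follows. The corpus and parameter choices are hypothetical, and this is not the authors' implementation.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.metrics.pairwise import cosine_similarity

        docs = ["solar wind model", "magnetic field model",
                "topic model user study", "visual analytics user feedback",
                "solar magnetic storm"]
        X = CountVectorizer().fit_transform(docs)

        lda_a = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
        lda_b = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

        # Topic-topic similarity: rows are model A's topics, columns model B's;
        # this is the automatic matching a comparison view would visualize.
        sim = cosine_similarity(lda_a.components_, lda_b.components_)
        print(sim.round(2))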

  8. Integrating Statistical Visualization Research into the Political Science Classroom

    ERIC Educational Resources Information Center

    Draper, Geoffrey M.; Liu, Baodong; Riesenfeld, Richard F.

    2011-01-01

    The use of computer software to facilitate learning in political science courses is well established. However, the statistical software packages used in many political science courses can be difficult to use and counter-intuitive. We describe the results of a preliminary user study suggesting that visually-oriented analysis software can help…

  9. Autonomous cloud based site monitoring through hydro geophysical data assimilation, processing and result delivery

    NASA Astrophysics Data System (ADS)

    Versteeg, R.; Johnson, D. V.; Rodzianko, A.; Zhou, H.; Dafflon, B.; Leger, E.; de Kleine, M.

    2017-12-01

    Understanding of processes in the shallow subsurface requires that geophysical, biogeochemical, hydrological and remote sensing datasets be assimilated, processed and interpreted. Multiple enabling software capabilities for process understanding have been developed by the science community. These include information models (ODM2), reactive transport modeling (PFLOTRAN, Modflow, CLM, Landlab), geophysical inversion (E4D, BERT), parameter estimation (PEST, DAKOTA), and visualization (VisIt, Paraview, D3, QGIS), as well as numerous tools written in Python and R for petrophysical mapping, stochastic modeling, data analysis and so on. These capabilities use data collected with sensors and analytical tools developed by multiple manufacturers, which produce many different measurements. While scientists obviously leverage tools, capabilities and lessons learned from one site at other sites, the current approach to site characterization and monitoring is very labor intensive and does not scale well. Our objective is to be able to monitor many (hundreds to thousands of) sites. This requires that monitoring can be done in a near-real-time, affordable, auditable and essentially autonomous manner. To this end we have developed a modular, vertically integrated, cloud-based software framework designed from the ground up for effective site and process monitoring. This software framework (PAF - Predictive Assimilation Framework) is multitenant software that automates the ingestion, processing and visualization of hydrological, geochemical and geophysical (ERT/DTS) data. The core organizational element of PAF is a project/user model in which the capabilities available to users are controlled by a combination of available data and access permissions. All PAF capabilities are exposed through APIs, making it easy to quickly add new components. PAF is fully integrated with newly developed autonomous electrical geophysical hardware; it thus automates the ingestion and processing of electrical geophysical data and allows co-analysis and visualization of the raw and processed data with other data of interest (e.g., soil temperature, soil moisture, precipitation). We will demonstrate current PAF capabilities and discuss future efforts.

  10. UML as a cell and biochemistry modeling language.

    PubMed

    Webb, Ken; White, Tony

    2005-06-01

    The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
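
    A small Python analogue of this top-down, object-oriented decomposition (not CellAK itself) shows containment, multiplicity and a behavior hook:

        class Compartment:
            def __init__(self, name):
                self.name = name
                self.parts = []              # containment hierarchy

            def add(self, part):
                self.parts.append(part)
                return part

            def step(self):                  # behavior propagates down the tree
                for part in self.parts:
                    part.step()

        class Mitochondrion(Compartment):
            def step(self):
                print(f"{self.name}: producing ATP")

        cell = Compartment("cell")
        cytoplasm = cell.add(Compartment("cytoplasm"))
        for i in range(3):                   # variable multiplicity, as in the model
            cytoplasm.add(Mitochondrion(f"mitochondrion-{i}"))
        cell.step()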

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kris A.; Scholtz, Jean; Whiting, Mark A.

    The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is spent discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.

  12. Optimizing Automatic Deployment Using Non-functional Requirement Annotations

    NASA Astrophysics Data System (ADS)

    Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin

    Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.

  13. VERAView

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ronald W.; Collins, Benjamin S.; Godfrey, Andrew T.

    2016-12-09

    In order to support engineering analysis of Virtual Environment for Reactor Analysis (VERA) model results, the Consortium for Advanced Simulation of Light Water Reactors (CASL) needs a tool that provides visualizations of HDF5 files that adhere to the VERAOUT specification. VERAView provides an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The Python-based software provides instantaneous 2D and 3D images, 1D plots, and alphanumeric data from VERA multi-physics simulations.
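
    Reading and plotting one slice of such a file takes only a few lines with h5py and matplotlib. The file name, group path and array layout below are assumptions, not the exact VERAOUT schema.

        import h5py
        import matplotlib.pyplot as plt

        with h5py.File("vera_output.h5", "r") as f:
            # Hypothetical dataset, assumed (assembly, axial, pin, pin) layout.
            pin_powers = f["STATE_0001/pin_powers"][...]

        plt.imshow(pin_powers[0, 0], cmap="viridis")   # one assembly, one level
        plt.colorbar(label="relative pin power")
        plt.show()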

  14. An introduction to Space Weather Integrated Modeling

    NASA Astrophysics Data System (ADS)

    Zhong, D.; Feng, X.

    2012-12-01

    The need for a software toolkit that integrates space weather models and data is one of many challenges we face when applying the models to space weather forecasting. To meet this challenge, we have developed Space Weather Integrated Modeling (SWIM), which is capable of analysis and visualization of the results from a diverse set of space weather models. SWIM has a modular design and is written in Python, using NumPy, matplotlib, and the Visualization Toolkit (VTK). SWIM provides a data management module to read a variety of spacecraft data products and the specific data format of the Solar-Interplanetary Conservation Element/Solution Element MHD model (SIP-CESE MHD model) for the study of solar-terrestrial phenomena. Data analysis, visualization and graphical user interface modules are also presented in a user-friendly way to run the integrated models and visualize 2-D and 3-D data sets interactively. With these tools we can analyze model results rapidly, locally or remotely: for example, extracting data at a specific location in time-sequence data sets, plotting interplanetary magnetic field lines, multi-slicing of solar wind speed, volume rendering of solar wind density, animating time-sequence data sets, and comparing model results with observational data. To speed up the analysis, an in-situ visualization interface is used to support visualizing the data on the fly. We have also accelerated some critical, time-consuming analysis and visualization methods with the aid of GPUs and multi-core CPUs. We have used this tool to visualize the data of the SIP-CESE MHD model in real time, and have integrated the database model of shock arrival, the shock propagation model, the Dst forecasting model and the SIP-CESE MHD model developed by the SIGMA Weather Group at the State Key Laboratory of Space Weather/CAS.
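
    One of the analysis tasks listed above, extracting the value at a fixed grid location from a time sequence of fields, looks like this with the NumPy and matplotlib stack SWIM builds on (the data here are synthetic):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        speed = rng.random((48, 64, 64)) * 400 + 300   # (time, lat, lon), km/s

        i, j = 32, 10                        # grid indices of interest
        series = speed[:, i, j]              # time series at that point

        plt.plot(series)
        plt.xlabel("time step")
        plt.ylabel("solar wind speed (km/s)")
        plt.show()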

  16. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time history displays, strip chart displays, data storage, and initializing of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base, simulate the end-to-end operation of the motion base system, provide facilities for software-in-the-loop testing, mechanical geometry and sensor data visualizations, and function generator setup and evaluation.

  17. Component Models for Semantic Web Languages

    NASA Astrophysics Data System (ADS)

    Henriksson, Jakob; Aßmann, Uwe

    Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.

  18. Evaluation of Software for Introducing Protein Structure: Visualization and Simulation

    ERIC Educational Resources Information Center

    White, Brian; Kahriman, Azmin; Luberice, Lois; Idleh, Farhia

    2010-01-01

    Communicating an understanding of the forces and factors that determine a protein's structure is an important goal of many biology and biochemistry courses at a variety of levels. Many educators use computer software that allows visualization of these complex molecules for this purpose. Although visualization is in wide use and has been associated…

  19. How Students Solve Problems in Spatial Geometry while Using a Software Application for Visualizing 3D Geometric Objects

    ERIC Educational Resources Information Center

    Widder, Mirela; Gorsky, Paul

    2013-01-01

    In schools, learning spatial geometry is usually dependent upon a student's ability to visualize three dimensional geometric configurations from two dimensional drawings. Such a process, however, often creates visual obstacles which are unique to spatial geometry. Useful software programs which realistically depict three dimensional geometric…

  20. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract the metadata needed to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of the 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.

  1. Wyoming greater sage-grouse habitat prioritization: A collection of multi-scale seasonal models and geographic information systems land management tools

    USGS Publications Warehouse

    O'Donnell, Michael S.; Aldridge, Cameron L.; Doherty, Kevin E.; Fedy, Bradley C.

    2015-01-01

    We deliver all products described herein as online geographic information system data for visualization and downloading. We outline the data properties for each model and their data inputs, describe the process of selecting appropriate data products for multifarious applications, describe all data products and software, provide newly derived model composites, and discuss how land managers may use the models to inform future sage-grouse studies and potentially refine conservation efforts. The models, software tools, and associated opportunities for novel applications of these products should provide a suite of additional, but not exclusive, tools for assessing Wyoming Greater Sage-grouse habitats, which land managers, conservationists, and scientists can apply to myriad applications.

  2. Visualization of RNA structure models within the Integrative Genomics Viewer.

    PubMed

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  3. Software architecture of biomimetic underwater vehicle

    NASA Astrophysics Data System (ADS)

    Praczyk, Tomasz; Szymak, Piotr

    2016-05-01

    Autonomous underwater vehicles are vehicles that are entirely or partly independent of human decisions. In order to obtain operational independence, the vehicles have to be equipped with specialized software. The main task of the software is to move the vehicle along a trajectory while avoiding collisions. Moreover, the software also has to manage the different devices installed on the vehicle board, e.g. to start and stop cameras, sonars, etc. In addition to the software embedded on the vehicle board, software responsible for managing the vehicle by the operator is also necessary. Its tasks are to define the mission of the vehicle, to start and stop the mission, to send emergency commands, to monitor vehicle parameters, and to control the vehicle in remotely operated mode. An important objective of the software is also to support the development and testing of other software components. To this end, a simulation environment is necessary, i.e. a simulation model of the vehicle and all its key devices, a model of the sea environment, and software to visualize the behavior of the vehicle. The paper presents the architecture of the software designed for a biomimetic autonomous underwater vehicle (BAUV) that is being constructed within the framework of a scientific project financed by the Polish National Centre for Research and Development.

  4. Academic Testing and Grading with Spreadsheet Software.

    ERIC Educational Resources Information Center

    Ho, James K.

    1987-01-01

    Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)

  5. Teaching and learning the Hodgkin-Huxley model based on software developed in NEURON’s programming language hoc

    PubMed Central

    2013-01-01

    Background We present a software tool called SENB, which allows the geometric and biophysical neuronal properties in a simple computational model of a Hodgkin-Huxley (HH) axon to be changed. The aim of this work is to develop a didactic and easy-to-use computational tool in the NEURON simulation environment, which allows graphical visualization of both the passive and active conduction parameters and the geometric characteristics of a cylindrical axon with HH properties. Results The SENB software offers several advantages for teaching and learning electrophysiology. First, SENB offers ease and flexibility in determining the number of stimuli. Second, SENB allows immediate and simultaneous visualization, in the same window and time frame, of the evolution of the electrophysiological variables. Third, SENB calculates parameters such as time and space constants, stimuli frequency, cellular area and volume, sodium and potassium equilibrium potentials, and propagation velocity of the action potentials. Furthermore, it allows the user to see all this information immediately in the main window. Finally, with just one click SENB can save an image of the main window as evidence. Conclusions The SENB software is didactic and versatile, and can be used to improve and facilitate the teaching and learning of the underlying mechanisms in the electrical activity of an axon using the biophysical properties of the squid giant axon. PMID:23675833
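
    SENB itself is written in hoc, but the kind of single-axon HH model it wraps can be sketched in a few lines of NEURON's Python interface; the geometry and stimulus values below are illustrative, not SENB's defaults.

    ```python
    # A minimal Hodgkin-Huxley axon in NEURON's Python interface; a sketch of
    # the kind of model SENB builds in hoc, not SENB itself.
    from neuron import h
    h.load_file("stdrun.hoc")            # standard run system (h.continuerun)

    axon = h.Section(name="axon")
    axon.L, axon.diam = 300, 30          # illustrative geometry (um)
    axon.insert("hh")                    # classic HH Na+/K+/leak channels

    stim = h.IClamp(axon(0))             # current pulse at one end
    stim.delay, stim.dur, stim.amp = 1, 1, 20   # ms, ms, nA (illustrative)

    t = h.Vector().record(h._ref_t)            # time course
    v = h.Vector().record(axon(0.5)._ref_v)    # membrane potential mid-axon

    h.finitialize(-65)                   # resting potential (mV)
    h.continuerun(20)                    # simulate 20 ms
    print(max(v))                        # peak of the action potential
    ```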

  6. Development of Hardware-in-the-Loop Simulation Based on Gazebo and Pixhawk for Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Nguyen, Khoa Dang; Ha, Cheolkeun

    2018-04-01

    Hardware-in-the-loop simulation (HILS) is well known as an effective approach in the design of unmanned aerial vehicle (UAV) systems, enabling engineers to test the control algorithm on a hardware board against a UAV model running in software. The performance of HILS is determined by the performance of the control algorithm, of the developed model, and of the signal transfer between the hardware and software; the result of HILS is degraded if any signal cannot be transferred to the correct destination. Therefore, this paper aims to develop middleware software that secures communications in a HILS system for testing the operation of a quad-rotor UAV. In our HILS, the Gazebo software is used to generate a nonlinear six-degrees-of-freedom (6DOF) model, sensor models, and 3D visualization for the quad-rotor UAV. Meanwhile, the flight control algorithm is designed and implemented on the Pixhawk hardware. New middleware software, referred to as the control application software (CAS), is proposed to ensure the connection and data transfer between Gazebo and Pixhawk using a multithread structure in Qt Creator. The CAS provides a graphical user interface (GUI), allowing the user to monitor the status of packet transfer, issue flight control commands, and tune parameters of the quad-rotor UAV in real time. Numerical implementations have been performed to prove the effectiveness of the middleware software CAS suggested in this paper.
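
    The CAS is built with Qt, but its core job, relaying packets between simulator and autopilot while exposing monitoring hooks, can be sketched with plain sockets and threads. Ports and addresses below are illustrative, not those used by CAS, Gazebo, or Pixhawk.

    ```python
    # Conceptual sketch of a HILS middleware relay: forward every datagram
    # received on one endpoint to another, one thread per direction.
    import socket
    import threading

    def relay(src_addr, dst_addr):
        """Forward datagrams from src_addr to dst_addr."""
        src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        src.bind(src_addr)
        dst = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            packet, _ = src.recvfrom(4096)
            dst.sendto(packet, dst_addr)   # monitoring/logging hooks go here

    # simulator -> autopilot and autopilot -> simulator (illustrative ports)
    threading.Thread(target=relay, args=(("0.0.0.0", 14550), ("127.0.0.1", 14551)), daemon=True).start()
    threading.Thread(target=relay, args=(("0.0.0.0", 14560), ("127.0.0.1", 14561)), daemon=True).start()
    threading.Event().wait()               # keep the relay threads alive
    ```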

  7. Specification and Verification of Medical Monitoring System Using Petri-nets.

    PubMed

    Majma, Negar; Babamir, Seyed Morteza

    2014-07-01

    To monitor patient behavior, data are collected from the patient's body by a medical monitoring device, and embedded software calculates the output. Incorrect calculations may endanger the patient's life if the software fails to meet the patient's requirements. Accordingly, the veracity of the software's behavior is a matter of concern in medicine; moreover, the data collected from the patient's body are fuzzy. Some methods have already dealt with monitoring medical monitoring devices; however, model-based monitoring of the fuzzy computations of such devices has been addressed less. The present paper aims to synthesize a fuzzy Petri-net (FPN) model to verify the behavior of a sample medical monitoring device, a continuous insulin infusion (INS) device, because the Petri-net (PN) is one of the formal and visual methods for verifying software behavior. The device is worn by diabetic patients, and the software calculates the INS dose and makes a decision about injection. The input and output of the infusion INS software are not crisp in the real world; therefore, we represent them as fuzzy variables and use an FPN instead of a crisp PN to model them. The paper follows three steps to synthesize an FPN for verification of the infusion INS device: (1) definition of fuzzy variables, (2) definition of fuzzy rules and (3) design of the FPN model to verify the software behavior.
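
    For intuition, here is a toy fuzzy Petri-net firing step in Python; the places, rule, and certainty factor are invented for illustration and are not taken from the paper's INS model.

    ```python
    # Toy FPN step: places hold fuzzy truth degrees instead of crisp tokens,
    # and a transition fires with a certainty factor (CF).
    places = {"glucose_high": 0.8, "insulin_low": 0.6, "inject_insulin": 0.0}

    def fire(inputs, output, cf):
        """Common FPN rule: output degree = min(input degrees) * CF."""
        degree = min(places[p] for p in inputs) * cf
        places[output] = max(places[output], degree)  # keep strongest support

    fire(inputs=["glucose_high", "insulin_low"], output="inject_insulin", cf=0.9)
    print(places["inject_insulin"])  # 0.54, fuzzy support for injecting
    ```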

  8. QMachine: commodity supercomputing in web browsers.

    PubMed

    Wilkinson, Sean R; Almeida, Jonas S

    2014-06-09

    Ongoing advancements in cloud computing provide novel opportunities in scientific computing, especially for distributed workflows. Modern web browsers can now be used as high-performance workstations for querying, processing, and visualizing genomics' "Big Data" from sources like The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) without local software installation or configuration. The design of QMachine (QM) was driven by the opportunity to use this pervasive computing model in the context of the Web of Linked Data in Biomedicine. QM is an open-source, publicly available web service that acts as a messaging system for posting tasks and retrieving results over HTTP. The illustrative application described here distributes the analyses of 20 Streptococcus pneumoniae genomes for shared suffixes. Because all analytical and data retrieval tasks are executed by volunteer machines, few server resources are required. Any modern web browser can submit those tasks and/or volunteer to execute them without installing any extra plugins or programs. A client library provides high-level distribution templates including MapReduce. This stark departure from the current reliance on expensive server hardware running "download and install" software has already gathered substantial community interest, as QM received more than 2.2 million API calls from 87 countries in 12 months. QM was found adequate to deliver the sort of scalable bioinformatics solutions that computation- and data-intensive workflows require. Paradoxically, the sandboxed execution of code by web browsers was also found to enable them, as compute nodes, to address critical privacy concerns that characterize biomedical environments.

  9. Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool

    PubMed Central

    2013-01-01

    Background System-wide profiling of genes and proteins in mammalian cells produces lists of differentially expressed genes/proteins that need to be further analyzed for their collective functions in order to extract new knowledge. Once unbiased lists of genes or proteins are generated from such experiments, these lists are used as input for computing enrichment with existing lists created from prior knowledge organized into gene-set libraries. While many enrichment analysis tools and gene-set library databases have been developed, there is still room for improvement. Results Here, we present Enrichr, an integrative web-based and mobile software application that includes new gene-set libraries, an alternative approach to rank enriched terms, and various interactive visualization approaches to display enrichment results using the JavaScript library Data Driven Documents (D3). The software can also be embedded into any tool that performs gene list analysis. We applied Enrichr to analyze nine cancer cell lines by comparing their enrichment signatures to the enrichment signatures of matched normal tissues. We observed a common pattern of upregulation of the polycomb group PRC2 and enrichment for the histone mark H3K27me3 in many cancer cell lines, as well as alterations in Toll-like receptor and interleukin signaling in K562 cells when compared with normal myeloid CD33+ cells. Such analyses provide global visualization of critical differences between normal tissues and cancer cell lines but can be applied to many other scenarios. Conclusions Enrichr is an easy-to-use, intuitive, web-based enrichment analysis tool providing various types of visualization summaries of the collective functions of gene lists. Enrichr is open source and freely available online at: http://amp.pharm.mssm.edu/Enrichr. PMID:23586463

  10. Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool.

    PubMed

    Chen, Edward Y; Tan, Christopher M; Kou, Yan; Duan, Qiaonan; Wang, Zichen; Meirelles, Gabriela Vaz; Clark, Neil R; Ma'ayan, Avi

    2013-04-15

    System-wide profiling of genes and proteins in mammalian cells produces lists of differentially expressed genes/proteins that need to be further analyzed for their collective functions in order to extract new knowledge. Once unbiased lists of genes or proteins are generated from such experiments, these lists are used as input for computing enrichment with existing lists created from prior knowledge organized into gene-set libraries. While many enrichment analysis tools and gene-set library databases have been developed, there is still room for improvement. Here, we present Enrichr, an integrative web-based and mobile software application that includes new gene-set libraries, an alternative approach to rank enriched terms, and various interactive visualization approaches to display enrichment results using the JavaScript library Data Driven Documents (D3). The software can also be embedded into any tool that performs gene list analysis. We applied Enrichr to analyze nine cancer cell lines by comparing their enrichment signatures to the enrichment signatures of matched normal tissues. We observed a common pattern of upregulation of the polycomb group PRC2 and enrichment for the histone mark H3K27me3 in many cancer cell lines, as well as alterations in Toll-like receptor and interleukin signaling in K562 cells when compared with normal myeloid CD33+ cells. Such analyses provide global visualization of critical differences between normal tissues and cancer cell lines but can be applied to many other scenarios. Enrichr is an easy-to-use, intuitive, web-based enrichment analysis tool providing various types of visualization summaries of the collective functions of gene lists. Enrichr is open source and freely available online at: http://amp.pharm.mssm.edu/Enrichr.
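
    Enrichr can also be driven programmatically through its REST endpoints. The Python sketch below follows the commonly documented addList/enrich calls for the URL above; treat the exact paths, field names, and library name as assumptions to verify against the current API documentation.

    ```python
    # Sketch of programmatic Enrichr use (endpoint details are assumptions).
    import json
    import requests

    BASE = "http://amp.pharm.mssm.edu/Enrichr"
    genes = "\n".join(["PHF14", "RBM3", "MSL1", "SMARCA4", "EZH2"])

    # Step 1: upload a gene list and receive an identifier for it.
    r = requests.post(f"{BASE}/addList",
                      files={"list": (None, genes),
                             "description": (None, "demo list")})
    user_list_id = r.json()["userListId"]

    # Step 2: compute enrichment against one gene-set library.
    r = requests.get(f"{BASE}/enrich",
                     params={"userListId": user_list_id,
                             "backgroundType": "KEGG_2015"})
    print(json.dumps(r.json(), indent=2)[:500])  # top of the ranked results
    ```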

  11. High performance visual display for HENP detectors

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-08-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detectors. For BNL this display is of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize HENP detectors with maximal performance, we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive controls, including the ability to slice, search, and mark areas of the detector. We incorporate the ability to make high-quality still images of a view of the detector and to generate animations and fly-throughs of the detector, with output to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real-time visual display for events accumulated during simulations.

  12. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. © The Author(s) 2017. Published by Oxford University Press.

  13. BioContainers: an open-source and community-driven framework for software standardization

    PubMed Central

    da Veiga Leprevost, Felipe; Grüning, Björn A.; Alves Aflitos, Saulo; Röst, Hannes L.; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C.; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I.; Perez-Riverol, Yasset

    2017-01-01

    Motivation: BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform-independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on the popular open-source Docker and rkt container frameworks, which allow software to be installed and executed under an isolated and controlled environment. It also provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). Availability and Implementation: The software is freely available at github.com/BioContainers/. Contact: yperez@ebi.ac.uk. PMID:28379341

  14. A workflow learning model to improve geovisual analytics utility

    PubMed Central

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2011-01-01

    Introduction This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. Objectives The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. Methodology The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. Results/Conclusions In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009. PMID:21983545

  15. A workflow learning model to improve geovisual analytics utility.

    PubMed

    Roth, Robert E; MacEachren, Alan M; McCabe, Craig A

    2009-01-01

    INTRODUCTION: This paper describes the design and implementation of the G-EX Portal Learn Module, a web-based, geocollaborative application for organizing and distributing digital learning artifacts. G-EX falls into the broader context of geovisual analytics, a new research area with the goal of supporting visually-mediated reasoning about large, multivariate, spatiotemporal information. Because this information is unprecedented in amount and complexity, GIScientists are tasked with the development of new tools and techniques to make sense of it. Our research addresses the challenge of implementing these geovisual analytics tools and techniques in a useful manner. OBJECTIVES: The objective of this paper is to develop and implement a method for improving the utility of geovisual analytics software. The success of software is measured by its usability (i.e., how easy the software is to use) and utility (i.e., how useful the software is). The usability and utility of software can be improved by refining the software, increasing user knowledge about the software, or both. It is difficult to achieve transparent usability (i.e., software that is immediately usable without training) of geovisual analytics software because of the inherent complexity of the included tools and techniques. In these situations, improving user knowledge about the software through the provision of learning artifacts is as important, if not more so, than iterative refinement of the software itself. Therefore, our approach to improving utility is focused on educating the user. METHODOLOGY: The research reported here was completed in two steps. First, we developed a model for learning about geovisual analytics software. Many existing digital learning models assist only with use of the software to complete a specific task and provide limited assistance with its actual application. To move beyond task-oriented learning about software use, we propose a process-oriented approach to learning based on the concept of scientific workflows. Second, we implemented an interface in the G-EX Portal Learn Module to demonstrate the workflow learning model. The workflow interface allows users to drag learning artifacts uploaded to the G-EX Portal onto a central whiteboard and then annotate the workflow using text and drawing tools. Once completed, users can visit the assembled workflow to get an idea of the kind, number, and scale of analysis steps, view individual learning artifacts associated with each node in the workflow, and ask questions about the overall workflow or individual learning artifacts through the associated forums. An example learning workflow in the domain of epidemiology is provided to demonstrate the effectiveness of the approach. RESULTS/CONCLUSIONS: In the context of geovisual analytics, GIScientists are not only responsible for developing software to facilitate visually-mediated reasoning about large and complex spatiotemporal information, but also for ensuring that this software works. The workflow learning model discussed in this paper and demonstrated in the G-EX Portal Learn Module is one approach to improving the utility of geovisual analytics software. While development of the G-EX Portal Learn Module is ongoing, we expect to release the G-EX Portal Learn Module by Summer 2009.

  16. Neo: an object model for handling electrophysiology data in multiple formats

    PubMed Central

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L.; Rodgers, Chris C.; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P.

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology. PMID:24600386

  17. Neo: an object model for handling electrophysiology data in multiple formats.

    PubMed

    Garcia, Samuel; Guarino, Domenico; Jaillet, Florent; Jennings, Todd; Pröpper, Robert; Rautenberg, Philipp L; Rodgers, Chris C; Sobolev, Andrey; Wachtler, Thomas; Yger, Pierre; Davison, Andrew P

    2014-01-01

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
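
    A minimal sketch of the Neo object model in its Python implementation: a Block contains Segments, which hold AnalogSignals and SpikeTrains. The array values here are synthetic.

    ```python
    import numpy as np
    import quantities as pq
    from neo import AnalogSignal, Block, Segment, SpikeTrain

    block = Block(name="session-1")
    seg = Segment(name="trial-1")
    block.segments.append(seg)

    # One second of a simulated membrane potential sampled at 1 kHz.
    sig = AnalogSignal(np.random.randn(1000, 1), units="mV",
                       sampling_rate=1 * pq.kHz)
    seg.analogsignals.append(sig)

    # Spike times for one unit, with the window they were recorded in.
    st = SpikeTrain([0.12, 0.45, 0.78] * pq.s, t_start=0 * pq.s, t_stop=1 * pq.s)
    seg.spiketrains.append(st)

    # Neo's IO modules read and write these same structures, which is what
    # gives tools built on Neo format interoperability essentially for free.
    ```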

  18. Open data models for smart health interconnected applications: the example of openEHR.

    PubMed

    Demski, Hans; Garde, Sebastian; Hildebrand, Claudia

    2016-10-22

    Smart Health is known as a concept that enhances networking, intelligent data processing and combining patient data with other parameters. Open data models can play an important role in creating a framework for providing interoperable data services that support the development of innovative Smart Health applications profiting from data fusion and sharing. This article describes a model-driven engineering approach based on standardized clinical information models and explores its application for the development of interoperable electronic health record systems. The following possible model-driven procedures were considered: provision of data schemes for data exchange, automated generation of artefacts for application development and native platforms that directly execute the models. The applicability of the approach in practice was examined using the openEHR framework as an example. A comprehensive infrastructure for model-driven engineering of electronic health records is presented using the example of the openEHR framework. It is shown that data schema definitions to be used in common practice software development processes can be derived from domain models. The capabilities for automatic creation of implementation artefacts (e.g., data entry forms) are demonstrated. Complementary programming libraries and frameworks that foster the use of open data models are introduced. Several compatible health data platforms are listed. They provide standard based interfaces for interconnecting with further applications. Open data models help build a framework for interoperable data services that support the development of innovative Smart Health applications. Related tools for model-driven application development foster semantic interoperability and interconnected innovative applications.
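
    To make the artefact-generation idea concrete, here is a deliberately simplified Python sketch that derives a data-entry form description from a declarative domain model. The toy model below is a stand-in for a real openEHR archetype or template, not ADL syntax.

    ```python
    # Toy "domain model" for a blood pressure observation (illustrative only).
    blood_pressure_model = {
        "name": "blood_pressure",
        "fields": [
            {"id": "systolic",  "type": "quantity", "units": "mm[Hg]", "range": (0, 300)},
            {"id": "diastolic", "type": "quantity", "units": "mm[Hg]", "range": (0, 200)},
            {"id": "position",  "type": "coded", "codes": ["sitting", "standing", "lying"]},
        ],
    }

    def generate_form(model):
        """Turn each field of the domain model into a form-widget description."""
        widgets = []
        for f in model["fields"]:
            if f["type"] == "quantity":
                widgets.append({"widget": "number", "id": f["id"], "units": f["units"],
                                "min": f["range"][0], "max": f["range"][1]})
            elif f["type"] == "coded":
                widgets.append({"widget": "dropdown", "id": f["id"], "options": f["codes"]})
        return {"form": model["name"], "widgets": widgets}

    print(generate_form(blood_pressure_model))
    ```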

  19. Cognitive search model and a new query paradigm

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghui

    2001-06-01

    This paper proposes a cognitive model in which people begin to search for pictures by using semantic content and find the right picture by judging whether its visual content is a proper visualization of the semantics desired. It is essential that human search is not just a process of matching computation on visual features but rather a process of visualization of the known semantic content. For people to search electronic images the way they do manually in the model, we suggest that querying be a semantic-driven process like design. A query-by-design paradigm is proposed in the sense that what you design is what you find. Unlike query-by-example, query-by-design allows users to specify the semantic content through an iterative and incremental interaction process so that a retrieval can start with association and identification of the given semantic content and get refined as further visual cues become available. An experimental image retrieval system, Kuafu, has been under development using the query-by-design paradigm; an iconic language is adopted.

  20. Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.

    2013-12-01

    A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache Software Foundation, is particularly well suited to developing and managing arbitrary large-scale ETL processes both for the simplicity and flexibility of its wrapper framework, as well as the detailed provenance information it exposes throughout the process. Our experience using OODT to manage processing of large-scale data sets in domains as diverse as radio astronomy, life sciences, and social network analysis demonstrates the flexibility of the framework, and the range of potential applications to a broad array of big data ETL challenges.
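
    The ETL idea itself is small enough to sketch. The Python below shows ordered, well-defined steps with simple provenance capture; it is a conceptual analogue, not Apache OODT's actual Java components or XML workflow configuration.

    ```python
    import csv
    import io
    import json

    def extract(raw_csv):                 # Extract: parse the raw input
        return list(csv.DictReader(io.StringIO(raw_csv)))

    def transform(rows):                  # Transform: clean and normalize
        return [{"id": int(r["id"]), "value": float(r["value"])} for r in rows]

    def load(rows):                       # Load: emit analysis-ready JSON
        return json.dumps(rows)

    raw = "id,value\n1,3.5\n2,7.25\n"
    data, provenance = raw, []
    for step in (extract, transform, load):   # steps run in a fixed order
        data = step(data)
        provenance.append(step.__name__)      # record what ran, in what order

    print(provenance)   # ['extract', 'transform', 'load']
    print(data)         # [{"id": 1, "value": 3.5}, {"id": 2, "value": 7.25}]
    ```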

  1. System on chip module configured for event-driven architecture

    DOEpatents

    Robbins, Kevin; Brady, Charles E.; Ashlock, Tad A.

    2017-10-17

    A system on chip (SoC) module is described herein, wherein the SoC module comprises a processor subsystem and a hardware logic subsystem. The processor subsystem and the hardware logic subsystem are in communication with one another and transmit event messages between one another. The processor subsystem executes software actors, while the hardware logic subsystem includes hardware actors; both the software actors and the hardware actors conform to an event-driven architecture, such that each receives and generates event messages.
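
    A software-only analogue of such an architecture can be sketched with mailbox queues; the actor names and event shapes below are illustrative rather than drawn from the patent.

    ```python
    # Event-driven actors: each actor owns a mailbox, consumes event
    # messages, and may emit new ones in response.
    import queue
    import threading

    mailboxes = {"controller": queue.Queue()}

    def send(actor, event):
        mailboxes[actor].put(event)

    def controller():
        while True:
            event = mailboxes["controller"].get()   # block until an event arrives
            if event["type"] == "reading":
                print("controller saw value:", event["value"])
            elif event["type"] == "stop":
                break

    t = threading.Thread(target=controller)
    t.start()
    send("controller", {"type": "reading", "value": 42})
    send("controller", {"type": "stop"})
    t.join()
    ```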

  2. Application of data mining approaches to drug delivery.

    PubMed

    Ekins, Sean; Shimada, Jun; Chang, Cheng

    2006-11-30

    Computational approaches play a key role in all areas of the pharmaceutical industry, from data mining and experimental and clinical data capture to pharmacoeconomics and adverse-event monitoring. They will likely continue to be indispensable assets along with a growing library of software applications. This is primarily due to the increasingly massive amount of biology, chemistry and clinical data, which is now entering the public domain mainly as a result of NIH and commercially funded projects. We are therefore in need of new methods for mining this mountain of data in order to enable new hypothesis generation. The computational approaches include, but are not limited to, database compilation, quantitative structure-activity relationships (QSAR), pharmacophores, network visualization models, decision trees, machine learning algorithms and multidimensional data visualization software that could be used to improve drug delivery after mining public and/or proprietary data. We will discuss some areas of unmet need in data mining for drug delivery that can be addressed with new software tools or databases of relevance to future pharmaceutical projects.

  3. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and of the temporal and spatial parameters that govern triggering effects, by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method of presetting a parameter to overcome these non-convergence issues. In addition to model fitting, the software is equipped with useful tools for examining fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computation, and an optional interface to the statistical package R is provided.
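
    For reference, the temporal core of the ETAS conditional intensity that such software fits can be written down and computed directly; the parameter values in this sketch are illustrative, not fitted estimates.

    ```python
    # Temporal ETAS conditional intensity (Ogata's formulation):
    # lambda(t) = mu + sum_{t_i < t} K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p
    import math

    def etas_intensity(t, events, mu=0.1, K=0.05, alpha=1.0, m0=3.0, c=0.01, p=1.2):
        """Conditional intensity at time t given past (time, magnitude) events."""
        triggered = sum(K * math.exp(alpha * (m - m0)) / (t - ti + c) ** p
                        for ti, m in events if ti < t)
        return mu + triggered

    events = [(0.0, 4.5), (1.2, 3.8), (3.0, 5.1)]   # (time in days, magnitude)
    print(etas_intensity(5.0, events))
    ```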

  4. [Parallel virtual reality visualization of extremely large medical datasets].

    PubMed

    Tang, Min

    2010-04-01

    On the basis of a brief description of grid computing, the essence and critical techniques of parallel visualization of extremely large medical datasets are discussed in connection with the Intranet and common-configuration computers of hospitals. Several kernel techniques are introduced, including the hardware structure, software framework, load balancing and virtual reality visualization. The Maximum Intensity Projection algorithm is realized in parallel using a common PC cluster. In the virtual reality world, three-dimensional models can be rotated, zoomed, translated and cut interactively and conveniently through the control panel built on the Virtual Reality Modeling Language (VRML). Experimental results demonstrate that this method provides promising, real-time results and can play the role of a good assistant in making clinical diagnoses.
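
    The Maximum Intensity Projection itself is a one-line array reduction, which is exactly what makes it easy to parallelize across a PC cluster; the NumPy sketch below uses a synthetic volume and a hypothetical four-way slab decomposition.

    ```python
    # MIP: for each (y, x) ray through the volume, keep the brightest voxel.
    import numpy as np

    volume = np.random.rand(128, 256, 256)   # synthetic volume: (z, y, x)
    mip = volume.max(axis=0)                 # project along z into a 2D image

    # Stand-in for the cluster decomposition: split into slabs, project each
    # independently (one slab per node), then combine the partial results.
    slabs = np.array_split(volume, 4, axis=0)
    combined = np.maximum.reduce([s.max(axis=0) for s in slabs])
    assert np.allclose(mip, combined)
    ```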

  5. Sequence alignment visualization in HTML5 without Java.

    PubMed

    Gille, Christoph; Weyand, Birgit; Gille, Andreas

    2014-01-01

    Java has been extensively used for the visualization of biological data on the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that for some tasks may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks, including (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models and (v) communication with BioDAS servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms, including touch-screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS Word and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision an increased use of HTML and JavaScript for interactive biological software. Under GPL at: http://www.bioinformatics.org/strap/toHTML/.

  6. CHEMFLO-2000: INTERACTIVE SOFTWARE FOR PREDICTING AND VISUALIZING TRANSIENT WATER AND CHEMICAL MOVEMENT IN SOILS AND ASSOCIATED UNCERTAINTIES

    EPA Science Inventory

    An interactive Java applet and a stand-alone application program will be developed based on the CHEMFLO model developed in the mid-1980s and published as an EPA report (EPA/600/8-89/076). The model solves Richards Equation for transient water movement in unsaturated soils, and so...

  7. Neutron Scattering Software

    Science.gov Websites

    A directory of neutron scattering software, including the Large Array Manipulation Program (LAMP) for IDL-based data analysis and visualization; Open Genie for interactive data analysis; ORTEP, the Oak Ridge Thermal Ellipsoid Plot program for crystal structure illustrations; a crystal-structure VRML generator; and aClimax for modeling of inelastic neutron spectroscopy using Density Functional Theory.

  8. Field-aligned currents and large-scale magnetospheric electric fields

    NASA Technical Reports Server (NTRS)

    Dangelo, N.

    1979-01-01

    The existence of field-aligned currents (FAC) at northern and southern high latitudes was confirmed by a number of observations, most clearly by experiments on the TRIAD and ISIS 2 satellites. The high-latitude FAC system is used to relate what is presently known about the large-scale pattern of high-latitude ionospheric electric fields and their relation to solar wind parameters. Recently a simplified model was presented for polar cap electric fields. The model is of considerable help in visualizing the large-scale features of FAC systems. A summary of the FAC observations is given. The simplified model is used to visualize how the FAC systems are driven by their generators.

  9. Large-scale visualization projects for teaching software engineering.

    PubMed

    Müller, Christoph; Reina, Guido; Burch, Michael; Weiskopf, Daniel

    2012-01-01

    The University of Stuttgart's software engineering major complements the traditional computer science major with more practice-oriented education. Two-semester software projects in various application areas offered by the university's different computer science institutes are a successful building block in the curriculum. With this realistic, complex project setting, students experience the practice of software engineering, including software development processes, technologies, and soft skills. In particular, visualization-based projects are popular with students. Such projects offer them the opportunity to gain profound knowledge that would hardly be possible with only regular lectures and homework assignments.

  10. Agent-based Modeling with MATSim for Hazards Evacuation Planning

    NASA Astrophysics Data System (ADS)

    Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.

    2015-12-01

    Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
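
    The static, least-cost side of this picture is easy to sketch: shortest travel time from every node of a road network to a safe zone, before congestion is considered. The network below is invented; agent-based runs such as MATSim's add congestion and agent interaction on top of this baseline.

    ```python
    # Travel time to safety for every node: Dijkstra from the safe zone over
    # reversed edges. Times are minutes; the network is illustrative.
    import heapq

    roads = {
        "beach":    [("junction", 4)],
        "junction": [("beach", 4), ("hill", 6), ("school", 3)],
        "school":   [("junction", 3), ("hill", 8)],
        "hill":     [],                       # the designated safe zone
    }

    def times_to_safety(graph, safe="hill"):
        reverse = {n: [] for n in graph}
        for n, edges in graph.items():
            for m, w in edges:
                reverse[m].append((n, w))     # flip every edge
        dist, heap = {safe: 0}, [(0, safe)]
        while heap:
            d, n = heapq.heappop(heap)
            if d > dist.get(n, float("inf")):
                continue                      # stale queue entry
            for m, w in reverse[n]:
                if d + w < dist.get(m, float("inf")):
                    dist[m] = d + w
                    heapq.heappush(heap, (d + w, m))
        return dist

    print(times_to_safety(roads))   # beach needs 10 minutes before congestion
    ```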

  11. Coupled RipCAS-DFLOW (CoRD) Software and Data Management System for Reproducible Floodplain Vegetation Succession Modeling

    NASA Astrophysics Data System (ADS)

    Turner, M. A.; Miller, S.; Gregory, A.; Cadol, D. D.; Stone, M. C.; Sheneman, L.

    2016-12-01

    We present the Coupled RipCAS-DFLOW (CoRD) modeling system created to encapsulate the workflow to analyze the effects of stream flooding on vegetation succession. CoRD provides an intuitive command-line and web interface to run DFLOW and RipCAS in succession over many years automatically, which is a challenge because, for our application, DFLOW must be run on a supercomputing cluster via the PBS job scheduler. RipCAS is a vegetation succession model, and DFLOW is a 2D open channel flow model. Data adaptors have been developed to seamlessly convert DFLOW outputs into RipCAS inputs, and vice versa. CoRD provides automated statistical analysis and visualization, plus automatic syncing of input and output files and model run metadata to the hydrological data management system HydroShare using its excellent Python REST client. This combination of technologies and data management techniques allows the results to be shared with collaborators and eventually published. Perhaps most importantly, it allows results to be easily reproduced via either the command-line or web user interface. This system is a result of collaboration between software developers and hydrologists participating in the Western Consortium for Watershed Analysis, Visualization, and Exploration (WC-WAVE). Because of the computing-intensive nature of this particular workflow, including automating job submission/monitoring and data adaptors, software engineering expertise is required. However, the hydrologists provide the software developers with a purpose and ensure a useful, intuitive tool is developed. Our hydrologists contribute software, too: RipCAS was developed from scratch by hydrologists on the team as a specialized, open-source version of the Computer Aided Simulation Model for Instream Flow and Riparia (CASiMiR) vegetation model; our hydrologists running DFLOW provided numerous examples and help with the supercomputing system. This project is written in Python, a popular language in the geosciences and a good beginner programming language, and is completely open source. It can be accessed at https://github.com/VirtualWatershed/CoRD with documentation available at http://virtualwatershed.github.io/CoRD. These facts enable continued development and use beyond the involvement of the current authors.

  12. Differential geometry based solvation model II: Lagrangian formulation.

    PubMed

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. © Springer-Verlag 2011

  13. Differential geometry based solvation model II: Lagrangian formulation

    PubMed Central

    Chen, Zhan; Baker, Nathan A.; Wei, G. W.

    2010-01-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation model. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory (SPT) of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The minimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and Poisson-Boltzmann equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface (MMS) and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. PMID:21279359
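
    For reference, the polar component that both records couple to the geometric-flow equation is the Poisson-Boltzmann equation; in its standard dimensionless textbook form (not transcribed from the papers) it reads:

    ```latex
    % Nonlinear Poisson-Boltzmann equation, dimensionless textbook form.
    \nabla \cdot \bigl(\epsilon(\mathbf{r})\,\nabla u(\mathbf{r})\bigr)
      - \bar{\kappa}^{2}(\mathbf{r})\,\sinh u(\mathbf{r}) = -\rho_{f}(\mathbf{r}),
    \qquad u(\mathbf{r}) = \frac{q\,\phi(\mathbf{r})}{k_{B}T}
    ```

    Here epsilon is the position-dependent dielectric function across the solvent-solute interface, kappa-bar the modified Debye-Hückel screening parameter, phi the electrostatic potential, and rho_f the fixed (solute) charge density.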

  14. Enhancing the T-shaped learning profile when teaching hydrology using data, modeling, and visualization activities

    NASA Astrophysics Data System (ADS)

    Sanchez, Christopher A.; Ruddell, Benjamin L.; Schiesser, Roy; Merwade, Venkatesh

    2016-03-01

    Previous research has suggested that the use of more authentic learning activities can produce more robust and durable knowledge gains. This is consistent with calls within civil engineering education, specifically hydrology, that suggest that curricula should more often include professional perspective and data analysis skills to better develop the "T-shaped" knowledge profile of a professional hydrologist (i.e., professional breadth combined with technical depth). It was expected that the inclusion of a data-driven simulation lab exercise that was contextualized within a real-world situation and more consistent with the job duties of a professional in the field would provide enhanced learning and appreciation of job duties beyond more conventional paper-and-pencil exercises in a lower-division undergraduate course. Results indicate that while students learned in both conditions, learning was enhanced for the data-driven simulation group in nearly every content area. This pattern of results suggests that the use of data-driven modeling and visualization activities can have a significant positive impact on instruction. This increase in learning likely facilitates the development of student perspective and conceptual mastery, enabling students to make better choices about their studies, while also better preparing them for work as a professional in the field.

  15. Enhancing the T-shaped learning profile when teaching hydrology using data, modeling, and visualization activities

    NASA Astrophysics Data System (ADS)

    Sanchez, C. A.; Ruddell, B. L.; Schiesser, R.; Merwade, V.

    2015-07-01

    Previous research has suggested that the use of more authentic learning activities can produce more robust and durable knowledge gains. This is consistent with calls within civil engineering education, specifically hydrology, that suggest that curricula should more often include professional perspective and data analysis skills to better develop the "T-shaped" knowledge profile of a professional hydrologist (i.e., professional breadth combined with technical depth). It was expected that the inclusion of a data-driven simulation lab exercise that was contextualized within a real-world situation and more consistent with the job duties of a professional in the field would provide enhanced learning and appreciation of job duties beyond more conventional paper-and-pencil exercises in a lower-division undergraduate course. Results indicate that while students learned in both conditions, learning was enhanced for the data-driven simulation group in nearly every content area. This pattern of results suggests that the use of data-driven modeling and visualization activities can have a significant positive impact on instruction. This increase in learning likely facilitates the development of student perspective and conceptual mastery, enabling students to make better choices about their studies, while also better preparing them for work as a professional in the field.

  16. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
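
    The probe-placement step described above amounts to covering a segmented target volume with ablation spheres. A minimal sketch of one greedy covering strategy follows; it is not the RFAST algorithm itself, and the voxel mask, radius, and helper names are illustrative:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def greedy_sphere_cover(target_mask, radius_vox, max_spheres=20):
            """Greedily place fixed-radius spheres until a boolean target
            mask is covered (assumes the mask does not fill the volume)."""
            r = int(radius_vox)
            zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
            ball = (zz**2 + yy**2 + xx**2) <= radius_vox**2
            uncovered = target_mask.copy()
            centers = []
            while uncovered.any() and len(centers) < max_spheres:
                # Place the next sphere at the uncovered voxel deepest
                # inside the remaining target region.
                depth = distance_transform_edt(uncovered)
                c = np.unravel_index(int(np.argmax(depth)), depth.shape)
                centers.append(c)
                # Mark voxels inside the sphere as covered, clipping the
                # ball at the volume edges.
                lo = [max(ci - r, 0) for ci in c]
                hi = [min(ci + r + 1, n) for ci, n in zip(c, uncovered.shape)]
                blo = [l - (ci - r) for l, ci in zip(lo, c)]
                bhi = [b + (h - l) for b, h, l in zip(blo, hi, lo)]
                vol = tuple(slice(l, h) for l, h in zip(lo, hi))
                bsl = tuple(slice(b, e) for b, e in zip(blo, bhi))
                uncovered[vol] &= ~ball[bsl]
            return centers

        mask = np.zeros((40, 40, 40), dtype=bool)
        mask[10:30, 10:30, 10:30] = True          # toy "tumor" region
        print(greedy_sphere_cover(mask, 8))

    A clinical tool would additionally weight probe trajectories, overlap margins, and critical-structure avoidance; the sketch only captures the covering idea.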

  17. Geowall: Investigations into low-cost stereo display technologies

    USGS Publications Warehouse

    Steinwand, Daniel R.; Davis, Brian; Weeks, Nathan

    2003-01-01

    Recently, the combination of new projection technology, fast, low-cost graphics cards, and Linux-powered personal computers has made it possible to provide a stereoprojection and stereoviewing system that is much more affordable than previous commercial solutions. These Geowall systems are low-cost visualization systems built with commodity off-the-shelf components, running on open-source (and other) operating systems and using open-source application software. In short, they are "Beowulf-class" visualization systems that provide a cost-effective way for the U.S. Geological Survey to broaden participation in the visualization community and view stereoimagery and three-dimensional models.

  18. An object-oriented framework for medical image registration, fusion, and visualization.

    PubMed

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework illustrate its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
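
    The model-view-controller split described here reduces to a small amount of wiring: the model notifies registered views whenever its data changes. A generic sketch of that pattern (these are not the framework's actual classes):

        class Model:
            """Holds image/registration state and notifies observers on change."""
            def __init__(self):
                self._observers = []
                self._data = None

            def attach(self, view):
                self._observers.append(view)

            def set_data(self, data):
                self._data = data
                for view in self._observers:
                    view.update(self._data)

        class ConsoleView:
            """A trivial view; a real one would redraw a fused image."""
            def update(self, data):
                print("view refreshed with:", data)

        class Controller:
            """Translates user actions into model updates."""
            def __init__(self, model):
                self.model = model

            def on_load(self, path):
                self.model.set_data(f"volume loaded from {path}")

        model = Model()
        model.attach(ConsoleView())
        Controller(model).on_load("study1.img")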

  19. Integrating Visualization Applications, such as ParaView, into HEP Software Frameworks for In-situ Event Displays

    NASA Astrophysics Data System (ADS)

    Lyon, A. L.; Kowalkowski, J. B.; Jones, C. D.

    2017-10-01

    ParaView is a high-performance visualization application not widely used in High Energy Physics (HEP). It is a long-standing open source project led by Kitware and involves several Department of Energy (DOE) and Department of Defense (DOD) laboratories. Furthermore, it has been adopted by many DOE supercomputing centers and other sites. ParaView achieves unique speed and efficiency by using state-of-the-art techniques developed by the academic visualization community that are often not found in applications written by the HEP community. In-situ visualization of events, where event details are visualized during processing/analysis, is a common task for experiment software frameworks. Kitware supplies Catalyst, a library that enables scientific software to serve visualization objects to client ParaView viewers, yielding a real-time event display. The connection of ParaView to the Fermilab art framework is described and the capabilities it brings are discussed.

  20. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets evidence the need for new versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and do a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.
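
    Getting a simulation cube into Blender this way hinges on the voxel file layout. Below is a minimal writer for the legacy .bvox format, assuming the common convention of a four-int32 header followed by float32 values scaled to [0, 1]; the ordering and file name are assumptions to verify against Blender's importer:

        import numpy as np

        def write_bvox(path, cube):
            """Write a 3D NumPy array as a single-frame Blender .bvox file."""
            cube = np.asarray(cube, dtype=np.float32)
            nx, ny, nz = cube.shape
            # Normalize to [0, 1], since voxel densities are read in that range.
            lo, hi = cube.min(), cube.max()
            data = (cube - lo) / (hi - lo) if hi > lo else np.zeros_like(cube)
            with open(path, "wb") as f:
                # Header: grid resolution plus the number of frames (1 here).
                np.array([nx, ny, nz, 1], dtype=np.int32).tofile(f)
                # Voxel values with x varying fastest (Fortran order) -- an
                # assumption about the layout the voxel reader expects.
                data.ravel(order="F").astype(np.float32).tofile(f)

        write_bvox("density.bvox", np.random.rand(64, 64, 64))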

  1. MONGKIE: an integrated tool for network analysis and visualization for multi-omics data.

    PubMed

    Jang, Yeongjun; Yu, Namhee; Seo, Jihae; Kim, Sun; Lee, Sanghyuk

    2016-03-18

    Network-based integrative analysis is a powerful technique for extracting biological insights from multilayered omics data such as somatic mutations, copy number variations, and gene expression data. However, integrated analysis of multi-omics data is quite complicated and can hardly be done in an automated way. Thus, a powerful interactive visual mining tool supporting diverse analysis algorithms for identification of driver genes and regulatory modules is much needed. Here, we present a software platform that integrates network visualization with omics data analysis tools seamlessly. The visualization unit supports various options for displaying multi-omics data as well as unique network models for describing sophisticated biological networks such as complex biomolecular reactions. In addition, we implemented diverse in-house algorithms for network analysis including network clustering and over-representation analysis. Novel functions include facile definition and optimized visualization of subgroups, comparison of a series of data sets in an identical network by data-to-visual mapping and subsequent overlaying function, and management of custom interaction networks. Utility of MONGKIE for network-based visual data mining of multi-omics data was demonstrated by analysis of the TCGA glioblastoma data. MONGKIE was developed in Java based on the NetBeans plugin architecture, thus being OS-independent with intrinsic support of module extension by third-party developers. We believe that MONGKIE would be a valuable addition to network analysis software by supporting many unique features and visualization options, especially for analysing multi-omics data sets in cancer and other diseases.
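
    Of the analyses listed, over-representation analysis is the most self-contained to illustrate: given a network module, it asks whether a pathway's genes overlap the module more than chance predicts. A minimal version using a hypergeometric test (variable names are illustrative, not MONGKIE's API):

        from scipy.stats import hypergeom

        def overrepresentation_p(background, pathway_genes, module_genes):
            """One-sided hypergeometric p-value for pathway enrichment."""
            M = len(background)                                 # background size
            n = len(pathway_genes & background)                 # pathway genes
            N = len(module_genes & background)                  # module size
            k = len(pathway_genes & module_genes & background)  # overlap
            # P(X >= k) for X ~ Hypergeom(M, n, N)
            return hypergeom.sf(k - 1, M, n, N)

        background = set(f"g{i}" for i in range(2000))
        pathway = set(f"g{i}" for i in range(40))
        module = set(f"g{i}" for i in range(0, 200, 2))
        print(overrepresentation_p(background, pathway, module))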

  2. A versatile stereoscopic visual display system for vestibular and oculomotor research.

    PubMed

    Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S

    1998-01-01

    Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low-cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software-driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
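
    The geometric trick such a display relies on, shifting each eye's image so the required vergence matches a chosen virtual distance, is easy to state. A sketch of the per-eye horizontal offsets for a screen at distance D and a target at virtual distance d (the symbols and function names are mine, not the authors'):

        import math

        def per_eye_offsets(ipd, screen_dist, target_dist):
            """On-screen horizontal image positions (left, right), in the
            same units as ipd, for a target straight ahead at target_dist."""
            p = ipd / 2.0
            # The eye at x = +p sees the target's image at x = p * (1 - D/d);
            # the left eye mirrors it. d = D gives zero disparity.
            shift = p * (1.0 - screen_dist / target_dist)
            return (-shift, +shift)

        def vergence_angle_deg(ipd, target_dist):
            """Total vergence angle required for a target at target_dist."""
            return math.degrees(2.0 * math.atan((ipd / 2.0) / target_dist))

        # Example: 6.4 cm IPD, screen at 1 m, virtual target at 2 m.
        print(per_eye_offsets(0.064, 1.0, 2.0), vergence_angle_deg(0.064, 2.0))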

  3. Size matters: large objects capture attention in visual search.

    PubMed

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study had satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner, independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or, alternatively, consistent with a flexible, goal-directed mechanism of saliency detection.

  4. Cyberinfrastructure to Support Collaborative and Reproducible Computational Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Castronova, A. M.; Bandaragoda, C.; Morsy, M. M.; Sadler, J. M.; Essawy, B.; Tarboton, D. G.; Malik, T.; Nijssen, B.; Clark, M. P.; Liu, Y.; Wang, S. W.

    2017-12-01

    Creating cyberinfrastructure to support reproducibility of computational hydrologic models is an important research challenge. Addressing this challenge requires open and reusable code and data with machine and human readable metadata, organized in ways that allow others to replicate results and verify published findings. Specific digital objects that must be tracked for reproducible computational hydrologic modeling include (1) raw initial datasets, (2) data processing scripts used to clean and organize the data, (3) processed model inputs, (4) model results, and (5) the model code with an itemization of all software dependencies and computational requirements. HydroShare is a cyberinfrastructure under active development designed to help users store, share, and publish digital research products in order to improve reproducibility in computational hydrology, with an architecture supporting hydrologic-specific resource metadata. Researchers can upload data required for modeling, add hydrology-specific metadata to these resources, and use the data directly within HydroShare.org for collaborative modeling using tools like CyberGIS, Sciunit-CLI, and JupyterHub that have been integrated with HydroShare to run models using notebooks, Docker containers, and cloud resources. Current research aims to implement the Structure For Unifying Multiple Modeling Alternatives (SUMMA) hydrologic model within HydroShare to support hypothesis-driven hydrologic modeling while also taking advantage of the HydroShare cyberinfrastructure. The goal of this integration is to create the cyberinfrastructure that supports hypothesis-driven model experimentation, education, and training efforts by lowering barriers to entry, reducing the time spent on informatics technology and software development, and supporting collaborative research within and across research groups.
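
    To give a concrete sense of the workflow, uploading one of the tracked digital objects to HydroShare can be scripted. The sketch below assumes the hs_restclient Python package and its createResource call; the credentials, file names, and exact argument details are placeholders to check against the current client documentation:

        from hs_restclient import HydroShare, HydroShareAuthBasic

        # Credentials and file names here are placeholders.
        auth = HydroShareAuthBasic(username="user", password="secret")
        hs = HydroShare(auth=auth)

        # Publish a processed model-input file as a new resource with
        # minimal discovery metadata attached.
        resource_id = hs.createResource(
            "CompositeResource",
            "SUMMA model inputs for test basin",
            resource_file="forcing_data.nc",
            keywords=["SUMMA", "reproducibility"],
            abstract="Processed model inputs tracked for a reproducible run.",
        )
        print("Created resource:", resource_id)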

  5. Data visualization and analysis tools for the MAVEN mission

    NASA Astrophysics Data System (ADS)

    Harter, B.; De Wolfe, A. W.; Putnam, B.; Brain, D.; Chaffin, M.

    2016-12-01

    The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. We have developed new software tools for exploring and analyzing the science data. Our open-source Python toolkit for working with data from MAVEN and other missions is based on the widely-used "tplot" IDL toolkit. We have replicated all of the basic tplot functionality in Python, and use the bokeh and matplotlib libraries to generate interactive line plots and spectrograms, providing additional functionality beyond the capabilities of IDL graphics. These Python tools are generalized to work with missions beyond MAVEN, and our software is available on Github. We have also been exploring 3D graphics as a way to better visualize the MAVEN science data and models. We have constructed a 3D visualization of MAVEN's orbit using the CesiumJS library, which not only allows viewing of MAVEN's orientation and position, but also allows the display of selected science data sets and their variation over time.
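
    The tplot idiom referred to above is a two-step store-then-plot pattern: a named variable is registered once, then any number of plot calls refer to it by name. A short sketch of that pattern, assuming the pytplot package that grew out of this effort (store_data/tplot/options follow the IDL originals; the data are fabricated stand-ins):

        import numpy as np
        import pytplot

        # Stand-in data: one day of 10-second samples (placeholder times).
        times = np.arange(0, 86400, 10).astype(float)
        density = 5.0 + np.sin(times / 3600.0)

        # Store a named variable, then render it; plot options attach
        # to the name rather than to a figure object.
        pytplot.store_data("ion_density", data={"x": times, "y": density})
        pytplot.options("ion_density", "ytitle", "Density [cm^-3]")
        pytplot.tplot("ion_density")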

  6. Event Display for the Visualization of CMS Events

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program which was, by design, directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback, and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event display operation, simple plugins are registered into the system to perform conversion from EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering as well as collection-level filtering are supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.
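
    The registration mechanism described, small plugins that convert one collection type into drawable objects, follows a plain registry pattern. A language-neutral sketch in Python (Fireworks itself is C++, and every name below is illustrative rather than its API):

        # Registry mapping a collection type name to a builder that
        # produces drawable primitives for the event display.
        BUILDERS = {}

        def register_builder(collection_type):
            """Decorator: register a conversion plugin for one collection type."""
            def wrap(fn):
                BUILDERS[collection_type] = fn
                return fn
            return wrap

        @register_builder("TrackCollection")
        def build_tracks(collection):
            # Convert each track into a polyline the graphics layer can draw.
            return [("polyline", trk["points"]) for trk in collection]

        def visualize_event(event):
            """Look up a builder for every collection present and gather visuals."""
            visuals = []
            for ctype, collection in event.items():
                builder = BUILDERS.get(ctype)
                if builder:
                    visuals.extend(builder(collection))
            return visuals

        event = {"TrackCollection": [{"points": [(0, 0, 0), (1, 2, 3)]}]}
        print(visualize_event(event))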

  7. Agile Acceptance Test–Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software

    PubMed Central

    Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-01-01

    Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test–driven development and automated regression testing promotes reliability. Test–driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test–driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test–driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test–driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test–driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test–driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922

  8. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    PubMed

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
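
    The executable-requirements idea transfers outside FitNesse as well: each row of a clinician-reviewed table becomes one automated case. A rough pytest analogue is sketched below; the rule function and its fields are invented for illustration and are not the authors' CDS configuration:

        import pytest

        def swallow_screen_alert_fires(setting, route, stroke_suspected, screen_done):
            """Stand-in for the CDS rule: alert ED nurses before oral meds
            in suspected stroke when no swallowing assessment is on file."""
            return (setting == "ED" and route == "oral"
                    and stroke_suspected and not screen_done)

        # One row per expected behavior, mirroring a spreadsheet acceptance table.
        CASES = [
            ("ED",     "oral", True,  False, True),   # fires: oral med, no screen
            ("ED",     "oral", True,  True,  False),  # suppressed: screen done
            ("ED",     "IV",   True,  False, False),  # suppressed: non-oral route
            ("Clinic", "oral", True,  False, False),  # suppressed: wrong setting
        ]

        @pytest.mark.parametrize("setting,route,stroke,screened,expected", CASES)
        def test_swallow_screen_advisory(setting, route, stroke, screened, expected):
            assert swallow_screen_alert_fires(setting, route, stroke, screened) == expected

    The same rows double as requirements before the build and as a regression suite afterward, which is the dual use the abstract emphasizes.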

  9. Bringing the Unidata IDV to the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, W. I.; Oxelson Ganter, J.

    2015-12-01

    Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While traditional software engineering provides a suite of tools and methodologies which may mitigate this issue, they are typically ignored by developers lacking a background in software engineering. Causing further problems, these methodologies are best applied at the start of a project; trying to apply them to an existing, mature project can require an immense effort. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. As a result of these issues, there exists a large body of software which is simultaneously critical to the scientists who depend upon it and yet increasingly difficult to maintain. The solution to this problem was partially provided with the advent of cloud computing: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little to no re-engineering required. When coupled with containerization technology such as Docker, we are able to easily bring the same visualization software to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform and include a brief discussion of the underlying technologies involved.

  10. Open Source Software in Teaching Physics: A Case Study on Vector Algebra and Visual Representations

    ERIC Educational Resources Information Center

    Cataloglu, Erdat

    2006-01-01

    This study aims to report the effort on teaching vector algebra using free open source software (FOSS). Recent studies showed that students have difficulties in learning basic physics concepts. Constructivist learning theories suggest the use of visual and hands-on activities in learning. We will report on the software used for this purpose. The…

  11. The Orion GN and C Data-Driven Flight Software Architecture for Automated Sequencing and Fault Recovery

    NASA Technical Reports Server (NTRS)

    King, Ellis; Hart, Jeremy; Odegard, Ryan

    2010-01-01

    The Orion Crew Exploration Vehicle (CEV) is being designed to include significantly more automation capability than either the Space Shuttle or the International Space Station (ISS). In particular, the vehicle flight software has requirements to accommodate increasingly automated missions throughout all phases of flight. A data-driven flight software architecture will provide an evolvable automation capability to sequence through Guidance, Navigation & Control (GN&C) flight software modes and configurations while maintaining the required flexibility and human control over the automation. This flexibility is a key aspect needed to address the maturation of operational concepts, to permit ground and crew operators to gain trust in the system and mitigate unpredictability in human spaceflight. To allow for mission flexibility and reconfigurability, a data-driven approach is being taken to load the mission event plan as well as the flight software artifacts associated with the GN&C subsystem. A database of GN&C-level sequencing data is presented which manages and tracks the mission-specific and algorithm parameters to provide a capability to schedule GN&C events within mission segments. The flight software data schema for performing automated mission sequencing is presented with a concept of operations for interactions with ground and onboard crew members. A prototype architecture for fault identification, isolation, and recovery interactions with the automation software is presented and discussed as a forward work item.
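
    A data-driven sequencer of this kind boils down to table lookups rather than hard-coded branching: the mission event plan is data, and the software walks it. A toy sketch of the pattern (the mode names, fields, and thresholds are invented, not Orion's schema):

        # Mission event plan loaded as data; each entry names the mode to
        # command and the condition that advances the sequence.
        EVENT_PLAN = [
            {"mode": "COAST",        "advance_when": lambda s: s["altitude_km"] < 120},
            {"mode": "ENTRY",        "advance_when": lambda s: s["mach"] < 1.5},
            {"mode": "CHUTE_DEPLOY", "advance_when": lambda s: s["altitude_km"] < 3},
            {"mode": "LANDING",      "advance_when": lambda s: False},  # terminal
        ]

        def step_sequencer(index, state):
            """Return the commanded mode and the (possibly advanced) plan index."""
            entry = EVENT_PLAN[index]
            if entry["advance_when"](state) and index + 1 < len(EVENT_PLAN):
                index += 1
            return EVENT_PLAN[index]["mode"], index

        idx = 0
        for state in [{"altitude_km": 300, "mach": 25},
                      {"altitude_km": 100, "mach": 20},
                      {"altitude_km": 30,  "mach": 1.2},
                      {"altitude_km": 2,   "mach": 0.4}]:
            mode, idx = step_sequencer(idx, state)
            print(mode)

    Changing the mission then means editing the plan table, not the flight code, which is the evolvability argument the abstract makes.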

  12. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  13. Services Oriented Smart City Platform Based on 3D City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution that characterizes all the disciplines involved within the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to fully develop the Smart City concept across a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context, 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Services Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-Cities context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting Smart-Cities applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amounts of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.
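
    LOD switching of the kind described is typically driven by projected screen-space error: a coarser model is acceptable while its geometric error projects to less than a pixel budget. A small sketch of that test (the LOD table, thresholds, and field names are invented):

        import math

        def projected_error_px(world_error_m, distance_m, fov_y_deg, viewport_h_px):
            """Project a model's geometric error (meters) to screen pixels."""
            return (world_error_m * viewport_h_px /
                    (2.0 * distance_m * math.tan(math.radians(fov_y_deg) / 2.0)))

        def choose_lod(lods, distance_m, fov_y_deg=60.0, viewport_h_px=1080,
                       budget_px=1.0):
            """Pick the coarsest LOD whose projected error fits the budget.

            lods: list of (name, world_error_m), ordered coarsest first.
            """
            for name, err in lods:
                if projected_error_px(err, distance_m, fov_y_deg,
                                      viewport_h_px) <= budget_px:
                    return name
            return lods[-1][0]  # fall back to the finest model

        CITY_LODS = [("LOD1_blocks", 5.0), ("LOD2_roofs", 1.0), ("LOD3_facades", 0.2)]
        for d in (5000.0, 800.0, 100.0):
            print(d, choose_lod(CITY_LODS, d))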

  14. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the earth sciences convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy to use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files. It also provides links to popular visualization tools so that data can be quickly browsed. Envision is a public domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with Envision software.
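
    The metadata browsing Envision provides can be approximated today in a few lines with the netCDF4 package, since netCDF files are self-describing; a minimal sketch (the file and variable names are placeholders):

        from netCDF4 import Dataset

        # Open a data file read-only and walk its self-describing metadata.
        with Dataset("model_output.nc", "r") as ds:
            print("Global attributes:",
                  {a: ds.getncattr(a) for a in ds.ncattrs()})
            for name, var in ds.variables.items():
                # Each variable carries its dimensions, shape, and attributes.
                print(name, var.dimensions, var.shape,
                      {a: var.getncattr(a) for a in var.ncattrs()})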

  15. Advances in the production of freeform optical surfaces

    NASA Astrophysics Data System (ADS)

    Tohme, Yazid E.; Luniya, Suneet S.

    2007-05-01

    Recent market demands for free-form optics have challenged the industry to find new methods and techniques to manufacture free-form optical surfaces with a high level of accuracy and reliability. Production techniques are becoming a mix of multi-axis single-point diamond machining centers or deterministic ultra-precision grinding centers coupled with capable measurement systems to accomplish the task. It has been determined that a complex software tool is required to seamlessly integrate all aspects of the manufacturing process chain. Advances in computational power and improved performance of computer-controlled precision machinery have driven the use of such software programs to measure, visualize, analyze, produce, and re-validate the 3D free-form design, thus making the process of manufacturing such complex surfaces a viable task. Consolidation of the entire production cycle in a comprehensive software tool that can interact with all systems in the design, production, and measurement phases will enable manufacturers to solve these complex challenges, providing improved product quality, simplified processes, and enhanced performance. The work being presented describes the latest advancements in developing such a software package for the entire fabrication process chain for aspheric and free-form shapes. It applies a rational B-spline based kernel to transform an optical design in the form of a parametrical definition (optical equation), a standard CAD format, or a cloud of points into a central format that drives the simulation. This software tool creates a closed loop for the fabrication process chain. It integrates surface analysis and compensation, tool path generation, and measurement analysis in one package.
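
    The central-format idea, turning an equation or point cloud into one spline representation that every downstream step evaluates, can be illustrated with SciPy. The sketch uses a plain bicubic spline rather than the rational B-spline kernel the paper describes, and the surface and names are illustrative:

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Sample a freeform sag z(x, y) on a grid (a toy surface here).
        x = np.linspace(-10, 10, 41)
        y = np.linspace(-10, 10, 41)
        X, Y = np.meshgrid(x, y, indexing="ij")
        Z = X**2 / 200.0 + Y**2 / 150.0 + 0.001 * X * Y  # placeholder optic

        # Fit a bicubic spline as the common internal surface representation.
        surf = RectBivariateSpline(x, y, Z, kx=3, ky=3)

        # Downstream consumers (tool paths, metrology comparison) evaluate
        # the same representation: heights and local slopes anywhere.
        print(surf.ev(1.5, -2.0))        # sag at a point
        print(surf.ev(1.5, -2.0, dx=1))  # dz/dx, e.g. for cutter compensation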

  16. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve root, and vertebral artery that can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.
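
    The surface-rendering step such models rely on, extracting a mesh from the CT volume at a chosen intensity, is commonly done with marching cubes. A sketch using scikit-image (the volume and threshold are placeholders, not the study's data):

        import numpy as np
        from skimage import measure

        # Placeholder CT-like volume: a high-intensity sphere in a noisy field.
        zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
        volume = (np.sqrt(xx**2 + yy**2 + zz**2) < 20).astype(float)
        volume += 0.05 * np.random.rand(*volume.shape)

        # Extract an isosurface mesh at the chosen threshold; spacing carries
        # the CT voxel size (mm) so the mesh comes out in physical units.
        verts, faces, normals, values = measure.marching_cubes(
            volume, level=0.5, spacing=(1.0, 1.0, 1.0))
        print(verts.shape, faces.shape)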

  17. Ten recommendations for software engineering in research.

    PubMed

    Hastings, Janna; Haug, Kenneth; Steinbeck, Christoph

    2014-01-01

    Research in the context of data-driven science requires a backbone of well-written software, but scientific researchers are typically not trained at length in software engineering, the principles for creating better software products. To address this gap, in particular for young researchers new to programming, we give ten recommendations to ensure the usability, sustainability and practicality of research software.

  18. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets

    PubMed Central

    Johnson, Z. P.; Eady, R. D.; Ahmad, S. F.; Agravat, S.; Morris, T.; Else, J.; Lank, S. M.; Wiseman, R. W.; O’Connor, D. H.; Penedo, M. C. T.; Larsen, C. P.

    2012-01-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo. PMID:22080300

  19. Immunogenetic Management Software: a new tool for visualization and analysis of complex immunogenetic datasets.

    PubMed

    Johnson, Z P; Eady, R D; Ahmad, S F; Agravat, S; Morris, T; Else, J; Lank, S M; Wiseman, R W; O'Connor, D H; Penedo, M C T; Larsen, C P; Kean, L S

    2012-04-01

    Here we describe the Immunogenetic Management Software (IMS) system, a novel web-based application that permits multiplexed analysis of complex immunogenetic traits that are necessary for the accurate planning and execution of experiments involving large animal models, including nonhuman primates. IMS is capable of housing complex pedigree relationships, microsatellite-based MHC typing data, as well as MHC pyrosequencing expression analysis of class I alleles. It includes a novel, automated MHC haplotype naming algorithm and has accomplished an innovative visualization protocol that allows users to view multiple familial and MHC haplotype relationships through a single, interactive graphical interface. Detailed DNA and RNA-based data can also be queried and analyzed in a highly accessible fashion, and flexible search capabilities allow experimental choices to be made based on multiple, individualized and expandable immunogenetic factors. This web application is implemented in Java, MySQL, Tomcat, and Apache, with supported browsers including Internet Explorer and Firefox on Windows and Safari on Mac OS. The software is freely available for distribution to noncommercial users by contacting Leslie.kean@emory.edu. A demonstration site for the software is available at http://typing.emory.edu/typing_demo, user name: imsdemo7@gmail.com and password: imsdemo.

  20. A New Framework for Software Visualization: A Multi-Layer Approach

    DTIC Science & Technology

    2006-09-01

    ... primary target is an exploration of the current state of the area so that we can discover the challenges and propose solutions for them. The study ... Small define both areas of study to collectively be a part of Software Visualization. ... 'Visual Programming' (VP) refers to ... founded taxonomy, with the proper characteristics, can further investigation in any field of study. A common language or terminology and the existence of ...
